Test Report: KVM_Linux_crio 18647

cbf61390ee716906db88190ad6530e4e486e1432:2024-04-16:34045

Failed tests (30/327)

Order  Failed test  Duration (s)
39 TestAddons/parallel/Ingress 163.62
53 TestAddons/StoppedEnableDisable 154.44
155 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.72
157 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.18
172 TestMultiControlPlane/serial/StopSecondaryNode 141.92
174 TestMultiControlPlane/serial/RestartSecondaryNode 60.05
176 TestMultiControlPlane/serial/RestartClusterKeepsNodes 371.88
179 TestMultiControlPlane/serial/StopCluster 141.95
239 TestMultiNode/serial/RestartKeepsNodes 333.84
241 TestMultiNode/serial/StopMultiNode 141.62
248 TestPreload 193.56
256 TestKubernetesUpgrade 390.2
293 TestStartStop/group/old-k8s-version/serial/FirstStart 269.22
307 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.41
309 TestStartStop/group/no-preload/serial/Stop 139.04
323 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
324 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.39
326 TestStartStop/group/old-k8s-version/serial/DeployApp 0.52
327 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 94.87
331 TestStartStop/group/embed-certs/serial/Stop 139.06
334 TestStartStop/group/old-k8s-version/serial/SecondStart 714.33
335 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
337 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.16
338 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.15
339 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.12
340 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.4
341 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 375.4
342 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 543.58
343 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 328.9
344 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 178.2
TestAddons/parallel/Ingress (163.62s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-045739 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-045739 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-045739 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [690e4a21-f628-4663-bbca-1e4a84f05ea5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [690e4a21-f628-4663-bbca-1e4a84f05ea5] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 18.005281673s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-045739 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-045739 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.402796461s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-045739 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-045739 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.182
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-045739 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-045739 addons disable ingress-dns --alsologtostderr -v=1: (1.96070542s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-045739 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-045739 addons disable ingress --alsologtostderr -v=1: (8.363396714s)
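The curl check above timed out: ssh reports the remote command's exit code, and 28 lines up with curl's "operation timed out" code, so nginx never answered through the ingress controller within the test's limit. A minimal sketch for re-running the same probe by hand against this profile (the profile and context name addons-045739 come from this log; the --max-time value is an assumption added for the manual check, not part of the test):

	kubectl --context addons-045739 -n ingress-nginx get pods
	kubectl --context addons-045739 -n default get ingress,svc,pods
	out/minikube-linux-amd64 -p addons-045739 ssh "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
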
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-045739 -n addons-045739
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-045739 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-045739 logs -n 25: (1.647346297s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p download-only-533993                                                                     | download-only-533993 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:38 UTC | 15 Apr 24 23:38 UTC |
	| delete  | -p download-only-513879                                                                     | download-only-513879 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:38 UTC | 15 Apr 24 23:38 UTC |
	| delete  | -p download-only-779412                                                                     | download-only-779412 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:38 UTC | 15 Apr 24 23:38 UTC |
	| delete  | -p download-only-533993                                                                     | download-only-533993 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:38 UTC | 15 Apr 24 23:38 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-483637 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:38 UTC |                     |
	|         | binary-mirror-483637                                                                        |                      |         |                |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |                |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |                |                     |                     |
	|         | http://127.0.0.1:35999                                                                      |                      |         |                |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |                |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |                |                     |                     |
	| delete  | -p binary-mirror-483637                                                                     | binary-mirror-483637 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:38 UTC | 15 Apr 24 23:38 UTC |
	| addons  | disable dashboard -p                                                                        | addons-045739        | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:38 UTC |                     |
	|         | addons-045739                                                                               |                      |         |                |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-045739        | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:38 UTC |                     |
	|         | addons-045739                                                                               |                      |         |                |                     |                     |
	| start   | -p addons-045739 --wait=true                                                                | addons-045739        | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:38 UTC | 15 Apr 24 23:42 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |                |                     |                     |
	|         | --addons=registry                                                                           |                      |         |                |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |                |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |                |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |                |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |                |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |                |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |                |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |                |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |                |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |                |                     |                     |
	|         |  --container-runtime=crio                                                                   |                      |         |                |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |                |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |                |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |                |                     |                     |
	| addons  | addons-045739 addons                                                                        | addons-045739        | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:42 UTC | 15 Apr 24 23:42 UTC |
	|         | disable metrics-server                                                                      |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-045739        | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:42 UTC | 15 Apr 24 23:42 UTC |
	|         | addons-045739                                                                               |                      |         |                |                     |                     |
	| addons  | addons-045739 addons disable                                                                | addons-045739        | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:42 UTC | 15 Apr 24 23:42 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |                |                     |                     |
	|         | -v=1                                                                                        |                      |         |                |                     |                     |
	| ip      | addons-045739 ip                                                                            | addons-045739        | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:42 UTC | 15 Apr 24 23:42 UTC |
	| addons  | addons-045739 addons disable                                                                | addons-045739        | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:42 UTC | 15 Apr 24 23:42 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |                |                     |                     |
	|         | -v=1                                                                                        |                      |         |                |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-045739        | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:42 UTC | 15 Apr 24 23:42 UTC |
	|         | -p addons-045739                                                                            |                      |         |                |                     |                     |
	| addons  | enable headlamp                                                                             | addons-045739        | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:42 UTC | 15 Apr 24 23:42 UTC |
	|         | -p addons-045739                                                                            |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-045739        | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:42 UTC | 15 Apr 24 23:42 UTC |
	|         | addons-045739                                                                               |                      |         |                |                     |                     |
	| ssh     | addons-045739 ssh curl -s                                                                   | addons-045739        | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:42 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |                |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |                |                     |                     |
	| ssh     | addons-045739 ssh cat                                                                       | addons-045739        | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:43 UTC | 15 Apr 24 23:43 UTC |
	|         | /opt/local-path-provisioner/pvc-f01537f6-92ca-4150-b63c-0f2e634b097f_default_test-pvc/file1 |                      |         |                |                     |                     |
	| addons  | addons-045739 addons disable                                                                | addons-045739        | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:43 UTC | 15 Apr 24 23:43 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| addons  | addons-045739 addons                                                                        | addons-045739        | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:43 UTC | 15 Apr 24 23:43 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| addons  | addons-045739 addons                                                                        | addons-045739        | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:43 UTC | 15 Apr 24 23:43 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |                |                     |                     |
	| ip      | addons-045739 ip                                                                            | addons-045739        | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:45 UTC | 15 Apr 24 23:45 UTC |
	| addons  | addons-045739 addons disable                                                                | addons-045739        | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:45 UTC | 15 Apr 24 23:45 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |                |                     |                     |
	|         | -v=1                                                                                        |                      |         |                |                     |                     |
	| addons  | addons-045739 addons disable                                                                | addons-045739        | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:45 UTC | 15 Apr 24 23:45 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |                |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/15 23:38:37
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0415 23:38:37.322715   15850 out.go:291] Setting OutFile to fd 1 ...
	I0415 23:38:37.323093   15850 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 23:38:37.323254   15850 out.go:304] Setting ErrFile to fd 2...
	I0415 23:38:37.323272   15850 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 23:38:37.323501   15850 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-7542/.minikube/bin
	I0415 23:38:37.324227   15850 out.go:298] Setting JSON to false
	I0415 23:38:37.325247   15850 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1261,"bootTime":1713223056,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0415 23:38:37.325317   15850 start.go:139] virtualization: kvm guest
	I0415 23:38:37.328456   15850 out.go:177] * [addons-045739] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0415 23:38:37.330422   15850 notify.go:220] Checking for updates...
	I0415 23:38:37.330437   15850 out.go:177]   - MINIKUBE_LOCATION=18647
	I0415 23:38:37.332318   15850 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 23:38:37.334186   15850 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0415 23:38:37.335866   15850 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18647-7542/.minikube
	I0415 23:38:37.337378   15850 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0415 23:38:37.338907   15850 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 23:38:37.340573   15850 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 23:38:37.378895   15850 out.go:177] * Using the kvm2 driver based on user configuration
	I0415 23:38:37.380517   15850 start.go:297] selected driver: kvm2
	I0415 23:38:37.380533   15850 start.go:901] validating driver "kvm2" against <nil>
	I0415 23:38:37.380545   15850 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 23:38:37.381352   15850 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 23:38:37.381461   15850 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18647-7542/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0415 23:38:37.398746   15850 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0415 23:38:37.398836   15850 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 23:38:37.399052   15850 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 23:38:37.399102   15850 cni.go:84] Creating CNI manager for ""
	I0415 23:38:37.399115   15850 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0415 23:38:37.399125   15850 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0415 23:38:37.399177   15850 start.go:340] cluster config:
	{Name:addons-045739 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-045739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 23:38:37.399277   15850 iso.go:125] acquiring lock: {Name:mk848ef90fbc2a1876645fc8fc16af382c3bcaa9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 23:38:37.401898   15850 out.go:177] * Starting "addons-045739" primary control-plane node in "addons-045739" cluster
	I0415 23:38:37.403314   15850 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0415 23:38:37.403400   15850 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0415 23:38:37.403417   15850 cache.go:56] Caching tarball of preloaded images
	I0415 23:38:37.403533   15850 preload.go:173] Found /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0415 23:38:37.403549   15850 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0415 23:38:37.403893   15850 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/config.json ...
	I0415 23:38:37.403924   15850 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/config.json: {Name:mkc2d1d4abac777c5afa236da0615b266d067887 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:38:37.404106   15850 start.go:360] acquireMachinesLock for addons-045739: {Name:mk92bff49461487f8cebf2747ccf61ccb9c772a2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 23:38:37.404179   15850 start.go:364] duration metric: took 53.752µs to acquireMachinesLock for "addons-045739"
	I0415 23:38:37.404205   15850 start.go:93] Provisioning new machine with config: &{Name:addons-045739 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.29.3 ClusterName:addons-045739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0415 23:38:37.404313   15850 start.go:125] createHost starting for "" (driver="kvm2")
	I0415 23:38:37.406844   15850 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0415 23:38:37.407035   15850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:38:37.407090   15850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:38:37.424599   15850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36543
	I0415 23:38:37.425275   15850 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:38:37.426136   15850 main.go:141] libmachine: Using API Version  1
	I0415 23:38:37.426175   15850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:38:37.426615   15850 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:38:37.427035   15850 main.go:141] libmachine: (addons-045739) Calling .GetMachineName
	I0415 23:38:37.427326   15850 main.go:141] libmachine: (addons-045739) Calling .DriverName
	I0415 23:38:37.427568   15850 start.go:159] libmachine.API.Create for "addons-045739" (driver="kvm2")
	I0415 23:38:37.427605   15850 client.go:168] LocalClient.Create starting
	I0415 23:38:37.427666   15850 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem
	I0415 23:38:37.706808   15850 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem
	I0415 23:38:37.854716   15850 main.go:141] libmachine: Running pre-create checks...
	I0415 23:38:37.854744   15850 main.go:141] libmachine: (addons-045739) Calling .PreCreateCheck
	I0415 23:38:37.855447   15850 main.go:141] libmachine: (addons-045739) Calling .GetConfigRaw
	I0415 23:38:37.856049   15850 main.go:141] libmachine: Creating machine...
	I0415 23:38:37.856068   15850 main.go:141] libmachine: (addons-045739) Calling .Create
	I0415 23:38:37.856361   15850 main.go:141] libmachine: (addons-045739) Creating KVM machine...
	I0415 23:38:37.858759   15850 main.go:141] libmachine: (addons-045739) DBG | found existing default KVM network
	I0415 23:38:37.859952   15850 main.go:141] libmachine: (addons-045739) DBG | I0415 23:38:37.859696   15872 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0415 23:38:37.860061   15850 main.go:141] libmachine: (addons-045739) DBG | created network xml: 
	I0415 23:38:37.860089   15850 main.go:141] libmachine: (addons-045739) DBG | <network>
	I0415 23:38:37.860098   15850 main.go:141] libmachine: (addons-045739) DBG |   <name>mk-addons-045739</name>
	I0415 23:38:37.860105   15850 main.go:141] libmachine: (addons-045739) DBG |   <dns enable='no'/>
	I0415 23:38:37.860112   15850 main.go:141] libmachine: (addons-045739) DBG |   
	I0415 23:38:37.860119   15850 main.go:141] libmachine: (addons-045739) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0415 23:38:37.860132   15850 main.go:141] libmachine: (addons-045739) DBG |     <dhcp>
	I0415 23:38:37.860138   15850 main.go:141] libmachine: (addons-045739) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0415 23:38:37.860149   15850 main.go:141] libmachine: (addons-045739) DBG |     </dhcp>
	I0415 23:38:37.860155   15850 main.go:141] libmachine: (addons-045739) DBG |   </ip>
	I0415 23:38:37.860162   15850 main.go:141] libmachine: (addons-045739) DBG |   
	I0415 23:38:37.860180   15850 main.go:141] libmachine: (addons-045739) DBG | </network>
	I0415 23:38:37.860207   15850 main.go:141] libmachine: (addons-045739) DBG | 
	I0415 23:38:37.867441   15850 main.go:141] libmachine: (addons-045739) DBG | trying to create private KVM network mk-addons-045739 192.168.39.0/24...
	I0415 23:38:37.955726   15850 main.go:141] libmachine: (addons-045739) Setting up store path in /home/jenkins/minikube-integration/18647-7542/.minikube/machines/addons-045739 ...
	I0415 23:38:37.955800   15850 main.go:141] libmachine: (addons-045739) Building disk image from file:///home/jenkins/minikube-integration/18647-7542/.minikube/cache/iso/amd64/minikube-v1.33.0-1713175573-18634-amd64.iso
	I0415 23:38:37.955814   15850 main.go:141] libmachine: (addons-045739) DBG | private KVM network mk-addons-045739 192.168.39.0/24 created
	I0415 23:38:37.955834   15850 main.go:141] libmachine: (addons-045739) DBG | I0415 23:38:37.955584   15872 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18647-7542/.minikube
	I0415 23:38:37.955857   15850 main.go:141] libmachine: (addons-045739) Downloading /home/jenkins/minikube-integration/18647-7542/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18647-7542/.minikube/cache/iso/amd64/minikube-v1.33.0-1713175573-18634-amd64.iso...
	I0415 23:38:38.246706   15850 main.go:141] libmachine: (addons-045739) DBG | I0415 23:38:38.246570   15872 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/addons-045739/id_rsa...
	I0415 23:38:38.708780   15850 main.go:141] libmachine: (addons-045739) DBG | I0415 23:38:38.708605   15872 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/addons-045739/addons-045739.rawdisk...
	I0415 23:38:38.708817   15850 main.go:141] libmachine: (addons-045739) DBG | Writing magic tar header
	I0415 23:38:38.708832   15850 main.go:141] libmachine: (addons-045739) DBG | Writing SSH key tar header
	I0415 23:38:38.708847   15850 main.go:141] libmachine: (addons-045739) DBG | I0415 23:38:38.708753   15872 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18647-7542/.minikube/machines/addons-045739 ...
	I0415 23:38:38.708899   15850 main.go:141] libmachine: (addons-045739) Setting executable bit set on /home/jenkins/minikube-integration/18647-7542/.minikube/machines/addons-045739 (perms=drwx------)
	I0415 23:38:38.708945   15850 main.go:141] libmachine: (addons-045739) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/addons-045739
	I0415 23:38:38.708964   15850 main.go:141] libmachine: (addons-045739) Setting executable bit set on /home/jenkins/minikube-integration/18647-7542/.minikube/machines (perms=drwxr-xr-x)
	I0415 23:38:38.708983   15850 main.go:141] libmachine: (addons-045739) Setting executable bit set on /home/jenkins/minikube-integration/18647-7542/.minikube (perms=drwxr-xr-x)
	I0415 23:38:38.708994   15850 main.go:141] libmachine: (addons-045739) Setting executable bit set on /home/jenkins/minikube-integration/18647-7542 (perms=drwxrwxr-x)
	I0415 23:38:38.709009   15850 main.go:141] libmachine: (addons-045739) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0415 23:38:38.709021   15850 main.go:141] libmachine: (addons-045739) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0415 23:38:38.709036   15850 main.go:141] libmachine: (addons-045739) Creating domain...
	I0415 23:38:38.709098   15850 main.go:141] libmachine: (addons-045739) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18647-7542/.minikube/machines
	I0415 23:38:38.709182   15850 main.go:141] libmachine: (addons-045739) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18647-7542/.minikube
	I0415 23:38:38.709206   15850 main.go:141] libmachine: (addons-045739) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18647-7542
	I0415 23:38:38.709241   15850 main.go:141] libmachine: (addons-045739) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0415 23:38:38.709261   15850 main.go:141] libmachine: (addons-045739) DBG | Checking permissions on dir: /home/jenkins
	I0415 23:38:38.709278   15850 main.go:141] libmachine: (addons-045739) DBG | Checking permissions on dir: /home
	I0415 23:38:38.709291   15850 main.go:141] libmachine: (addons-045739) DBG | Skipping /home - not owner
	I0415 23:38:38.710874   15850 main.go:141] libmachine: (addons-045739) define libvirt domain using xml: 
	I0415 23:38:38.710912   15850 main.go:141] libmachine: (addons-045739) <domain type='kvm'>
	I0415 23:38:38.710925   15850 main.go:141] libmachine: (addons-045739)   <name>addons-045739</name>
	I0415 23:38:38.710936   15850 main.go:141] libmachine: (addons-045739)   <memory unit='MiB'>4000</memory>
	I0415 23:38:38.710947   15850 main.go:141] libmachine: (addons-045739)   <vcpu>2</vcpu>
	I0415 23:38:38.710962   15850 main.go:141] libmachine: (addons-045739)   <features>
	I0415 23:38:38.710999   15850 main.go:141] libmachine: (addons-045739)     <acpi/>
	I0415 23:38:38.711028   15850 main.go:141] libmachine: (addons-045739)     <apic/>
	I0415 23:38:38.711041   15850 main.go:141] libmachine: (addons-045739)     <pae/>
	I0415 23:38:38.711047   15850 main.go:141] libmachine: (addons-045739)     
	I0415 23:38:38.711056   15850 main.go:141] libmachine: (addons-045739)   </features>
	I0415 23:38:38.711080   15850 main.go:141] libmachine: (addons-045739)   <cpu mode='host-passthrough'>
	I0415 23:38:38.711089   15850 main.go:141] libmachine: (addons-045739)   
	I0415 23:38:38.711098   15850 main.go:141] libmachine: (addons-045739)   </cpu>
	I0415 23:38:38.711106   15850 main.go:141] libmachine: (addons-045739)   <os>
	I0415 23:38:38.711112   15850 main.go:141] libmachine: (addons-045739)     <type>hvm</type>
	I0415 23:38:38.711118   15850 main.go:141] libmachine: (addons-045739)     <boot dev='cdrom'/>
	I0415 23:38:38.711126   15850 main.go:141] libmachine: (addons-045739)     <boot dev='hd'/>
	I0415 23:38:38.711132   15850 main.go:141] libmachine: (addons-045739)     <bootmenu enable='no'/>
	I0415 23:38:38.711139   15850 main.go:141] libmachine: (addons-045739)   </os>
	I0415 23:38:38.711144   15850 main.go:141] libmachine: (addons-045739)   <devices>
	I0415 23:38:38.711157   15850 main.go:141] libmachine: (addons-045739)     <disk type='file' device='cdrom'>
	I0415 23:38:38.711169   15850 main.go:141] libmachine: (addons-045739)       <source file='/home/jenkins/minikube-integration/18647-7542/.minikube/machines/addons-045739/boot2docker.iso'/>
	I0415 23:38:38.711177   15850 main.go:141] libmachine: (addons-045739)       <target dev='hdc' bus='scsi'/>
	I0415 23:38:38.711186   15850 main.go:141] libmachine: (addons-045739)       <readonly/>
	I0415 23:38:38.711195   15850 main.go:141] libmachine: (addons-045739)     </disk>
	I0415 23:38:38.711202   15850 main.go:141] libmachine: (addons-045739)     <disk type='file' device='disk'>
	I0415 23:38:38.711211   15850 main.go:141] libmachine: (addons-045739)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0415 23:38:38.711222   15850 main.go:141] libmachine: (addons-045739)       <source file='/home/jenkins/minikube-integration/18647-7542/.minikube/machines/addons-045739/addons-045739.rawdisk'/>
	I0415 23:38:38.711230   15850 main.go:141] libmachine: (addons-045739)       <target dev='hda' bus='virtio'/>
	I0415 23:38:38.711238   15850 main.go:141] libmachine: (addons-045739)     </disk>
	I0415 23:38:38.711246   15850 main.go:141] libmachine: (addons-045739)     <interface type='network'>
	I0415 23:38:38.711279   15850 main.go:141] libmachine: (addons-045739)       <source network='mk-addons-045739'/>
	I0415 23:38:38.711309   15850 main.go:141] libmachine: (addons-045739)       <model type='virtio'/>
	I0415 23:38:38.711323   15850 main.go:141] libmachine: (addons-045739)     </interface>
	I0415 23:38:38.711333   15850 main.go:141] libmachine: (addons-045739)     <interface type='network'>
	I0415 23:38:38.711348   15850 main.go:141] libmachine: (addons-045739)       <source network='default'/>
	I0415 23:38:38.711365   15850 main.go:141] libmachine: (addons-045739)       <model type='virtio'/>
	I0415 23:38:38.711379   15850 main.go:141] libmachine: (addons-045739)     </interface>
	I0415 23:38:38.711409   15850 main.go:141] libmachine: (addons-045739)     <serial type='pty'>
	I0415 23:38:38.711423   15850 main.go:141] libmachine: (addons-045739)       <target port='0'/>
	I0415 23:38:38.711442   15850 main.go:141] libmachine: (addons-045739)     </serial>
	I0415 23:38:38.711455   15850 main.go:141] libmachine: (addons-045739)     <console type='pty'>
	I0415 23:38:38.711476   15850 main.go:141] libmachine: (addons-045739)       <target type='serial' port='0'/>
	I0415 23:38:38.711489   15850 main.go:141] libmachine: (addons-045739)     </console>
	I0415 23:38:38.711509   15850 main.go:141] libmachine: (addons-045739)     <rng model='virtio'>
	I0415 23:38:38.711523   15850 main.go:141] libmachine: (addons-045739)       <backend model='random'>/dev/random</backend>
	I0415 23:38:38.711538   15850 main.go:141] libmachine: (addons-045739)     </rng>
	I0415 23:38:38.711546   15850 main.go:141] libmachine: (addons-045739)     
	I0415 23:38:38.711555   15850 main.go:141] libmachine: (addons-045739)     
	I0415 23:38:38.711566   15850 main.go:141] libmachine: (addons-045739)   </devices>
	I0415 23:38:38.711588   15850 main.go:141] libmachine: (addons-045739) </domain>
	I0415 23:38:38.711604   15850 main.go:141] libmachine: (addons-045739) 
	I0415 23:38:38.718889   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined MAC address 52:54:00:0f:82:7e in network default
	I0415 23:38:38.719798   15850 main.go:141] libmachine: (addons-045739) Ensuring networks are active...
	I0415 23:38:38.719837   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:38:38.720893   15850 main.go:141] libmachine: (addons-045739) Ensuring network default is active
	I0415 23:38:38.721433   15850 main.go:141] libmachine: (addons-045739) Ensuring network mk-addons-045739 is active
	I0415 23:38:38.722184   15850 main.go:141] libmachine: (addons-045739) Getting domain xml...
	I0415 23:38:38.723109   15850 main.go:141] libmachine: (addons-045739) Creating domain...
	I0415 23:38:40.128124   15850 main.go:141] libmachine: (addons-045739) Waiting to get IP...
	I0415 23:38:40.129230   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:38:40.129737   15850 main.go:141] libmachine: (addons-045739) DBG | unable to find current IP address of domain addons-045739 in network mk-addons-045739
	I0415 23:38:40.129804   15850 main.go:141] libmachine: (addons-045739) DBG | I0415 23:38:40.129730   15872 retry.go:31] will retry after 252.732991ms: waiting for machine to come up
	I0415 23:38:40.384898   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:38:40.385466   15850 main.go:141] libmachine: (addons-045739) DBG | unable to find current IP address of domain addons-045739 in network mk-addons-045739
	I0415 23:38:40.385496   15850 main.go:141] libmachine: (addons-045739) DBG | I0415 23:38:40.385380   15872 retry.go:31] will retry after 245.10915ms: waiting for machine to come up
	I0415 23:38:40.631956   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:38:40.632568   15850 main.go:141] libmachine: (addons-045739) DBG | unable to find current IP address of domain addons-045739 in network mk-addons-045739
	I0415 23:38:40.632598   15850 main.go:141] libmachine: (addons-045739) DBG | I0415 23:38:40.632513   15872 retry.go:31] will retry after 450.457294ms: waiting for machine to come up
	I0415 23:38:41.084221   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:38:41.084677   15850 main.go:141] libmachine: (addons-045739) DBG | unable to find current IP address of domain addons-045739 in network mk-addons-045739
	I0415 23:38:41.084707   15850 main.go:141] libmachine: (addons-045739) DBG | I0415 23:38:41.084619   15872 retry.go:31] will retry after 485.094924ms: waiting for machine to come up
	I0415 23:38:41.570948   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:38:41.571346   15850 main.go:141] libmachine: (addons-045739) DBG | unable to find current IP address of domain addons-045739 in network mk-addons-045739
	I0415 23:38:41.571396   15850 main.go:141] libmachine: (addons-045739) DBG | I0415 23:38:41.571302   15872 retry.go:31] will retry after 737.990323ms: waiting for machine to come up
	I0415 23:38:42.311236   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:38:42.311656   15850 main.go:141] libmachine: (addons-045739) DBG | unable to find current IP address of domain addons-045739 in network mk-addons-045739
	I0415 23:38:42.311683   15850 main.go:141] libmachine: (addons-045739) DBG | I0415 23:38:42.311592   15872 retry.go:31] will retry after 952.905584ms: waiting for machine to come up
	I0415 23:38:43.266005   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:38:43.266631   15850 main.go:141] libmachine: (addons-045739) DBG | unable to find current IP address of domain addons-045739 in network mk-addons-045739
	I0415 23:38:43.266669   15850 main.go:141] libmachine: (addons-045739) DBG | I0415 23:38:43.266563   15872 retry.go:31] will retry after 973.047823ms: waiting for machine to come up
	I0415 23:38:44.241053   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:38:44.241492   15850 main.go:141] libmachine: (addons-045739) DBG | unable to find current IP address of domain addons-045739 in network mk-addons-045739
	I0415 23:38:44.241525   15850 main.go:141] libmachine: (addons-045739) DBG | I0415 23:38:44.241445   15872 retry.go:31] will retry after 1.054753188s: waiting for machine to come up
	I0415 23:38:45.297882   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:38:45.298369   15850 main.go:141] libmachine: (addons-045739) DBG | unable to find current IP address of domain addons-045739 in network mk-addons-045739
	I0415 23:38:45.298398   15850 main.go:141] libmachine: (addons-045739) DBG | I0415 23:38:45.298318   15872 retry.go:31] will retry after 1.506120008s: waiting for machine to come up
	I0415 23:38:46.807226   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:38:46.807869   15850 main.go:141] libmachine: (addons-045739) DBG | unable to find current IP address of domain addons-045739 in network mk-addons-045739
	I0415 23:38:46.807907   15850 main.go:141] libmachine: (addons-045739) DBG | I0415 23:38:46.807822   15872 retry.go:31] will retry after 1.669434306s: waiting for machine to come up
	I0415 23:38:48.479290   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:38:48.480000   15850 main.go:141] libmachine: (addons-045739) DBG | unable to find current IP address of domain addons-045739 in network mk-addons-045739
	I0415 23:38:48.480034   15850 main.go:141] libmachine: (addons-045739) DBG | I0415 23:38:48.479939   15872 retry.go:31] will retry after 2.002935873s: waiting for machine to come up
	I0415 23:38:50.485390   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:38:50.485909   15850 main.go:141] libmachine: (addons-045739) DBG | unable to find current IP address of domain addons-045739 in network mk-addons-045739
	I0415 23:38:50.485945   15850 main.go:141] libmachine: (addons-045739) DBG | I0415 23:38:50.485846   15872 retry.go:31] will retry after 2.759850861s: waiting for machine to come up
	I0415 23:38:53.247439   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:38:53.247959   15850 main.go:141] libmachine: (addons-045739) DBG | unable to find current IP address of domain addons-045739 in network mk-addons-045739
	I0415 23:38:53.247980   15850 main.go:141] libmachine: (addons-045739) DBG | I0415 23:38:53.247916   15872 retry.go:31] will retry after 2.768927597s: waiting for machine to come up
	I0415 23:38:56.020174   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:38:56.020748   15850 main.go:141] libmachine: (addons-045739) DBG | unable to find current IP address of domain addons-045739 in network mk-addons-045739
	I0415 23:38:56.020786   15850 main.go:141] libmachine: (addons-045739) DBG | I0415 23:38:56.020650   15872 retry.go:31] will retry after 4.02305294s: waiting for machine to come up
	I0415 23:39:00.047021   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:00.047701   15850 main.go:141] libmachine: (addons-045739) DBG | unable to find current IP address of domain addons-045739 in network mk-addons-045739
	I0415 23:39:00.047721   15850 main.go:141] libmachine: (addons-045739) DBG | I0415 23:39:00.047656   15872 retry.go:31] will retry after 4.502634581s: waiting for machine to come up
	I0415 23:39:04.553248   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:04.553873   15850 main.go:141] libmachine: (addons-045739) Found IP for machine: 192.168.39.182
	I0415 23:39:04.553892   15850 main.go:141] libmachine: (addons-045739) Reserving static IP address...
	I0415 23:39:04.553901   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has current primary IP address 192.168.39.182 and MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:04.554380   15850 main.go:141] libmachine: (addons-045739) DBG | unable to find host DHCP lease matching {name: "addons-045739", mac: "52:54:00:f7:76:ed", ip: "192.168.39.182"} in network mk-addons-045739
	I0415 23:39:04.643123   15850 main.go:141] libmachine: (addons-045739) DBG | Getting to WaitForSSH function...
	I0415 23:39:04.643170   15850 main.go:141] libmachine: (addons-045739) Reserved static IP address: 192.168.39.182
	I0415 23:39:04.643187   15850 main.go:141] libmachine: (addons-045739) Waiting for SSH to be available...
	I0415 23:39:04.645872   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:04.646241   15850 main.go:141] libmachine: (addons-045739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:76:ed", ip: ""} in network mk-addons-045739: {Iface:virbr1 ExpiryTime:2024-04-16 00:38:55 +0000 UTC Type:0 Mac:52:54:00:f7:76:ed Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f7:76:ed}
	I0415 23:39:04.646267   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined IP address 192.168.39.182 and MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:04.646492   15850 main.go:141] libmachine: (addons-045739) DBG | Using SSH client type: external
	I0415 23:39:04.646533   15850 main.go:141] libmachine: (addons-045739) DBG | Using SSH private key: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/addons-045739/id_rsa (-rw-------)
	I0415 23:39:04.646570   15850 main.go:141] libmachine: (addons-045739) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.182 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18647-7542/.minikube/machines/addons-045739/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0415 23:39:04.646595   15850 main.go:141] libmachine: (addons-045739) DBG | About to run SSH command:
	I0415 23:39:04.646609   15850 main.go:141] libmachine: (addons-045739) DBG | exit 0
	I0415 23:39:04.786671   15850 main.go:141] libmachine: (addons-045739) DBG | SSH cmd err, output: <nil>: 
	I0415 23:39:04.786976   15850 main.go:141] libmachine: (addons-045739) KVM machine creation complete!
	I0415 23:39:04.787481   15850 main.go:141] libmachine: (addons-045739) Calling .GetConfigRaw
	I0415 23:39:04.788078   15850 main.go:141] libmachine: (addons-045739) Calling .DriverName
	I0415 23:39:04.788644   15850 main.go:141] libmachine: (addons-045739) Calling .DriverName
	I0415 23:39:04.788940   15850 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0415 23:39:04.788971   15850 main.go:141] libmachine: (addons-045739) Calling .GetState
	I0415 23:39:04.790780   15850 main.go:141] libmachine: Detecting operating system of created instance...
	I0415 23:39:04.790797   15850 main.go:141] libmachine: Waiting for SSH to be available...
	I0415 23:39:04.790803   15850 main.go:141] libmachine: Getting to WaitForSSH function...
	I0415 23:39:04.790809   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHHostname
	I0415 23:39:04.794203   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:04.794816   15850 main.go:141] libmachine: (addons-045739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:76:ed", ip: ""} in network mk-addons-045739: {Iface:virbr1 ExpiryTime:2024-04-16 00:38:55 +0000 UTC Type:0 Mac:52:54:00:f7:76:ed Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-045739 Clientid:01:52:54:00:f7:76:ed}
	I0415 23:39:04.794846   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined IP address 192.168.39.182 and MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:04.795058   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHPort
	I0415 23:39:04.795295   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHKeyPath
	I0415 23:39:04.795467   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHKeyPath
	I0415 23:39:04.795714   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHUsername
	I0415 23:39:04.796045   15850 main.go:141] libmachine: Using SSH client type: native
	I0415 23:39:04.796228   15850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0415 23:39:04.796243   15850 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0415 23:39:04.917732   15850 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0415 23:39:04.917767   15850 main.go:141] libmachine: Detecting the provisioner...
	I0415 23:39:04.917779   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHHostname
	I0415 23:39:04.922640   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:04.923128   15850 main.go:141] libmachine: (addons-045739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:76:ed", ip: ""} in network mk-addons-045739: {Iface:virbr1 ExpiryTime:2024-04-16 00:38:55 +0000 UTC Type:0 Mac:52:54:00:f7:76:ed Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-045739 Clientid:01:52:54:00:f7:76:ed}
	I0415 23:39:04.923157   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined IP address 192.168.39.182 and MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:04.923514   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHPort
	I0415 23:39:04.923855   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHKeyPath
	I0415 23:39:04.924094   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHKeyPath
	I0415 23:39:04.924504   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHUsername
	I0415 23:39:04.924796   15850 main.go:141] libmachine: Using SSH client type: native
	I0415 23:39:04.925015   15850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0415 23:39:04.925029   15850 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0415 23:39:05.047711   15850 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0415 23:39:05.047774   15850 main.go:141] libmachine: found compatible host: buildroot
	I0415 23:39:05.047781   15850 main.go:141] libmachine: Provisioning with buildroot...
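Note: the provisioner is detected by reading /etc/os-release over SSH and matching its key=value fields ("buildroot" above). A small Go sketch of that parsing step, shown only as an illustration of the key=value handling, not as the libmachine code:

// osrelease_sketch.go: parse key=value output of /etc/os-release.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease builds a map from the os-release output, trimming optional quotes.
func parseOSRelease(out string) map[string]string {
	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || !strings.Contains(line, "=") {
			continue
		}
		kv := strings.SplitN(line, "=", 2)
		fields[kv[0]] = strings.Trim(kv[1], `"`)
	}
	return fields
}

func main() {
	out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	fmt.Println("detected provisioner:", parseOSRelease(out)["ID"]) // buildroot
}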
	I0415 23:39:05.047789   15850 main.go:141] libmachine: (addons-045739) Calling .GetMachineName
	I0415 23:39:05.048053   15850 buildroot.go:166] provisioning hostname "addons-045739"
	I0415 23:39:05.048073   15850 main.go:141] libmachine: (addons-045739) Calling .GetMachineName
	I0415 23:39:05.048304   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHHostname
	I0415 23:39:05.051599   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:05.052221   15850 main.go:141] libmachine: (addons-045739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:76:ed", ip: ""} in network mk-addons-045739: {Iface:virbr1 ExpiryTime:2024-04-16 00:38:55 +0000 UTC Type:0 Mac:52:54:00:f7:76:ed Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-045739 Clientid:01:52:54:00:f7:76:ed}
	I0415 23:39:05.052260   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined IP address 192.168.39.182 and MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:05.052516   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHPort
	I0415 23:39:05.052769   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHKeyPath
	I0415 23:39:05.053117   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHKeyPath
	I0415 23:39:05.053398   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHUsername
	I0415 23:39:05.053644   15850 main.go:141] libmachine: Using SSH client type: native
	I0415 23:39:05.053830   15850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0415 23:39:05.053844   15850 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-045739 && echo "addons-045739" | sudo tee /etc/hostname
	I0415 23:39:05.191623   15850 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-045739
	
	I0415 23:39:05.191658   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHHostname
	I0415 23:39:05.195167   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:05.195853   15850 main.go:141] libmachine: (addons-045739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:76:ed", ip: ""} in network mk-addons-045739: {Iface:virbr1 ExpiryTime:2024-04-16 00:38:55 +0000 UTC Type:0 Mac:52:54:00:f7:76:ed Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-045739 Clientid:01:52:54:00:f7:76:ed}
	I0415 23:39:05.195900   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined IP address 192.168.39.182 and MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:05.196188   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHPort
	I0415 23:39:05.196475   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHKeyPath
	I0415 23:39:05.196779   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHKeyPath
	I0415 23:39:05.197085   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHUsername
	I0415 23:39:05.197360   15850 main.go:141] libmachine: Using SSH client type: native
	I0415 23:39:05.197554   15850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0415 23:39:05.197577   15850 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-045739' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-045739/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-045739' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0415 23:39:05.329349   15850 main.go:141] libmachine: SSH cmd err, output: <nil>: 
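Note: the shell snippet above sets the hostname and then patches /etc/hosts: if no line already maps the new name, an existing 127.0.1.1 entry is rewritten, otherwise one is appended. The same logic as a Go sketch operating on an in-memory hosts file (illustrative only; the hostname is the one from this run):

// hosts_entry_sketch.go: mirror of the sed/tee fix-up, done on a string.
package main

import (
	"fmt"
	"regexp"
	"strings"
)

func ensureHostsEntry(hosts, hostname string) string {
	// Already mapped? (matches lines ending in "<whitespace><hostname>")
	if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(hostname) + `$`).MatchString(hosts) {
		return hosts
	}
	// Rewrite an existing 127.0.1.1 line if there is one...
	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if re.MatchString(hosts) {
		return re.ReplaceAllString(hosts, "127.0.1.1 "+hostname)
	}
	// ...otherwise append a new entry.
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + hostname + "\n"
}

func main() {
	fmt.Print(ensureHostsEntry("127.0.0.1 localhost\n127.0.1.1 minikube\n", "addons-045739"))
}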
	I0415 23:39:05.329392   15850 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18647-7542/.minikube CaCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18647-7542/.minikube}
	I0415 23:39:05.329425   15850 buildroot.go:174] setting up certificates
	I0415 23:39:05.329456   15850 provision.go:84] configureAuth start
	I0415 23:39:05.329466   15850 main.go:141] libmachine: (addons-045739) Calling .GetMachineName
	I0415 23:39:05.329789   15850 main.go:141] libmachine: (addons-045739) Calling .GetIP
	I0415 23:39:05.333222   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:05.333720   15850 main.go:141] libmachine: (addons-045739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:76:ed", ip: ""} in network mk-addons-045739: {Iface:virbr1 ExpiryTime:2024-04-16 00:38:55 +0000 UTC Type:0 Mac:52:54:00:f7:76:ed Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-045739 Clientid:01:52:54:00:f7:76:ed}
	I0415 23:39:05.333756   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined IP address 192.168.39.182 and MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:05.334172   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHHostname
	I0415 23:39:05.337647   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:05.338122   15850 main.go:141] libmachine: (addons-045739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:76:ed", ip: ""} in network mk-addons-045739: {Iface:virbr1 ExpiryTime:2024-04-16 00:38:55 +0000 UTC Type:0 Mac:52:54:00:f7:76:ed Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-045739 Clientid:01:52:54:00:f7:76:ed}
	I0415 23:39:05.338152   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined IP address 192.168.39.182 and MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:05.338369   15850 provision.go:143] copyHostCerts
	I0415 23:39:05.338493   15850 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem (1675 bytes)
	I0415 23:39:05.338829   15850 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem (1082 bytes)
	I0415 23:39:05.339015   15850 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem (1123 bytes)
	I0415 23:39:05.339098   15850 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem org=jenkins.addons-045739 san=[127.0.0.1 192.168.39.182 addons-045739 localhost minikube]
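Note: configureAuth copies the host CA material and then issues a server certificate whose SANs are the loopback address, the VM IP, the machine name, localhost and minikube, as listed above. A compact sketch of issuing such a cert with crypto/x509; it generates a throwaway CA in place of ca.pem/ca-key.pem and elides error handling, so it illustrates the SAN layout rather than minikube's provisioning code:

// servercert_sketch.go: issue a CA-signed server cert with the SANs seen above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA standing in for the persisted minikube CA; errors elided for brevity.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SAN set from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-045739"}, CommonName: "addons-045739"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"addons-045739", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.182")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Printf("issued server cert: %d DER bytes\n", len(srvDER))
}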
	I0415 23:39:05.542954   15850 provision.go:177] copyRemoteCerts
	I0415 23:39:05.543024   15850 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0415 23:39:05.543049   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHHostname
	I0415 23:39:05.546750   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:05.547092   15850 main.go:141] libmachine: (addons-045739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:76:ed", ip: ""} in network mk-addons-045739: {Iface:virbr1 ExpiryTime:2024-04-16 00:38:55 +0000 UTC Type:0 Mac:52:54:00:f7:76:ed Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-045739 Clientid:01:52:54:00:f7:76:ed}
	I0415 23:39:05.547140   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined IP address 192.168.39.182 and MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:05.547373   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHPort
	I0415 23:39:05.547648   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHKeyPath
	I0415 23:39:05.547825   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHUsername
	I0415 23:39:05.548022   15850 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/addons-045739/id_rsa Username:docker}
	I0415 23:39:05.643334   15850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0415 23:39:05.675764   15850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0415 23:39:05.707606   15850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0415 23:39:05.740345   15850 provision.go:87] duration metric: took 410.873745ms to configureAuth
	I0415 23:39:05.740381   15850 buildroot.go:189] setting minikube options for container-runtime
	I0415 23:39:05.740634   15850 config.go:182] Loaded profile config "addons-045739": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0415 23:39:05.740747   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHHostname
	I0415 23:39:05.745668   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:05.746206   15850 main.go:141] libmachine: (addons-045739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:76:ed", ip: ""} in network mk-addons-045739: {Iface:virbr1 ExpiryTime:2024-04-16 00:38:55 +0000 UTC Type:0 Mac:52:54:00:f7:76:ed Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-045739 Clientid:01:52:54:00:f7:76:ed}
	I0415 23:39:05.746292   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined IP address 192.168.39.182 and MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:05.746467   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHPort
	I0415 23:39:05.746773   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHKeyPath
	I0415 23:39:05.747115   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHKeyPath
	I0415 23:39:05.747392   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHUsername
	I0415 23:39:05.747659   15850 main.go:141] libmachine: Using SSH client type: native
	I0415 23:39:05.747875   15850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0415 23:39:05.747921   15850 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0415 23:39:06.071147   15850 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0415 23:39:06.071180   15850 main.go:141] libmachine: Checking connection to Docker...
	I0415 23:39:06.071193   15850 main.go:141] libmachine: (addons-045739) Calling .GetURL
	I0415 23:39:06.072644   15850 main.go:141] libmachine: (addons-045739) DBG | Using libvirt version 6000000
	I0415 23:39:06.076474   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:06.076874   15850 main.go:141] libmachine: (addons-045739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:76:ed", ip: ""} in network mk-addons-045739: {Iface:virbr1 ExpiryTime:2024-04-16 00:38:55 +0000 UTC Type:0 Mac:52:54:00:f7:76:ed Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-045739 Clientid:01:52:54:00:f7:76:ed}
	I0415 23:39:06.076902   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined IP address 192.168.39.182 and MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:06.077278   15850 main.go:141] libmachine: Docker is up and running!
	I0415 23:39:06.077307   15850 main.go:141] libmachine: Reticulating splines...
	I0415 23:39:06.077318   15850 client.go:171] duration metric: took 28.649698047s to LocalClient.Create
	I0415 23:39:06.077363   15850 start.go:167] duration metric: took 28.649795618s to libmachine.API.Create "addons-045739"
	I0415 23:39:06.077401   15850 start.go:293] postStartSetup for "addons-045739" (driver="kvm2")
	I0415 23:39:06.077420   15850 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0415 23:39:06.077450   15850 main.go:141] libmachine: (addons-045739) Calling .DriverName
	I0415 23:39:06.077772   15850 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0415 23:39:06.077803   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHHostname
	I0415 23:39:06.080806   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:06.081229   15850 main.go:141] libmachine: (addons-045739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:76:ed", ip: ""} in network mk-addons-045739: {Iface:virbr1 ExpiryTime:2024-04-16 00:38:55 +0000 UTC Type:0 Mac:52:54:00:f7:76:ed Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-045739 Clientid:01:52:54:00:f7:76:ed}
	I0415 23:39:06.081269   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined IP address 192.168.39.182 and MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:06.081453   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHPort
	I0415 23:39:06.081733   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHKeyPath
	I0415 23:39:06.082038   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHUsername
	I0415 23:39:06.082230   15850 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/addons-045739/id_rsa Username:docker}
	I0415 23:39:06.175202   15850 ssh_runner.go:195] Run: cat /etc/os-release
	I0415 23:39:06.180934   15850 info.go:137] Remote host: Buildroot 2023.02.9
	I0415 23:39:06.180965   15850 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/addons for local assets ...
	I0415 23:39:06.181047   15850 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/files for local assets ...
	I0415 23:39:06.181083   15850 start.go:296] duration metric: took 103.671435ms for postStartSetup
	I0415 23:39:06.181126   15850 main.go:141] libmachine: (addons-045739) Calling .GetConfigRaw
	I0415 23:39:06.181797   15850 main.go:141] libmachine: (addons-045739) Calling .GetIP
	I0415 23:39:06.186271   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:06.186901   15850 main.go:141] libmachine: (addons-045739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:76:ed", ip: ""} in network mk-addons-045739: {Iface:virbr1 ExpiryTime:2024-04-16 00:38:55 +0000 UTC Type:0 Mac:52:54:00:f7:76:ed Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-045739 Clientid:01:52:54:00:f7:76:ed}
	I0415 23:39:06.186936   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined IP address 192.168.39.182 and MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:06.187348   15850 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/config.json ...
	I0415 23:39:06.187633   15850 start.go:128] duration metric: took 28.783301377s to createHost
	I0415 23:39:06.187663   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHHostname
	I0415 23:39:06.190970   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:06.191592   15850 main.go:141] libmachine: (addons-045739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:76:ed", ip: ""} in network mk-addons-045739: {Iface:virbr1 ExpiryTime:2024-04-16 00:38:55 +0000 UTC Type:0 Mac:52:54:00:f7:76:ed Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-045739 Clientid:01:52:54:00:f7:76:ed}
	I0415 23:39:06.191659   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined IP address 192.168.39.182 and MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:06.191807   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHPort
	I0415 23:39:06.192167   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHKeyPath
	I0415 23:39:06.192553   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHKeyPath
	I0415 23:39:06.192824   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHUsername
	I0415 23:39:06.193141   15850 main.go:141] libmachine: Using SSH client type: native
	I0415 23:39:06.193382   15850 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.182 22 <nil> <nil>}
	I0415 23:39:06.193403   15850 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0415 23:39:06.315364   15850 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713224346.299927271
	
	I0415 23:39:06.315397   15850 fix.go:216] guest clock: 1713224346.299927271
	I0415 23:39:06.315407   15850 fix.go:229] Guest: 2024-04-15 23:39:06.299927271 +0000 UTC Remote: 2024-04-15 23:39:06.187650076 +0000 UTC m=+28.920933439 (delta=112.277195ms)
	I0415 23:39:06.315466   15850 fix.go:200] guest clock delta is within tolerance: 112.277195ms
	I0415 23:39:06.315472   15850 start.go:83] releasing machines lock for "addons-045739", held for 28.91128104s
	I0415 23:39:06.315495   15850 main.go:141] libmachine: (addons-045739) Calling .DriverName
	I0415 23:39:06.315874   15850 main.go:141] libmachine: (addons-045739) Calling .GetIP
	I0415 23:39:06.319332   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:06.319763   15850 main.go:141] libmachine: (addons-045739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:76:ed", ip: ""} in network mk-addons-045739: {Iface:virbr1 ExpiryTime:2024-04-16 00:38:55 +0000 UTC Type:0 Mac:52:54:00:f7:76:ed Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-045739 Clientid:01:52:54:00:f7:76:ed}
	I0415 23:39:06.319799   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined IP address 192.168.39.182 and MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:06.320014   15850 main.go:141] libmachine: (addons-045739) Calling .DriverName
	I0415 23:39:06.320588   15850 main.go:141] libmachine: (addons-045739) Calling .DriverName
	I0415 23:39:06.320795   15850 main.go:141] libmachine: (addons-045739) Calling .DriverName
	I0415 23:39:06.320945   15850 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0415 23:39:06.321008   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHHostname
	I0415 23:39:06.321111   15850 ssh_runner.go:195] Run: cat /version.json
	I0415 23:39:06.321136   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHHostname
	I0415 23:39:06.324170   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:06.324361   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:06.324678   15850 main.go:141] libmachine: (addons-045739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:76:ed", ip: ""} in network mk-addons-045739: {Iface:virbr1 ExpiryTime:2024-04-16 00:38:55 +0000 UTC Type:0 Mac:52:54:00:f7:76:ed Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-045739 Clientid:01:52:54:00:f7:76:ed}
	I0415 23:39:06.324705   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined IP address 192.168.39.182 and MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:06.324745   15850 main.go:141] libmachine: (addons-045739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:76:ed", ip: ""} in network mk-addons-045739: {Iface:virbr1 ExpiryTime:2024-04-16 00:38:55 +0000 UTC Type:0 Mac:52:54:00:f7:76:ed Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-045739 Clientid:01:52:54:00:f7:76:ed}
	I0415 23:39:06.324764   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined IP address 192.168.39.182 and MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:06.324845   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHPort
	I0415 23:39:06.325043   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHKeyPath
	I0415 23:39:06.325136   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHPort
	I0415 23:39:06.325275   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHUsername
	I0415 23:39:06.325335   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHKeyPath
	I0415 23:39:06.325436   15850 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/addons-045739/id_rsa Username:docker}
	I0415 23:39:06.325510   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHUsername
	I0415 23:39:06.325635   15850 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/addons-045739/id_rsa Username:docker}
	I0415 23:39:06.411740   15850 ssh_runner.go:195] Run: systemctl --version
	I0415 23:39:06.450578   15850 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0415 23:39:06.635976   15850 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0415 23:39:06.645224   15850 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0415 23:39:06.645343   15850 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0415 23:39:06.666683   15850 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
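Note: before settling on a runtime, conflicting CNI configs are masked by renaming any bridge/podman files under /etc/cni/net.d to *.mk_disabled (one conflist was disabled above). A Go sketch of that rename pass, meant to be run against a scratch directory rather than the real /etc/cni/net.d:

// cni_disable_sketch.go: rename bridge/podman CNI configs to *.mk_disabled.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func disableConflicting(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, name)
		}
	}
	return disabled, nil
}

func main() {
	names, err := disableConflicting("./net.d") // use a test copy, not /etc/cni/net.d
	fmt.Println("disabled:", names, "err:", err)
}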
	I0415 23:39:06.666716   15850 start.go:494] detecting cgroup driver to use...
	I0415 23:39:06.666797   15850 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0415 23:39:06.689604   15850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 23:39:06.707705   15850 docker.go:217] disabling cri-docker service (if available) ...
	I0415 23:39:06.707772   15850 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0415 23:39:06.726065   15850 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0415 23:39:06.744671   15850 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0415 23:39:06.898504   15850 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0415 23:39:07.079018   15850 docker.go:233] disabling docker service ...
	I0415 23:39:07.079097   15850 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0415 23:39:07.100245   15850 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0415 23:39:07.121421   15850 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0415 23:39:07.305644   15850 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0415 23:39:07.470912   15850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0415 23:39:07.493199   15850 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 23:39:07.521382   15850 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0415 23:39:07.521453   15850 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0415 23:39:07.538236   15850 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0415 23:39:07.538343   15850 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0415 23:39:07.554030   15850 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0415 23:39:07.569901   15850 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0415 23:39:07.585208   15850 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0415 23:39:07.600636   15850 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0415 23:39:07.615394   15850 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0415 23:39:07.642533   15850 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
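Note: the sed commands above adjust /etc/crio/crio.conf.d/02-crio.conf: pin the pause image to registry.k8s.io/pause:3.9, switch cgroup_manager to cgroupfs, set conmon_cgroup to "pod", and allow unprivileged low ports via default_sysctls. A sketch of the first two rewrites done on an in-memory config string (illustrative; the starting values are made up):

// crio_conf_sketch.go: string rewrites equivalent to the sed edits above.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.6"

[crio.runtime]
cgroup_manager = "systemd"
`
	// Pin the pause image.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	// Switch the cgroup manager and pin conmon's cgroup, as the log does.
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}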
	I0415 23:39:07.659155   15850 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0415 23:39:07.674397   15850 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0415 23:39:07.674494   15850 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0415 23:39:07.692630   15850 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
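Note: when the bridge-netfilter sysctl cannot be read (the status-255 probe above), the br_netfilter module is loaded and IPv4 forwarding is switched on. A root-only sketch of that fallback, shown purely for illustration:

// netfilter_sketch.go: load br_netfilter if needed and enable ip_forward.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// The proc file is missing until the module is loaded, hence the fallback.
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Println("modprobe br_netfilter failed:", err, string(out))
		}
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
		fmt.Println("enabling ip_forward failed:", err)
	}
}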
	I0415 23:39:07.707315   15850 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 23:39:07.862901   15850 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0415 23:39:08.046268   15850 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0415 23:39:08.046384   15850 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0415 23:39:08.053530   15850 start.go:562] Will wait 60s for crictl version
	I0415 23:39:08.053669   15850 ssh_runner.go:195] Run: which crictl
	I0415 23:39:08.059951   15850 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0415 23:39:08.109183   15850 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
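Note: after restarting crio, start-up waits up to 60s for the socket file to exist and then for crictl to report a version (0.1.0 / cri-o 1.29.1 above). A generic polling sketch of those two waits (not minikube's retry helper):

// crio_wait_sketch.go: wait for crio.sock, then for `crictl version` to answer.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func waitFor(desc string, timeout time.Duration, ready func() bool) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if ready() {
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("timed out waiting for %s", desc)
}

func main() {
	const sock = "/var/run/crio/crio.sock"
	err := waitFor("crio socket", 60*time.Second, func() bool {
		_, statErr := os.Stat(sock)
		return statErr == nil
	})
	if err == nil {
		err = waitFor("crictl version", 60*time.Second, func() bool {
			return exec.Command("sudo", "/usr/bin/crictl", "version").Run() == nil
		})
	}
	fmt.Println("cri-o ready:", err == nil)
}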
	I0415 23:39:08.109374   15850 ssh_runner.go:195] Run: crio --version
	I0415 23:39:08.148484   15850 ssh_runner.go:195] Run: crio --version
	I0415 23:39:08.189258   15850 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0415 23:39:08.191126   15850 main.go:141] libmachine: (addons-045739) Calling .GetIP
	I0415 23:39:08.194816   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:08.195588   15850 main.go:141] libmachine: (addons-045739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:76:ed", ip: ""} in network mk-addons-045739: {Iface:virbr1 ExpiryTime:2024-04-16 00:38:55 +0000 UTC Type:0 Mac:52:54:00:f7:76:ed Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-045739 Clientid:01:52:54:00:f7:76:ed}
	I0415 23:39:08.195633   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined IP address 192.168.39.182 and MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:08.195978   15850 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0415 23:39:08.202054   15850 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0415 23:39:08.218914   15850 kubeadm.go:877] updating cluster {Name:addons-045739 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.
3 ClusterName:addons-045739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.182 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0415 23:39:08.219117   15850 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0415 23:39:08.219206   15850 ssh_runner.go:195] Run: sudo crictl images --output json
	I0415 23:39:08.265391   15850 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0415 23:39:08.265511   15850 ssh_runner.go:195] Run: which lz4
	I0415 23:39:08.271184   15850 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0415 23:39:08.277278   15850 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0415 23:39:08.277337   15850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0415 23:39:10.265056   15850 crio.go:462] duration metric: took 1.993922751s to copy over tarball
	I0415 23:39:10.265134   15850 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0415 23:39:13.314950   15850 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.049788143s)
	I0415 23:39:13.314999   15850 crio.go:469] duration metric: took 3.049911321s to extract the tarball
	I0415 23:39:13.315008   15850 ssh_runner.go:146] rm: /preloaded.tar.lz4
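Note: because the expected kube-apiserver image was not in the runtime yet, the preloaded image tarball is copied in and unpacked under /var with tar's lz4 filter, preserving xattrs and file capabilities, then removed. A sketch of the extraction step (paths are the ones from the log; needs root and the lz4 binary):

// preload_extract_sketch.go: unpack the preload tarball the same way the log does.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"
	if _, err := os.Stat(tarball); err != nil {
		fmt.Println("tarball not present; minikube scp's it from the host cache first")
		return
	}
	// Same flags as the logged command: keep xattrs/capabilities, decompress with lz4.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Println("extract failed:", err, string(out))
	}
}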
	I0415 23:39:13.362928   15850 ssh_runner.go:195] Run: sudo crictl images --output json
	I0415 23:39:13.428886   15850 crio.go:514] all images are preloaded for cri-o runtime.
	I0415 23:39:13.428919   15850 cache_images.go:84] Images are preloaded, skipping loading
	I0415 23:39:13.428929   15850 kubeadm.go:928] updating node { 192.168.39.182 8443 v1.29.3 crio true true} ...
	I0415 23:39:13.429065   15850 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-045739 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.182
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:addons-045739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
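Note: the kubelet systemd drop-in printed above is rendered from the node settings (binary path for the Kubernetes version, hostname override, node IP). A trivial Go sketch of that rendering, with the values from this run hard-coded:

// kubelet_dropin_sketch.go: render the 10-kubeadm.conf drop-in shown above.
package main

import "fmt"

func main() {
	const tmpl = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/%[1]s/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=%[2]s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%[3]s

[Install]
`
	fmt.Printf(tmpl, "v1.29.3", "addons-045739", "192.168.39.182")
}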
	I0415 23:39:13.429138   15850 ssh_runner.go:195] Run: crio config
	I0415 23:39:13.500094   15850 cni.go:84] Creating CNI manager for ""
	I0415 23:39:13.500158   15850 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0415 23:39:13.500177   15850 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0415 23:39:13.500225   15850 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.182 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-045739 NodeName:addons-045739 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.182"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.182 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0415 23:39:13.500459   15850 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.182
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-045739"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.182
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.182"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0415 23:39:13.500570   15850 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0415 23:39:13.514718   15850 binaries.go:44] Found k8s binaries, skipping transfer
	I0415 23:39:13.514845   15850 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0415 23:39:13.530168   15850 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0415 23:39:13.553361   15850 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0415 23:39:13.577519   15850 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0415 23:39:13.602135   15850 ssh_runner.go:195] Run: grep 192.168.39.182	control-plane.minikube.internal$ /etc/hosts
	I0415 23:39:13.607845   15850 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.182	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0415 23:39:13.624891   15850 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 23:39:13.788094   15850 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0415 23:39:13.817034   15850 certs.go:68] Setting up /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739 for IP: 192.168.39.182
	I0415 23:39:13.817068   15850 certs.go:194] generating shared ca certs ...
	I0415 23:39:13.817091   15850 certs.go:226] acquiring lock for ca certs: {Name:mkcfa1570e683d94647c63485e1bbb8cf0788316 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:39:13.817322   15850 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key
	I0415 23:39:14.026082   15850 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt ...
	I0415 23:39:14.026117   15850 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt: {Name:mk443d225596373645b46032bb827c12932c30e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:39:14.026307   15850 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key ...
	I0415 23:39:14.026319   15850 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key: {Name:mkd36d2bb50be39c24ecea43a12154acdefbe11d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:39:14.026410   15850 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key
	I0415 23:39:14.121635   15850 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.crt ...
	I0415 23:39:14.121670   15850 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.crt: {Name:mkca70dd1eb3a5dc28d6d6446ee118367cf1e551 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:39:14.121842   15850 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key ...
	I0415 23:39:14.121854   15850 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key: {Name:mk6681815f735bccd5bf9a968a5322dd66abe46a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:39:14.121923   15850 certs.go:256] generating profile certs ...
	I0415 23:39:14.121992   15850 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/client.key
	I0415 23:39:14.122009   15850 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/client.crt with IP's: []
	I0415 23:39:14.213641   15850 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/client.crt ...
	I0415 23:39:14.213677   15850 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/client.crt: {Name:mk0d81ea9c08b23d8482ecc23cbcaf112e1d0ff4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:39:14.213866   15850 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/client.key ...
	I0415 23:39:14.213879   15850 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/client.key: {Name:mkbf670f787e43cf293a960bc1215a568ed2f6f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:39:14.213958   15850 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/apiserver.key.70d0a6cb
	I0415 23:39:14.213979   15850 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/apiserver.crt.70d0a6cb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.182]
	I0415 23:39:14.365791   15850 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/apiserver.crt.70d0a6cb ...
	I0415 23:39:14.365837   15850 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/apiserver.crt.70d0a6cb: {Name:mk3c5cbed76836e5e723a87e5e94d654303b7ded Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:39:14.366033   15850 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/apiserver.key.70d0a6cb ...
	I0415 23:39:14.366053   15850 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/apiserver.key.70d0a6cb: {Name:mkaaf603a4b0b11519837623f14e018cd51a8c66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:39:14.366127   15850 certs.go:381] copying /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/apiserver.crt.70d0a6cb -> /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/apiserver.crt
	I0415 23:39:14.366228   15850 certs.go:385] copying /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/apiserver.key.70d0a6cb -> /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/apiserver.key
	I0415 23:39:14.366287   15850 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/proxy-client.key
	I0415 23:39:14.366305   15850 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/proxy-client.crt with IP's: []
	I0415 23:39:14.577895   15850 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/proxy-client.crt ...
	I0415 23:39:14.577960   15850 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/proxy-client.crt: {Name:mk50840ce5b0b5761d6b28cebfd014a98c0ccaf7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:39:14.578185   15850 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/proxy-client.key ...
	I0415 23:39:14.578200   15850 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/proxy-client.key: {Name:mk632b040a63542ebe5b0c570773ca493046eeca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:39:14.578421   15850 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem (1679 bytes)
	I0415 23:39:14.578470   15850 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem (1082 bytes)
	I0415 23:39:14.578504   15850 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem (1123 bytes)
	I0415 23:39:14.578540   15850 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem (1675 bytes)
	I0415 23:39:14.579252   15850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0415 23:39:14.616389   15850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0415 23:39:14.654722   15850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0415 23:39:14.692786   15850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0415 23:39:14.725086   15850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0415 23:39:14.759381   15850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0415 23:39:14.795882   15850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0415 23:39:14.829308   15850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0415 23:39:14.864409   15850 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0415 23:39:14.897303   15850 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0415 23:39:14.920364   15850 ssh_runner.go:195] Run: openssl version
	I0415 23:39:14.928587   15850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0415 23:39:14.947797   15850 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0415 23:39:14.954566   15850 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0415 23:39:14.954639   15850 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0415 23:39:14.962414   15850 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
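Note: the CA is made trusted system-wide by copying it to /usr/share/ca-certificates, asking openssl for its subject hash, and linking /etc/ssl/certs/<hash>.0 at it (b5213941.0 in this run). A sketch of that wiring (root-only, requires the openssl binary, illustrative):

// ca_hash_link_sketch.go: create the <subject-hash>.0 trust-store symlink.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const pem = "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		fmt.Println("openssl failed:", err)
		return
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941 for this CA
	link := "/etc/ssl/certs/" + hash + ".0"
	if err := os.Symlink(pem, link); err != nil && !os.IsExist(err) {
		fmt.Println("symlink failed:", err)
	}
}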
	I0415 23:39:14.977877   15850 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0415 23:39:14.984113   15850 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0415 23:39:14.984206   15850 kubeadm.go:391] StartCluster: {Name:addons-045739 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 C
lusterName:addons-045739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.182 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 23:39:14.984311   15850 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0415 23:39:14.984375   15850 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0415 23:39:15.036585   15850 cri.go:89] found id: ""
	I0415 23:39:15.036721   15850 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0415 23:39:15.050493   15850 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0415 23:39:15.066147   15850 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0415 23:39:15.081586   15850 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0415 23:39:15.081630   15850 kubeadm.go:156] found existing configuration files:
	
	I0415 23:39:15.081686   15850 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0415 23:39:15.095238   15850 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0415 23:39:15.095329   15850 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0415 23:39:15.109212   15850 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0415 23:39:15.124060   15850 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0415 23:39:15.124171   15850 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0415 23:39:15.141949   15850 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0415 23:39:15.154153   15850 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0415 23:39:15.154241   15850 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0415 23:39:15.167862   15850 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0415 23:39:15.182694   15850 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0415 23:39:15.182779   15850 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
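The cleanup above follows a simple pattern: for each kubeconfig under /etc/kubernetes, keep it only if it already points at the expected control-plane endpoint, otherwise remove it so the upcoming "kubeadm init" regenerates it. A minimal shell sketch of that check (endpoint and file names taken from the log above; illustrative, not minikube's actual code):

    # Illustrative sketch of the stale-kubeconfig check logged above; not minikube's implementation.
    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        # Keep a file only if it already references the expected API endpoint;
        # otherwise remove it so that "kubeadm init" can regenerate it.
        if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f" 2>/dev/null; then
            sudo rm -f "/etc/kubernetes/$f"
        fi
    done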
	I0415 23:39:15.196835   15850 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0415 23:39:15.262103   15850 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0415 23:39:15.262179   15850 kubeadm.go:309] [preflight] Running pre-flight checks
	I0415 23:39:15.426726   15850 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0415 23:39:15.426853   15850 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0415 23:39:15.426997   15850 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0415 23:39:15.723315   15850 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0415 23:39:15.727373   15850 out.go:204]   - Generating certificates and keys ...
	I0415 23:39:15.727517   15850 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0415 23:39:15.727630   15850 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0415 23:39:15.993983   15850 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0415 23:39:16.092583   15850 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0415 23:39:16.336800   15850 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0415 23:39:16.452091   15850 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0415 23:39:16.685709   15850 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0415 23:39:16.685943   15850 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-045739 localhost] and IPs [192.168.39.182 127.0.0.1 ::1]
	I0415 23:39:16.902679   15850 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0415 23:39:16.902913   15850 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-045739 localhost] and IPs [192.168.39.182 127.0.0.1 ::1]
	I0415 23:39:17.117621   15850 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0415 23:39:17.329798   15850 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0415 23:39:17.694199   15850 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0415 23:39:17.694319   15850 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0415 23:39:18.154826   15850 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0415 23:39:18.373632   15850 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0415 23:39:18.745098   15850 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0415 23:39:18.932800   15850 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0415 23:39:19.031526   15850 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0415 23:39:19.032143   15850 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0415 23:39:19.035000   15850 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0415 23:39:19.037336   15850 out.go:204]   - Booting up control plane ...
	I0415 23:39:19.037485   15850 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0415 23:39:19.037616   15850 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0415 23:39:19.037726   15850 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0415 23:39:19.064246   15850 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0415 23:39:19.064368   15850 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0415 23:39:19.064427   15850 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0415 23:39:19.238184   15850 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0415 23:39:26.242752   15850 kubeadm.go:309] [apiclient] All control plane components are healthy after 7.004260 seconds
	I0415 23:39:26.257604   15850 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0415 23:39:26.280724   15850 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0415 23:39:26.820344   15850 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0415 23:39:26.820546   15850 kubeadm.go:309] [mark-control-plane] Marking the node addons-045739 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0415 23:39:27.337606   15850 kubeadm.go:309] [bootstrap-token] Using token: p6946e.pukgd7xpmbn44yji
	I0415 23:39:27.339714   15850 out.go:204]   - Configuring RBAC rules ...
	I0415 23:39:27.339892   15850 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0415 23:39:27.345718   15850 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0415 23:39:27.355006   15850 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0415 23:39:27.363444   15850 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0415 23:39:27.367676   15850 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0415 23:39:27.379705   15850 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0415 23:39:27.405467   15850 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0415 23:39:27.678742   15850 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0415 23:39:27.757644   15850 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0415 23:39:27.758709   15850 kubeadm.go:309] 
	I0415 23:39:27.758773   15850 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0415 23:39:27.758809   15850 kubeadm.go:309] 
	I0415 23:39:27.758917   15850 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0415 23:39:27.758928   15850 kubeadm.go:309] 
	I0415 23:39:27.758985   15850 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0415 23:39:27.759091   15850 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0415 23:39:27.759174   15850 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0415 23:39:27.759187   15850 kubeadm.go:309] 
	I0415 23:39:27.759274   15850 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0415 23:39:27.759287   15850 kubeadm.go:309] 
	I0415 23:39:27.759368   15850 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0415 23:39:27.759388   15850 kubeadm.go:309] 
	I0415 23:39:27.759455   15850 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0415 23:39:27.759547   15850 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0415 23:39:27.759663   15850 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0415 23:39:27.759683   15850 kubeadm.go:309] 
	I0415 23:39:27.759812   15850 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0415 23:39:27.759923   15850 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0415 23:39:27.759951   15850 kubeadm.go:309] 
	I0415 23:39:27.760091   15850 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token p6946e.pukgd7xpmbn44yji \
	I0415 23:39:27.760189   15850 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b25013486716312e69a135916990864bd549ec06035b8322dd39250241b50fde \
	I0415 23:39:27.760217   15850 kubeadm.go:309] 	--control-plane 
	I0415 23:39:27.760223   15850 kubeadm.go:309] 
	I0415 23:39:27.760361   15850 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0415 23:39:27.760376   15850 kubeadm.go:309] 
	I0415 23:39:27.760489   15850 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token p6946e.pukgd7xpmbn44yji \
	I0415 23:39:27.760642   15850 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b25013486716312e69a135916990864bd549ec06035b8322dd39250241b50fde 
	I0415 23:39:27.761210   15850 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
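The only preflight warning reported is the disabled kubelet systemd unit. kubeadm's own suggestion, quoted in the warning above, is enough to clear it on a hand-provisioned host (illustrative; on the minikube ISO the kubelet unit is started by minikube itself, so this step is normally unnecessary):

    # Suggested by the kubeadm warning above; usually not required on the minikube ISO.
    sudo systemctl enable kubelet.service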
	I0415 23:39:27.761300   15850 cni.go:84] Creating CNI manager for ""
	I0415 23:39:27.761320   15850 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0415 23:39:27.763551   15850 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0415 23:39:27.765492   15850 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0415 23:39:27.795560   15850 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
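The 496-byte file copied above is minikube's bridge CNI configuration, which CRI-O picks up from /etc/cni/net.d. Its exact contents are not reproduced in the log; a quick way to see what was written is to read it back off the node (illustrative, using the profile name from this run):

    # Illustrative only: inspect the bridge CNI conflist minikube just wrote.
    minikube -p addons-045739 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist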
	I0415 23:39:27.882544   15850 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0415 23:39:27.882666   15850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:39:27.882696   15850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-045739 minikube.k8s.io/updated_at=2024_04_15T23_39_27_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=e98a202b00bb48daab8bbee2f0c7bcf1556f8388 minikube.k8s.io/name=addons-045739 minikube.k8s.io/primary=true
	I0415 23:39:28.101776   15850 ops.go:34] apiserver oom_adj: -16
	I0415 23:39:28.217913   15850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:39:28.718885   15850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:39:29.219048   15850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:39:29.718662   15850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:39:30.219179   15850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:39:30.718961   15850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:39:31.218045   15850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:39:31.718472   15850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:39:32.218584   15850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:39:32.718807   15850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:39:33.218419   15850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:39:33.718751   15850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:39:34.218657   15850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:39:34.718087   15850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:39:35.218701   15850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:39:35.718256   15850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:39:36.218659   15850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:39:36.718976   15850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:39:37.218157   15850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:39:37.718741   15850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:39:38.218987   15850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:39:38.718380   15850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:39:39.218489   15850 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:39:39.368926   15850 kubeadm.go:1107] duration metric: took 11.48637305s to wait for elevateKubeSystemPrivileges
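The repeated "kubectl get sa default" runs above are a polling loop: minikube retries about twice per second until the "default" service account exists, which signals that the controller-manager has finished bootstrapping and the minikube-rbac cluster role binding created earlier can take effect. A hand-rolled equivalent of that wait (illustrative; binary and kubeconfig paths taken from the log):

    # Illustrative wait loop equivalent to the retries logged above.
    until sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
        sleep 0.5
    done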
	W0415 23:39:39.368988   15850 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0415 23:39:39.368997   15850 kubeadm.go:393] duration metric: took 24.384796453s to StartCluster
	I0415 23:39:39.369016   15850 settings.go:142] acquiring lock: {Name:mk6e42a297b4f7bfb79727f203ae36d752cbb6a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:39:39.369220   15850 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0415 23:39:39.369705   15850 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/kubeconfig: {Name:mkbb3b028de7d57df8335e83f6dfa1b0eacb2fb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:39:39.369958   15850 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.182 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0415 23:39:39.372503   15850 out.go:177] * Verifying Kubernetes components...
	I0415 23:39:39.369976   15850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0415 23:39:39.370024   15850 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
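The toEnable map above lists every addon considered for this profile; each entry set to true is enabled concurrently in the lines that follow. The same set can be toggled one at a time from the CLI, for example (illustrative; addon names are taken from the map above):

    # Illustrative per-addon equivalents of the bulk enable performed below.
    minikube -p addons-045739 addons enable ingress
    minikube -p addons-045739 addons enable metrics-server
    minikube -p addons-045739 addons list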
	I0415 23:39:39.370213   15850 config.go:182] Loaded profile config "addons-045739": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0415 23:39:39.374559   15850 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 23:39:39.374573   15850 addons.go:69] Setting yakd=true in profile "addons-045739"
	I0415 23:39:39.374561   15850 addons.go:69] Setting cloud-spanner=true in profile "addons-045739"
	I0415 23:39:39.374604   15850 addons.go:234] Setting addon yakd=true in "addons-045739"
	I0415 23:39:39.374614   15850 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-045739"
	I0415 23:39:39.374625   15850 addons.go:69] Setting registry=true in profile "addons-045739"
	I0415 23:39:39.374630   15850 addons.go:234] Setting addon cloud-spanner=true in "addons-045739"
	I0415 23:39:39.374644   15850 host.go:66] Checking if "addons-045739" exists ...
	I0415 23:39:39.374676   15850 addons.go:69] Setting storage-provisioner=true in profile "addons-045739"
	I0415 23:39:39.374710   15850 addons.go:234] Setting addon storage-provisioner=true in "addons-045739"
	I0415 23:39:39.374765   15850 host.go:66] Checking if "addons-045739" exists ...
	I0415 23:39:39.374646   15850 addons.go:234] Setting addon registry=true in "addons-045739"
	I0415 23:39:39.374832   15850 host.go:66] Checking if "addons-045739" exists ...
	I0415 23:39:39.374622   15850 addons.go:69] Setting inspektor-gadget=true in profile "addons-045739"
	I0415 23:39:39.374944   15850 addons.go:234] Setting addon inspektor-gadget=true in "addons-045739"
	I0415 23:39:39.375030   15850 host.go:66] Checking if "addons-045739" exists ...
	I0415 23:39:39.375220   15850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:39:39.375233   15850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:39:39.375242   15850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:39:39.375251   15850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:39:39.375257   15850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:39:39.375285   15850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:39:39.374647   15850 addons.go:69] Setting ingress=true in profile "addons-045739"
	I0415 23:39:39.375428   15850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:39:39.375446   15850 addons.go:234] Setting addon ingress=true in "addons-045739"
	I0415 23:39:39.375484   15850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:39:39.375515   15850 host.go:66] Checking if "addons-045739" exists ...
	I0415 23:39:39.374654   15850 addons.go:69] Setting metrics-server=true in profile "addons-045739"
	I0415 23:39:39.375583   15850 addons.go:234] Setting addon metrics-server=true in "addons-045739"
	I0415 23:39:39.375606   15850 host.go:66] Checking if "addons-045739" exists ...
	I0415 23:39:39.374662   15850 host.go:66] Checking if "addons-045739" exists ...
	I0415 23:39:39.374584   15850 addons.go:69] Setting gcp-auth=true in profile "addons-045739"
	I0415 23:39:39.375873   15850 mustload.go:65] Loading cluster: addons-045739
	I0415 23:39:39.375953   15850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:39:39.375979   15850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:39:39.375981   15850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:39:39.376006   15850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:39:39.376085   15850 config.go:182] Loaded profile config "addons-045739": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0415 23:39:39.376247   15850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:39:39.376277   15850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:39:39.376450   15850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:39:39.374666   15850 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-045739"
	I0415 23:39:39.376483   15850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:39:39.376500   15850 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-045739"
	I0415 23:39:39.374665   15850 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-045739"
	I0415 23:39:39.376563   15850 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-045739"
	I0415 23:39:39.376593   15850 host.go:66] Checking if "addons-045739" exists ...
	I0415 23:39:39.376880   15850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:39:39.376900   15850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:39:39.374674   15850 addons.go:69] Setting ingress-dns=true in profile "addons-045739"
	I0415 23:39:39.376978   15850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:39:39.376987   15850 addons.go:234] Setting addon ingress-dns=true in "addons-045739"
	I0415 23:39:39.376999   15850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:39:39.377029   15850 host.go:66] Checking if "addons-045739" exists ...
	I0415 23:39:39.374677   15850 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-045739"
	I0415 23:39:39.377096   15850 host.go:66] Checking if "addons-045739" exists ...
	I0415 23:39:39.374679   15850 addons.go:69] Setting default-storageclass=true in profile "addons-045739"
	I0415 23:39:39.384415   15850 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-045739"
	I0415 23:39:39.374666   15850 addons.go:69] Setting helm-tiller=true in profile "addons-045739"
	I0415 23:39:39.384626   15850 addons.go:234] Setting addon helm-tiller=true in "addons-045739"
	I0415 23:39:39.384677   15850 host.go:66] Checking if "addons-045739" exists ...
	I0415 23:39:39.384952   15850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:39:39.385014   15850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:39:39.385173   15850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:39:39.385212   15850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:39:39.374687   15850 addons.go:69] Setting volumesnapshots=true in profile "addons-045739"
	I0415 23:39:39.385514   15850 addons.go:234] Setting addon volumesnapshots=true in "addons-045739"
	I0415 23:39:39.385571   15850 host.go:66] Checking if "addons-045739" exists ...
	I0415 23:39:39.385973   15850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:39:39.386007   15850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:39:39.402435   15850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44495
	I0415 23:39:39.403648   15850 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:39:39.404470   15850 main.go:141] libmachine: Using API Version  1
	I0415 23:39:39.404500   15850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:39:39.404620   15850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41231
	I0415 23:39:39.405310   15850 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:39:39.405653   15850 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:39:39.405985   15850 main.go:141] libmachine: Using API Version  1
	I0415 23:39:39.406013   15850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:39:39.406488   15850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:39:39.406524   15850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:39:39.409753   15850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:39:39.409805   15850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:39:39.409828   15850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:39:39.409849   15850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:39:39.410134   15850 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:39:39.410393   15850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45223
	I0415 23:39:39.411038   15850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:39:39.411114   15850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:39:39.411275   15850 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:39:39.411456   15850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37915
	I0415 23:39:39.411653   15850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46393
	I0415 23:39:39.411916   15850 main.go:141] libmachine: Using API Version  1
	I0415 23:39:39.411937   15850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:39:39.412019   15850 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:39:39.412115   15850 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:39:39.412583   15850 main.go:141] libmachine: Using API Version  1
	I0415 23:39:39.412604   15850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:39:39.412755   15850 main.go:141] libmachine: Using API Version  1
	I0415 23:39:39.412768   15850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:39:39.413085   15850 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:39:39.413222   15850 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:39:39.413693   15850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:39:39.413714   15850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:39:39.414354   15850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:39:39.414390   15850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:39:39.417756   15850 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:39:39.418476   15850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:39:39.418538   15850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:39:39.430703   15850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45881
	I0415 23:39:39.431285   15850 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:39:39.431974   15850 main.go:141] libmachine: Using API Version  1
	I0415 23:39:39.432003   15850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:39:39.432395   15850 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:39:39.432615   15850 main.go:141] libmachine: (addons-045739) Calling .GetState
	I0415 23:39:39.436800   15850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45853
	I0415 23:39:39.437428   15850 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:39:39.438080   15850 main.go:141] libmachine: Using API Version  1
	I0415 23:39:39.438098   15850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:39:39.438770   15850 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:39:39.439567   15850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:39:39.439610   15850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:39:39.441266   15850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39449
	I0415 23:39:39.441475   15850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33031
	I0415 23:39:39.441639   15850 main.go:141] libmachine: (addons-045739) Calling .DriverName
	I0415 23:39:39.441791   15850 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:39:39.445497   15850 out.go:177]   - Using image docker.io/registry:2.8.3
	I0415 23:39:39.442081   15850 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:39:39.442347   15850 main.go:141] libmachine: Using API Version  1
	I0415 23:39:39.447605   15850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:39:39.449426   15850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42737
	I0415 23:39:39.449456   15850 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0415 23:39:39.449065   15850 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:39:39.449111   15850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39413
	I0415 23:39:39.451329   15850 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0415 23:39:39.451349   15850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0415 23:39:39.451383   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHHostname
	I0415 23:39:39.448383   15850 main.go:141] libmachine: Using API Version  1
	I0415 23:39:39.450543   15850 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:39:39.451460   15850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:39:39.451752   15850 main.go:141] libmachine: (addons-045739) Calling .GetState
	I0415 23:39:39.452769   15850 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:39:39.452968   15850 main.go:141] libmachine: Using API Version  1
	I0415 23:39:39.452992   15850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:39:39.453678   15850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:39:39.453730   15850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:39:39.453970   15850 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:39:39.454191   15850 main.go:141] libmachine: (addons-045739) Calling .GetState
	I0415 23:39:39.454840   15850 host.go:66] Checking if "addons-045739" exists ...
	I0415 23:39:39.454877   15850 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:39:39.455267   15850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:39:39.455330   15850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:39:39.455418   15850 main.go:141] libmachine: Using API Version  1
	I0415 23:39:39.455440   15850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:39:39.455901   15850 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:39:39.456510   15850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:39:39.456549   15850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:39:39.456965   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:39.457510   15850 main.go:141] libmachine: (addons-045739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:76:ed", ip: ""} in network mk-addons-045739: {Iface:virbr1 ExpiryTime:2024-04-16 00:38:55 +0000 UTC Type:0 Mac:52:54:00:f7:76:ed Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-045739 Clientid:01:52:54:00:f7:76:ed}
	I0415 23:39:39.457542   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined IP address 192.168.39.182 and MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:39.457864   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHPort
	I0415 23:39:39.458085   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHKeyPath
	I0415 23:39:39.458278   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHUsername
	I0415 23:39:39.458446   15850 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/addons-045739/id_rsa Username:docker}
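Each "new ssh client" line records the connection parameters minikube reuses to push addon manifests onto the node. They map directly onto a plain ssh invocation, which can help when reproducing a failed copy by hand (illustrative; key path, user, and IP are taken from the log line above):

    # Manual equivalent of the ssh client created above (debugging aid only).
    ssh -i /home/jenkins/minikube-integration/18647-7542/.minikube/machines/addons-045739/id_rsa \
        docker@192.168.39.182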
	I0415 23:39:39.461013   15850 addons.go:234] Setting addon default-storageclass=true in "addons-045739"
	I0415 23:39:39.461070   15850 host.go:66] Checking if "addons-045739" exists ...
	I0415 23:39:39.461540   15850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:39:39.461577   15850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:39:39.470421   15850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37951
	I0415 23:39:39.472011   15850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37287
	I0415 23:39:39.472636   15850 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:39:39.473297   15850 main.go:141] libmachine: Using API Version  1
	I0415 23:39:39.473318   15850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:39:39.473755   15850 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:39:39.474437   15850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:39:39.474479   15850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:39:39.474722   15850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33515
	I0415 23:39:39.475260   15850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44661
	I0415 23:39:39.475987   15850 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:39:39.476594   15850 main.go:141] libmachine: Using API Version  1
	I0415 23:39:39.476613   15850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:39:39.477005   15850 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:39:39.477224   15850 main.go:141] libmachine: (addons-045739) Calling .GetState
	I0415 23:39:39.477927   15850 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:39:39.477973   15850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33549
	I0415 23:39:39.477994   15850 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:39:39.478372   15850 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:39:39.478602   15850 main.go:141] libmachine: Using API Version  1
	I0415 23:39:39.478721   15850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:39:39.478605   15850 main.go:141] libmachine: Using API Version  1
	I0415 23:39:39.478761   15850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:39:39.479584   15850 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:39:39.480019   15850 main.go:141] libmachine: Using API Version  1
	I0415 23:39:39.480041   15850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:39:39.480244   15850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:39:39.480305   15850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:39:39.481060   15850 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:39:39.481336   15850 main.go:141] libmachine: (addons-045739) Calling .GetState
	I0415 23:39:39.481568   15850 main.go:141] libmachine: (addons-045739) Calling .DriverName
	I0415 23:39:39.481627   15850 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:39:39.484494   15850 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0415 23:39:39.482249   15850 main.go:141] libmachine: (addons-045739) Calling .GetState
	I0415 23:39:39.483705   15850 main.go:141] libmachine: (addons-045739) Calling .DriverName
	I0415 23:39:39.488173   15850 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0415 23:39:39.489849   15850 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-045739"
	I0415 23:39:39.492826   15850 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0415 23:39:39.491216   15850 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.15
	I0415 23:39:39.491264   15850 host.go:66] Checking if "addons-045739" exists ...
	I0415 23:39:39.492019   15850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46521
	I0415 23:39:39.492273   15850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36765
	I0415 23:39:39.494835   15850 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0415 23:39:39.494852   15850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0415 23:39:39.494878   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHHostname
	I0415 23:39:39.497400   15850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44379
	I0415 23:39:39.497488   15850 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0415 23:39:39.497504   15850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0415 23:39:39.497525   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHHostname
	I0415 23:39:39.495442   15850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:39:39.495691   15850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42285
	I0415 23:39:39.497602   15850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:39:39.496248   15850 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:39:39.496632   15850 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:39:39.498375   15850 main.go:141] libmachine: Using API Version  1
	I0415 23:39:39.498399   15850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:39:39.498501   15850 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:39:39.498658   15850 main.go:141] libmachine: Using API Version  1
	I0415 23:39:39.498672   15850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:39:39.498941   15850 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:39:39.499131   15850 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:39:39.499208   15850 main.go:141] libmachine: (addons-045739) Calling .GetState
	I0415 23:39:39.499279   15850 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:39:39.500117   15850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45843
	I0415 23:39:39.500276   15850 main.go:141] libmachine: Using API Version  1
	I0415 23:39:39.500296   15850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:39:39.500276   15850 main.go:141] libmachine: Using API Version  1
	I0415 23:39:39.500352   15850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:39:39.500686   15850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:39:39.500735   15850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:39:39.500760   15850 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:39:39.500791   15850 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:39:39.501099   15850 main.go:141] libmachine: (addons-045739) Calling .DriverName
	I0415 23:39:39.501660   15850 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:39:39.501715   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:39.501841   15850 main.go:141] libmachine: Using API Version  1
	I0415 23:39:39.501854   15850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:39:39.501921   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:39.502199   15850 main.go:141] libmachine: (addons-045739) Calling .GetState
	I0415 23:39:39.502267   15850 main.go:141] libmachine: (addons-045739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:76:ed", ip: ""} in network mk-addons-045739: {Iface:virbr1 ExpiryTime:2024-04-16 00:38:55 +0000 UTC Type:0 Mac:52:54:00:f7:76:ed Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-045739 Clientid:01:52:54:00:f7:76:ed}
	I0415 23:39:39.502283   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined IP address 192.168.39.182 and MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:39.502313   15850 main.go:141] libmachine: (addons-045739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:76:ed", ip: ""} in network mk-addons-045739: {Iface:virbr1 ExpiryTime:2024-04-16 00:38:55 +0000 UTC Type:0 Mac:52:54:00:f7:76:ed Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-045739 Clientid:01:52:54:00:f7:76:ed}
	I0415 23:39:39.502326   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined IP address 192.168.39.182 and MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:39.502537   15850 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:39:39.502596   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHPort
	I0415 23:39:39.502689   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHPort
	I0415 23:39:39.502745   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHKeyPath
	I0415 23:39:39.502909   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHKeyPath
	I0415 23:39:39.502960   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHUsername
	I0415 23:39:39.503120   15850 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/addons-045739/id_rsa Username:docker}
	I0415 23:39:39.503425   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHUsername
	I0415 23:39:39.503931   15850 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/addons-045739/id_rsa Username:docker}
	I0415 23:39:39.504733   15850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:39:39.504765   15850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:39:39.504852   15850 main.go:141] libmachine: (addons-045739) Calling .DriverName
	I0415 23:39:39.504936   15850 main.go:141] libmachine: (addons-045739) Calling .DriverName
	I0415 23:39:39.507965   15850 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0415 23:39:39.509925   15850 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.27.0
	I0415 23:39:39.511541   15850 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0415 23:39:39.511563   15850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0415 23:39:39.511595   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHHostname
	I0415 23:39:39.509891   15850 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0415 23:39:39.511689   15850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0415 23:39:39.511701   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHHostname
	I0415 23:39:39.517771   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:39.518707   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:39.518998   15850 main.go:141] libmachine: (addons-045739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:76:ed", ip: ""} in network mk-addons-045739: {Iface:virbr1 ExpiryTime:2024-04-16 00:38:55 +0000 UTC Type:0 Mac:52:54:00:f7:76:ed Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-045739 Clientid:01:52:54:00:f7:76:ed}
	I0415 23:39:39.519034   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined IP address 192.168.39.182 and MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:39.519245   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHPort
	I0415 23:39:39.519452   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHKeyPath
	I0415 23:39:39.519714   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHUsername
	I0415 23:39:39.519906   15850 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/addons-045739/id_rsa Username:docker}
	I0415 23:39:39.520253   15850 main.go:141] libmachine: (addons-045739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:76:ed", ip: ""} in network mk-addons-045739: {Iface:virbr1 ExpiryTime:2024-04-16 00:38:55 +0000 UTC Type:0 Mac:52:54:00:f7:76:ed Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-045739 Clientid:01:52:54:00:f7:76:ed}
	I0415 23:39:39.520272   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined IP address 192.168.39.182 and MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:39.520373   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHPort
	I0415 23:39:39.520583   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHKeyPath
	I0415 23:39:39.520748   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHUsername
	I0415 23:39:39.520901   15850 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/addons-045739/id_rsa Username:docker}
	I0415 23:39:39.521658   15850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39563
	I0415 23:39:39.522668   15850 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:39:39.523278   15850 main.go:141] libmachine: Using API Version  1
	I0415 23:39:39.523297   15850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:39:39.523747   15850 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:39:39.523990   15850 main.go:141] libmachine: (addons-045739) Calling .GetState
	I0415 23:39:39.524399   15850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34815
	I0415 23:39:39.524907   15850 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:39:39.525462   15850 main.go:141] libmachine: Using API Version  1
	I0415 23:39:39.525489   15850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:39:39.525900   15850 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:39:39.526263   15850 main.go:141] libmachine: (addons-045739) Calling .GetState
	I0415 23:39:39.526937   15850 main.go:141] libmachine: (addons-045739) Calling .DriverName
	I0415 23:39:39.530133   15850 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0415 23:39:39.527627   15850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42413
	I0415 23:39:39.528787   15850 main.go:141] libmachine: (addons-045739) Calling .DriverName
	I0415 23:39:39.530288   15850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41855
	I0415 23:39:39.531671   15850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35513
	I0415 23:39:39.534618   15850 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0415 23:39:39.533020   15850 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:39:39.533258   15850 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:39:39.533308   15850 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:39:39.533542   15850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40259
	I0415 23:39:39.538366   15850 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0415 23:39:39.536701   15850 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0415 23:39:39.537347   15850 main.go:141] libmachine: Using API Version  1
	I0415 23:39:39.537382   15850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34343
	I0415 23:39:39.537522   15850 main.go:141] libmachine: Using API Version  1
	I0415 23:39:39.537855   15850 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:39:39.537870   15850 main.go:141] libmachine: Using API Version  1
	I0415 23:39:39.540189   15850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:39:39.542664   15850 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0415 23:39:39.540531   15850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:39:39.540549   15850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:39:39.540978   15850 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:39:39.541213   15850 main.go:141] libmachine: Using API Version  1
	I0415 23:39:39.541250   15850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34283
	I0415 23:39:39.541402   15850 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:39:39.546746   15850 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0415 23:39:39.544688   15850 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0415 23:39:39.544861   15850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:39:39.544919   15850 main.go:141] libmachine: (addons-045739) Calling .GetState
	I0415 23:39:39.545233   15850 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:39:39.545305   15850 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:39:39.546039   15850 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:39:39.546080   15850 main.go:141] libmachine: Using API Version  1
	I0415 23:39:39.548492   15850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:39:39.550492   15850 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0415 23:39:39.548577   15850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0415 23:39:39.549384   15850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:39:39.549437   15850 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:39:39.549444   15850 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:39:39.549637   15850 main.go:141] libmachine: (addons-045739) Calling .GetState
	I0415 23:39:39.549877   15850 main.go:141] libmachine: Using API Version  1
	I0415 23:39:39.550970   15850 main.go:141] libmachine: (addons-045739) Calling .DriverName
	I0415 23:39:39.551842   15850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36485
	I0415 23:39:39.554120   15850 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0415 23:39:39.552483   15850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:39:39.552505   15850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:39:39.552515   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHHostname
	I0415 23:39:39.552673   15850 main.go:141] libmachine: (addons-045739) Calling .GetState
	I0415 23:39:39.552868   15850 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:39:39.553055   15850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:39:39.554597   15850 main.go:141] libmachine: (addons-045739) Calling .DriverName
	I0415 23:39:39.557501   15850 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:39:39.557657   15850 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0415 23:39:39.557705   15850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:39:39.558435   15850 main.go:141] libmachine: Using API Version  1
	I0415 23:39:39.559081   15850 main.go:141] libmachine: (addons-045739) Calling .DriverName
	I0415 23:39:39.559347   15850 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0415 23:39:39.559638   15850 main.go:141] libmachine: (addons-045739) Calling .GetState
	I0415 23:39:39.560794   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:39.561357   15850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:39:39.561375   15850 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0415 23:39:39.561682   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHPort
	I0415 23:39:39.563908   15850 main.go:141] libmachine: (addons-045739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:76:ed", ip: ""} in network mk-addons-045739: {Iface:virbr1 ExpiryTime:2024-04-16 00:38:55 +0000 UTC Type:0 Mac:52:54:00:f7:76:ed Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-045739 Clientid:01:52:54:00:f7:76:ed}
	I0415 23:39:39.563980   15850 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0415 23:39:39.564626   15850 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:39:39.565455   15850 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0415 23:39:39.565702   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHKeyPath
	I0415 23:39:39.566534   15850 main.go:141] libmachine: (addons-045739) Calling .DriverName
	I0415 23:39:39.567249   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined IP address 192.168.39.182 and MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:39.567266   15850 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0415 23:39:39.567298   15850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0415 23:39:39.569024   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHHostname
	I0415 23:39:39.569210   15850 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0415 23:39:39.569230   15850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0415 23:39:39.569248   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHHostname
	I0415 23:39:39.569348   15850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0415 23:39:39.569369   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHHostname
	I0415 23:39:39.571523   15850 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0415 23:39:39.571553   15850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0415 23:39:39.571574   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHHostname
	I0415 23:39:39.573599   15850 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0415 23:39:39.569779   15850 main.go:141] libmachine: (addons-045739) Calling .GetState
	I0415 23:39:39.569810   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHUsername
	I0415 23:39:39.574425   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:39.575142   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:39.575388   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHPort
	I0415 23:39:39.575712   15850 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0415 23:39:39.575731   15850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0415 23:39:39.575752   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHHostname
	I0415 23:39:39.575874   15850 main.go:141] libmachine: (addons-045739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:76:ed", ip: ""} in network mk-addons-045739: {Iface:virbr1 ExpiryTime:2024-04-16 00:38:55 +0000 UTC Type:0 Mac:52:54:00:f7:76:ed Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-045739 Clientid:01:52:54:00:f7:76:ed}
	I0415 23:39:39.575899   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined IP address 192.168.39.182 and MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:39.576132   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHKeyPath
	I0415 23:39:39.576159   15850 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/addons-045739/id_rsa Username:docker}
	I0415 23:39:39.576203   15850 main.go:141] libmachine: (addons-045739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:76:ed", ip: ""} in network mk-addons-045739: {Iface:virbr1 ExpiryTime:2024-04-16 00:38:55 +0000 UTC Type:0 Mac:52:54:00:f7:76:ed Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-045739 Clientid:01:52:54:00:f7:76:ed}
	I0415 23:39:39.576219   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined IP address 192.168.39.182 and MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:39.576324   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHUsername
	I0415 23:39:39.577234   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:39.577288   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:39.577330   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHPort
	I0415 23:39:39.577408   15850 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/addons-045739/id_rsa Username:docker}
	I0415 23:39:39.577724   15850 main.go:141] libmachine: (addons-045739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:76:ed", ip: ""} in network mk-addons-045739: {Iface:virbr1 ExpiryTime:2024-04-16 00:38:55 +0000 UTC Type:0 Mac:52:54:00:f7:76:ed Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-045739 Clientid:01:52:54:00:f7:76:ed}
	I0415 23:39:39.577742   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined IP address 192.168.39.182 and MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:39.578006   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHKeyPath
	I0415 23:39:39.578005   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHPort
	I0415 23:39:39.578381   15850 main.go:141] libmachine: (addons-045739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:76:ed", ip: ""} in network mk-addons-045739: {Iface:virbr1 ExpiryTime:2024-04-16 00:38:55 +0000 UTC Type:0 Mac:52:54:00:f7:76:ed Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-045739 Clientid:01:52:54:00:f7:76:ed}
	I0415 23:39:39.578417   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined IP address 192.168.39.182 and MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:39.578449   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHUsername
	I0415 23:39:39.578497   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHKeyPath
	I0415 23:39:39.578548   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHPort
	I0415 23:39:39.578811   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHKeyPath
	I0415 23:39:39.578809   15850 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/addons-045739/id_rsa Username:docker}
	I0415 23:39:39.578870   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHUsername
	I0415 23:39:39.579004   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHUsername
	I0415 23:39:39.579133   15850 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/addons-045739/id_rsa Username:docker}
	I0415 23:39:39.579270   15850 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/addons-045739/id_rsa Username:docker}
	I0415 23:39:39.579423   15850 main.go:141] libmachine: (addons-045739) Calling .DriverName
	I0415 23:39:39.581846   15850 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0415 23:39:39.579846   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:39.580893   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHPort
	I0415 23:39:39.583424   15850 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0415 23:39:39.583441   15850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0415 23:39:39.583455   15850 main.go:141] libmachine: (addons-045739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:76:ed", ip: ""} in network mk-addons-045739: {Iface:virbr1 ExpiryTime:2024-04-16 00:38:55 +0000 UTC Type:0 Mac:52:54:00:f7:76:ed Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-045739 Clientid:01:52:54:00:f7:76:ed}
	I0415 23:39:39.583459   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHHostname
	I0415 23:39:39.583481   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined IP address 192.168.39.182 and MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:39.584066   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHKeyPath
	I0415 23:39:39.584355   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHUsername
	I0415 23:39:39.584519   15850 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/addons-045739/id_rsa Username:docker}
	I0415 23:39:39.586267   15850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44609
	I0415 23:39:39.586796   15850 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:39:39.587063   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:39.587506   15850 main.go:141] libmachine: Using API Version  1
	I0415 23:39:39.587522   15850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:39:39.587592   15850 main.go:141] libmachine: (addons-045739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:76:ed", ip: ""} in network mk-addons-045739: {Iface:virbr1 ExpiryTime:2024-04-16 00:38:55 +0000 UTC Type:0 Mac:52:54:00:f7:76:ed Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-045739 Clientid:01:52:54:00:f7:76:ed}
	I0415 23:39:39.587607   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined IP address 192.168.39.182 and MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:39.587809   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHPort
	I0415 23:39:39.587994   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHKeyPath
	I0415 23:39:39.588049   15850 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:39:39.588126   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHUsername
	I0415 23:39:39.588228   15850 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/addons-045739/id_rsa Username:docker}
	I0415 23:39:39.588725   15850 main.go:141] libmachine: (addons-045739) Calling .GetState
	I0415 23:39:39.590900   15850 main.go:141] libmachine: (addons-045739) Calling .DriverName
	W0415 23:39:39.591154   15850 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:40812->192.168.39.182:22: read: connection reset by peer
	I0415 23:39:39.591188   15850 retry.go:31] will retry after 213.954585ms: ssh: handshake failed: read tcp 192.168.39.1:40812->192.168.39.182:22: read: connection reset by peer
	I0415 23:39:39.593799   15850 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0415 23:39:39.592326   15850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37183
	I0415 23:39:39.598654   15850 out.go:177]   - Using image docker.io/busybox:stable
	I0415 23:39:39.596890   15850 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:39:39.601061   15850 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0415 23:39:39.601076   15850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0415 23:39:39.601104   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHHostname
	I0415 23:39:39.601600   15850 main.go:141] libmachine: Using API Version  1
	I0415 23:39:39.601627   15850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:39:39.602104   15850 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:39:39.602355   15850 main.go:141] libmachine: (addons-045739) Calling .GetState
	I0415 23:39:39.606106   15850 main.go:141] libmachine: (addons-045739) Calling .DriverName
	I0415 23:39:39.606543   15850 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0415 23:39:39.606567   15850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0415 23:39:39.606591   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHHostname
	I0415 23:39:39.606799   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:39.607626   15850 main.go:141] libmachine: (addons-045739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:76:ed", ip: ""} in network mk-addons-045739: {Iface:virbr1 ExpiryTime:2024-04-16 00:38:55 +0000 UTC Type:0 Mac:52:54:00:f7:76:ed Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-045739 Clientid:01:52:54:00:f7:76:ed}
	I0415 23:39:39.607661   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined IP address 192.168.39.182 and MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:39.607915   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHPort
	I0415 23:39:39.608323   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHKeyPath
	I0415 23:39:39.608716   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHUsername
	I0415 23:39:39.609015   15850 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/addons-045739/id_rsa Username:docker}
	I0415 23:39:39.611102   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:39.611525   15850 main.go:141] libmachine: (addons-045739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:76:ed", ip: ""} in network mk-addons-045739: {Iface:virbr1 ExpiryTime:2024-04-16 00:38:55 +0000 UTC Type:0 Mac:52:54:00:f7:76:ed Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-045739 Clientid:01:52:54:00:f7:76:ed}
	I0415 23:39:39.611548   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined IP address 192.168.39.182 and MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:39.611793   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHPort
	I0415 23:39:39.612032   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHKeyPath
	I0415 23:39:39.612199   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHUsername
	I0415 23:39:39.612406   15850 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/addons-045739/id_rsa Username:docker}
	I0415 23:39:39.750537   15850 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W0415 23:39:39.806928   15850 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:40856->192.168.39.182:22: read: connection reset by peer
	I0415 23:39:39.806979   15850 retry.go:31] will retry after 502.878463ms: ssh: handshake failed: read tcp 192.168.39.1:40856->192.168.39.182:22: read: connection reset by peer
	I0415 23:39:40.102433   15850 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0415 23:39:40.102472   15850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0415 23:39:40.142363   15850 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0415 23:39:40.167350   15850 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0415 23:39:40.167381   15850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0415 23:39:40.174440   15850 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0415 23:39:40.184820   15850 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0415 23:39:40.184845   15850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0415 23:39:40.203653   15850 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0415 23:39:40.203688   15850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0415 23:39:40.206537   15850 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0415 23:39:40.250051   15850 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0415 23:39:40.262697   15850 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0415 23:39:40.294235   15850 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0415 23:39:40.294262   15850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0415 23:39:40.356113   15850 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0415 23:39:40.379868   15850 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0415 23:39:40.379908   15850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0415 23:39:40.415188   15850 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0415 23:39:40.417209   15850 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0415 23:39:40.417238   15850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0415 23:39:40.421731   15850 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0415 23:39:40.421756   15850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0415 23:39:40.452563   15850 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0415 23:39:40.452604   15850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0415 23:39:40.460998   15850 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0415 23:39:40.461037   15850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0415 23:39:40.525567   15850 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.151077848s)
	I0415 23:39:40.525731   15850 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
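The sed pipeline above rewrites the coredns ConfigMap in place: it inserts a hosts stanza ahead of the forward plugin so that host.minikube.internal resolves to the host-side gateway (192.168.39.1 in this run) and adds the log plugin before errors. Reconstructed from the sed expressions rather than captured from the cluster, the resulting Corefile fragment should look roughly like this:

	hosts {
	   192.168.39.1 host.minikube.internal
	   fallthrough
	}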
	I0415 23:39:40.549957   15850 node_ready.go:35] waiting up to 6m0s for node "addons-045739" to be "Ready" ...
	I0415 23:39:40.557140   15850 node_ready.go:49] node "addons-045739" has status "Ready":"True"
	I0415 23:39:40.557195   15850 node_ready.go:38] duration metric: took 7.199391ms for node "addons-045739" to be "Ready" ...
	I0415 23:39:40.557210   15850 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0415 23:39:40.571168   15850 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-4gr4t" in "kube-system" namespace to be "Ready" ...
	I0415 23:39:40.705584   15850 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0415 23:39:40.705621   15850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0415 23:39:40.712216   15850 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0415 23:39:40.712243   15850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0415 23:39:40.721247   15850 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0415 23:39:40.721283   15850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0415 23:39:40.857058   15850 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0415 23:39:40.857089   15850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0415 23:39:40.891283   15850 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0415 23:39:40.910857   15850 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0415 23:39:40.910902   15850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0415 23:39:41.034984   15850 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0415 23:39:41.035020   15850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0415 23:39:41.041497   15850 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0415 23:39:41.041526   15850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0415 23:39:41.095949   15850 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0415 23:39:41.095978   15850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0415 23:39:41.137110   15850 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0415 23:39:41.137138   15850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0415 23:39:41.175082   15850 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0415 23:39:41.315517   15850 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0415 23:39:41.315552   15850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0415 23:39:41.452895   15850 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0415 23:39:41.452925   15850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0415 23:39:41.453883   15850 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0415 23:39:41.453904   15850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0415 23:39:41.463666   15850 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0415 23:39:41.463693   15850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0415 23:39:41.500513   15850 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0415 23:39:41.500544   15850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0415 23:39:41.734983   15850 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0415 23:39:41.778028   15850 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0415 23:39:41.778061   15850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0415 23:39:41.876747   15850 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0415 23:39:41.899464   15850 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0415 23:39:41.899508   15850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0415 23:39:42.007879   15850 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0415 23:39:42.007909   15850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0415 23:39:42.453217   15850 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0415 23:39:42.456315   15850 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0415 23:39:42.456339   15850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0415 23:39:42.501329   15850 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0415 23:39:42.501355   15850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0415 23:39:42.579201   15850 pod_ready.go:102] pod "coredns-76f75df574-4gr4t" in "kube-system" namespace has status "Ready":"False"
	I0415 23:39:42.766826   15850 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0415 23:39:42.766858   15850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0415 23:39:42.813116   15850 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0415 23:39:43.180114   15850 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0415 23:39:43.180152   15850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0415 23:39:43.346811   15850 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0415 23:39:43.346839   15850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0415 23:39:43.816783   15850 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0415 23:39:43.816819   15850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0415 23:39:44.235684   15850 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0415 23:39:44.625447   15850 pod_ready.go:102] pod "coredns-76f75df574-4gr4t" in "kube-system" namespace has status "Ready":"False"
	I0415 23:39:46.370846   15850 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0415 23:39:46.370892   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHHostname
	I0415 23:39:46.375208   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:46.375789   15850 main.go:141] libmachine: (addons-045739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:76:ed", ip: ""} in network mk-addons-045739: {Iface:virbr1 ExpiryTime:2024-04-16 00:38:55 +0000 UTC Type:0 Mac:52:54:00:f7:76:ed Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-045739 Clientid:01:52:54:00:f7:76:ed}
	I0415 23:39:46.375821   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined IP address 192.168.39.182 and MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:46.376132   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHPort
	I0415 23:39:46.376406   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHKeyPath
	I0415 23:39:46.376613   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHUsername
	I0415 23:39:46.376813   15850 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/addons-045739/id_rsa Username:docker}
	I0415 23:39:46.597880   15850 pod_ready.go:92] pod "coredns-76f75df574-4gr4t" in "kube-system" namespace has status "Ready":"True"
	I0415 23:39:46.597910   15850 pod_ready.go:81] duration metric: took 6.026706045s for pod "coredns-76f75df574-4gr4t" in "kube-system" namespace to be "Ready" ...
	I0415 23:39:46.597922   15850 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-kmkgw" in "kube-system" namespace to be "Ready" ...
	I0415 23:39:46.616858   15850 pod_ready.go:92] pod "coredns-76f75df574-kmkgw" in "kube-system" namespace has status "Ready":"True"
	I0415 23:39:46.616886   15850 pod_ready.go:81] duration metric: took 18.95729ms for pod "coredns-76f75df574-kmkgw" in "kube-system" namespace to be "Ready" ...
	I0415 23:39:46.616898   15850 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-045739" in "kube-system" namespace to be "Ready" ...
	I0415 23:39:46.632455   15850 pod_ready.go:92] pod "etcd-addons-045739" in "kube-system" namespace has status "Ready":"True"
	I0415 23:39:46.632482   15850 pod_ready.go:81] duration metric: took 15.577154ms for pod "etcd-addons-045739" in "kube-system" namespace to be "Ready" ...
	I0415 23:39:46.632492   15850 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-045739" in "kube-system" namespace to be "Ready" ...
	I0415 23:39:46.645873   15850 pod_ready.go:92] pod "kube-apiserver-addons-045739" in "kube-system" namespace has status "Ready":"True"
	I0415 23:39:46.645916   15850 pod_ready.go:81] duration metric: took 13.416218ms for pod "kube-apiserver-addons-045739" in "kube-system" namespace to be "Ready" ...
	I0415 23:39:46.645931   15850 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-045739" in "kube-system" namespace to be "Ready" ...
	I0415 23:39:46.673753   15850 pod_ready.go:92] pod "kube-controller-manager-addons-045739" in "kube-system" namespace has status "Ready":"True"
	I0415 23:39:46.673784   15850 pod_ready.go:81] duration metric: took 27.844956ms for pod "kube-controller-manager-addons-045739" in "kube-system" namespace to be "Ready" ...
	I0415 23:39:46.673795   15850 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dvj6w" in "kube-system" namespace to be "Ready" ...
	I0415 23:39:46.764609   15850 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.622194628s)
	I0415 23:39:46.764648   15850 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.590177014s)
	I0415 23:39:46.764674   15850 main.go:141] libmachine: Making call to close driver server
	I0415 23:39:46.764682   15850 main.go:141] libmachine: Making call to close driver server
	I0415 23:39:46.764686   15850 main.go:141] libmachine: (addons-045739) Calling .Close
	I0415 23:39:46.764695   15850 main.go:141] libmachine: (addons-045739) Calling .Close
	I0415 23:39:46.764991   15850 main.go:141] libmachine: (addons-045739) DBG | Closing plugin on server side
	I0415 23:39:46.765181   15850 main.go:141] libmachine: Successfully made call to close driver server
	I0415 23:39:46.765215   15850 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 23:39:46.765216   15850 main.go:141] libmachine: Successfully made call to close driver server
	I0415 23:39:46.765231   15850 main.go:141] libmachine: Making call to close driver server
	I0415 23:39:46.765235   15850 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 23:39:46.765240   15850 main.go:141] libmachine: (addons-045739) Calling .Close
	I0415 23:39:46.765245   15850 main.go:141] libmachine: Making call to close driver server
	I0415 23:39:46.765222   15850 main.go:141] libmachine: (addons-045739) DBG | Closing plugin on server side
	I0415 23:39:46.765255   15850 main.go:141] libmachine: (addons-045739) Calling .Close
	I0415 23:39:46.765641   15850 main.go:141] libmachine: (addons-045739) DBG | Closing plugin on server side
	I0415 23:39:46.765649   15850 main.go:141] libmachine: (addons-045739) DBG | Closing plugin on server side
	I0415 23:39:46.765685   15850 main.go:141] libmachine: Successfully made call to close driver server
	I0415 23:39:46.765706   15850 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 23:39:46.765689   15850 main.go:141] libmachine: Successfully made call to close driver server
	I0415 23:39:46.765728   15850 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 23:39:47.010861   15850 pod_ready.go:92] pod "kube-proxy-dvj6w" in "kube-system" namespace has status "Ready":"True"
	I0415 23:39:47.010906   15850 pod_ready.go:81] duration metric: took 337.102166ms for pod "kube-proxy-dvj6w" in "kube-system" namespace to be "Ready" ...
	I0415 23:39:47.010922   15850 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-045739" in "kube-system" namespace to be "Ready" ...
	I0415 23:39:47.381253   15850 pod_ready.go:92] pod "kube-scheduler-addons-045739" in "kube-system" namespace has status "Ready":"True"
	I0415 23:39:47.381283   15850 pod_ready.go:81] duration metric: took 370.352383ms for pod "kube-scheduler-addons-045739" in "kube-system" namespace to be "Ready" ...
	I0415 23:39:47.381294   15850 pod_ready.go:38] duration metric: took 6.824070958s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0415 23:39:47.381316   15850 api_server.go:52] waiting for apiserver process to appear ...
	I0415 23:39:47.381386   15850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0415 23:39:47.554382   15850 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0415 23:39:48.193967   15850 addons.go:234] Setting addon gcp-auth=true in "addons-045739"
	I0415 23:39:48.194039   15850 host.go:66] Checking if "addons-045739" exists ...
	I0415 23:39:48.194388   15850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:39:48.194425   15850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:39:48.210847   15850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38461
	I0415 23:39:48.211432   15850 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:39:48.212052   15850 main.go:141] libmachine: Using API Version  1
	I0415 23:39:48.212082   15850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:39:48.212458   15850 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:39:48.213027   15850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:39:48.213054   15850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:39:48.231277   15850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44189
	I0415 23:39:48.231831   15850 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:39:48.232399   15850 main.go:141] libmachine: Using API Version  1
	I0415 23:39:48.232433   15850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:39:48.232799   15850 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:39:48.233033   15850 main.go:141] libmachine: (addons-045739) Calling .GetState
	I0415 23:39:48.234974   15850 main.go:141] libmachine: (addons-045739) Calling .DriverName
	I0415 23:39:48.235264   15850 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0415 23:39:48.235288   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHHostname
	I0415 23:39:48.238634   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:48.239073   15850 main.go:141] libmachine: (addons-045739) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f7:76:ed", ip: ""} in network mk-addons-045739: {Iface:virbr1 ExpiryTime:2024-04-16 00:38:55 +0000 UTC Type:0 Mac:52:54:00:f7:76:ed Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:addons-045739 Clientid:01:52:54:00:f7:76:ed}
	I0415 23:39:48.239123   15850 main.go:141] libmachine: (addons-045739) DBG | domain addons-045739 has defined IP address 192.168.39.182 and MAC address 52:54:00:f7:76:ed in network mk-addons-045739
	I0415 23:39:48.239273   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHPort
	I0415 23:39:48.239521   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHKeyPath
	I0415 23:39:48.239708   15850 main.go:141] libmachine: (addons-045739) Calling .GetSSHUsername
	I0415 23:39:48.239856   15850 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/addons-045739/id_rsa Username:docker}
	I0415 23:39:51.705925   15850 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (11.499348574s)
	I0415 23:39:51.705993   15850 main.go:141] libmachine: Making call to close driver server
	I0415 23:39:51.706006   15850 main.go:141] libmachine: (addons-045739) Calling .Close
	I0415 23:39:51.706052   15850 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (11.455964476s)
	I0415 23:39:51.706105   15850 main.go:141] libmachine: Making call to close driver server
	I0415 23:39:51.706118   15850 main.go:141] libmachine: (addons-045739) Calling .Close
	I0415 23:39:51.706153   15850 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (11.443426465s)
	I0415 23:39:51.706189   15850 main.go:141] libmachine: Making call to close driver server
	I0415 23:39:51.706201   15850 main.go:141] libmachine: (addons-045739) Calling .Close
	I0415 23:39:51.706252   15850 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (11.350109335s)
	I0415 23:39:51.706296   15850 main.go:141] libmachine: Making call to close driver server
	I0415 23:39:51.706306   15850 main.go:141] libmachine: (addons-045739) Calling .Close
	I0415 23:39:51.706326   15850 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (11.180572146s)
	I0415 23:39:51.706296   15850 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (11.291077844s)
	I0415 23:39:51.706363   15850 main.go:141] libmachine: Making call to close driver server
	I0415 23:39:51.706365   15850 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0415 23:39:51.706372   15850 main.go:141] libmachine: (addons-045739) Calling .Close
	I0415 23:39:51.706465   15850 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.815129597s)
	I0415 23:39:51.706483   15850 main.go:141] libmachine: Making call to close driver server
	I0415 23:39:51.706495   15850 main.go:141] libmachine: (addons-045739) Calling .Close
	I0415 23:39:51.706659   15850 main.go:141] libmachine: (addons-045739) DBG | Closing plugin on server side
	I0415 23:39:51.706752   15850 main.go:141] libmachine: Successfully made call to close driver server
	I0415 23:39:51.706764   15850 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 23:39:51.706774   15850 main.go:141] libmachine: Making call to close driver server
	I0415 23:39:51.706784   15850 main.go:141] libmachine: (addons-045739) Calling .Close
	I0415 23:39:51.706801   15850 main.go:141] libmachine: Successfully made call to close driver server
	I0415 23:39:51.706816   15850 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 23:39:51.706827   15850 main.go:141] libmachine: Making call to close driver server
	I0415 23:39:51.706837   15850 main.go:141] libmachine: (addons-045739) Calling .Close
	I0415 23:39:51.706859   15850 main.go:141] libmachine: (addons-045739) DBG | Closing plugin on server side
	I0415 23:39:51.706891   15850 main.go:141] libmachine: (addons-045739) DBG | Closing plugin on server side
	I0415 23:39:51.706910   15850 main.go:141] libmachine: Successfully made call to close driver server
	I0415 23:39:51.706914   15850 main.go:141] libmachine: (addons-045739) DBG | Closing plugin on server side
	I0415 23:39:51.706918   15850 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 23:39:51.706934   15850 main.go:141] libmachine: Making call to close driver server
	I0415 23:39:51.706936   15850 main.go:141] libmachine: Successfully made call to close driver server
	I0415 23:39:51.706941   15850 main.go:141] libmachine: (addons-045739) Calling .Close
	I0415 23:39:51.706945   15850 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 23:39:51.706955   15850 main.go:141] libmachine: Making call to close driver server
	I0415 23:39:51.706962   15850 main.go:141] libmachine: (addons-045739) Calling .Close
	I0415 23:39:51.707004   15850 main.go:141] libmachine: Successfully made call to close driver server
	I0415 23:39:51.707020   15850 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 23:39:51.707028   15850 main.go:141] libmachine: Making call to close driver server
	I0415 23:39:51.707035   15850 main.go:141] libmachine: (addons-045739) Calling .Close
	I0415 23:39:51.707080   15850 main.go:141] libmachine: (addons-045739) DBG | Closing plugin on server side
	I0415 23:39:51.707105   15850 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.53198503s)
	I0415 23:39:51.707135   15850 main.go:141] libmachine: Making call to close driver server
	I0415 23:39:51.707142   15850 main.go:141] libmachine: Successfully made call to close driver server
	I0415 23:39:51.707149   15850 main.go:141] libmachine: (addons-045739) Calling .Close
	I0415 23:39:51.707155   15850 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 23:39:51.707164   15850 main.go:141] libmachine: Making call to close driver server
	I0415 23:39:51.707170   15850 main.go:141] libmachine: (addons-045739) Calling .Close
	I0415 23:39:51.707240   15850 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (9.972210992s)
	I0415 23:39:51.707258   15850 main.go:141] libmachine: Making call to close driver server
	I0415 23:39:51.707268   15850 main.go:141] libmachine: (addons-045739) Calling .Close
	I0415 23:39:51.707407   15850 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (9.830597269s)
	W0415 23:39:51.707436   15850 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0415 23:39:51.707475   15850 retry.go:31] will retry after 129.616559ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
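The "ensure CRDs are installed first" failure above is the usual CRD establishment race: the VolumeSnapshotClass object is applied in the same kubectl invocation as the CRDs that define it, and the API server has not yet established the new types when the class is validated, so minikube schedules a retry (the forced re-apply appears further below). A minimal manual sketch of how the ordering could be enforced, using the same addon manifests shown in this log; an illustration only, not what minikube itself runs:

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	  -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	# Wait until the API server reports the new CRDs as Established before creating objects of those kinds.
	kubectl wait --for=condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io \
	  crd/volumesnapshotcontents.snapshot.storage.k8s.io \
	  crd/volumesnapshots.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml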
	I0415 23:39:51.707557   15850 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (9.254300976s)
	I0415 23:39:51.707578   15850 main.go:141] libmachine: Making call to close driver server
	I0415 23:39:51.707587   15850 main.go:141] libmachine: (addons-045739) Calling .Close
	I0415 23:39:51.707677   15850 main.go:141] libmachine: (addons-045739) DBG | Closing plugin on server side
	I0415 23:39:51.707685   15850 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (8.894525698s)
	I0415 23:39:51.707704   15850 main.go:141] libmachine: Making call to close driver server
	I0415 23:39:51.707709   15850 main.go:141] libmachine: Successfully made call to close driver server
	I0415 23:39:51.707713   15850 main.go:141] libmachine: (addons-045739) Calling .Close
	I0415 23:39:51.707717   15850 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 23:39:51.707726   15850 main.go:141] libmachine: Making call to close driver server
	I0415 23:39:51.707733   15850 main.go:141] libmachine: (addons-045739) Calling .Close
	I0415 23:39:51.707765   15850 main.go:141] libmachine: (addons-045739) DBG | Closing plugin on server side
	I0415 23:39:51.707788   15850 main.go:141] libmachine: Successfully made call to close driver server
	I0415 23:39:51.707794   15850 main.go:141] libmachine: (addons-045739) DBG | Closing plugin on server side
	I0415 23:39:51.707795   15850 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 23:39:51.707809   15850 addons.go:470] Verifying addon ingress=true in "addons-045739"
	I0415 23:39:51.715723   15850 out.go:177] * Verifying ingress addon...
	I0415 23:39:51.708511   15850 main.go:141] libmachine: (addons-045739) DBG | Closing plugin on server side
	I0415 23:39:51.708540   15850 main.go:141] libmachine: Successfully made call to close driver server
	I0415 23:39:51.708560   15850 main.go:141] libmachine: (addons-045739) DBG | Closing plugin on server side
	I0415 23:39:51.708580   15850 main.go:141] libmachine: Successfully made call to close driver server
	I0415 23:39:51.708598   15850 main.go:141] libmachine: (addons-045739) DBG | Closing plugin on server side
	I0415 23:39:51.708604   15850 main.go:141] libmachine: Successfully made call to close driver server
	I0415 23:39:51.708615   15850 main.go:141] libmachine: (addons-045739) DBG | Closing plugin on server side
	I0415 23:39:51.708625   15850 main.go:141] libmachine: Successfully made call to close driver server
	I0415 23:39:51.708629   15850 main.go:141] libmachine: (addons-045739) DBG | Closing plugin on server side
	I0415 23:39:51.708648   15850 main.go:141] libmachine: Successfully made call to close driver server
	I0415 23:39:51.708650   15850 main.go:141] libmachine: Successfully made call to close driver server
	I0415 23:39:51.708923   15850 main.go:141] libmachine: (addons-045739) DBG | Closing plugin on server side
	I0415 23:39:51.708953   15850 main.go:141] libmachine: Successfully made call to close driver server
	I0415 23:39:51.709094   15850 main.go:141] libmachine: Successfully made call to close driver server
	I0415 23:39:51.709119   15850 main.go:141] libmachine: (addons-045739) DBG | Closing plugin on server side
	I0415 23:39:51.710605   15850 main.go:141] libmachine: (addons-045739) DBG | Closing plugin on server side
	I0415 23:39:51.710651   15850 main.go:141] libmachine: Successfully made call to close driver server
	I0415 23:39:51.717978   15850 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 23:39:51.717993   15850 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 23:39:51.718001   15850 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 23:39:51.718012   15850 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 23:39:51.718026   15850 main.go:141] libmachine: Making call to close driver server
	I0415 23:39:51.718000   15850 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 23:39:51.718042   15850 main.go:141] libmachine: (addons-045739) Calling .Close
	I0415 23:39:51.718046   15850 addons.go:470] Verifying addon registry=true in "addons-045739"
	I0415 23:39:51.718029   15850 main.go:141] libmachine: Making call to close driver server
	I0415 23:39:51.718128   15850 main.go:141] libmachine: (addons-045739) Calling .Close
	I0415 23:39:51.718170   15850 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 23:39:51.720389   15850 out.go:177] * Verifying registry addon...
	I0415 23:39:51.718178   15850 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 23:39:51.718190   15850 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 23:39:51.717978   15850 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 23:39:51.718526   15850 main.go:141] libmachine: Successfully made call to close driver server
	I0415 23:39:51.718558   15850 main.go:141] libmachine: (addons-045739) DBG | Closing plugin on server side
	I0415 23:39:51.718577   15850 main.go:141] libmachine: (addons-045739) DBG | Closing plugin on server side
	I0415 23:39:51.718624   15850 main.go:141] libmachine: Successfully made call to close driver server
	I0415 23:39:51.719754   15850 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0415 23:39:51.724464   15850 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-045739 service yakd-dashboard -n yakd-dashboard
	
	I0415 23:39:51.722581   15850 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 23:39:51.722622   15850 main.go:141] libmachine: Making call to close driver server
	I0415 23:39:51.722739   15850 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 23:39:51.723783   15850 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0415 23:39:51.726312   15850 main.go:141] libmachine: (addons-045739) Calling .Close
	I0415 23:39:51.726850   15850 main.go:141] libmachine: (addons-045739) DBG | Closing plugin on server side
	I0415 23:39:51.726927   15850 main.go:141] libmachine: Successfully made call to close driver server
	I0415 23:39:51.726948   15850 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 23:39:51.726960   15850 addons.go:470] Verifying addon metrics-server=true in "addons-045739"
	I0415 23:39:51.779285   15850 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0415 23:39:51.779313   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:51.787471   15850 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0415 23:39:51.787496   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:51.813861   15850 main.go:141] libmachine: Making call to close driver server
	I0415 23:39:51.813884   15850 main.go:141] libmachine: (addons-045739) Calling .Close
	I0415 23:39:51.814288   15850 main.go:141] libmachine: (addons-045739) DBG | Closing plugin on server side
	I0415 23:39:51.814339   15850 main.go:141] libmachine: Successfully made call to close driver server
	I0415 23:39:51.814348   15850 main.go:141] libmachine: Making call to close connection to plugin binary
	W0415 23:39:51.814467   15850 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0415 23:39:51.838341   15850 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0415 23:39:51.854754   15850 main.go:141] libmachine: Making call to close driver server
	I0415 23:39:51.854782   15850 main.go:141] libmachine: (addons-045739) Calling .Close
	I0415 23:39:51.855140   15850 main.go:141] libmachine: (addons-045739) DBG | Closing plugin on server side
	I0415 23:39:51.855203   15850 main.go:141] libmachine: Successfully made call to close driver server
	I0415 23:39:51.855213   15850 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 23:39:52.212088   15850 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-045739" context rescaled to 1 replicas
	I0415 23:39:52.228017   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:52.232480   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:52.760503   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:52.765918   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:53.267870   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:53.298416   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:53.745628   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:53.749389   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:54.159066   15850 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (9.923321906s)
	I0415 23:39:54.159107   15850 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (6.777698129s)
	I0415 23:39:54.159127   15850 main.go:141] libmachine: Making call to close driver server
	I0415 23:39:54.159154   15850 main.go:141] libmachine: (addons-045739) Calling .Close
	I0415 23:39:54.159163   15850 api_server.go:72] duration metric: took 14.789166697s to wait for apiserver process to appear ...
	I0415 23:39:54.159176   15850 api_server.go:88] waiting for apiserver healthz status ...
	I0415 23:39:54.159185   15850 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.923896573s)
	I0415 23:39:54.159203   15850 api_server.go:253] Checking apiserver healthz at https://192.168.39.182:8443/healthz ...
	I0415 23:39:54.161728   15850 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0415 23:39:54.159526   15850 main.go:141] libmachine: Successfully made call to close driver server
	I0415 23:39:54.159578   15850 main.go:141] libmachine: (addons-045739) DBG | Closing plugin on server side
	I0415 23:39:54.163815   15850 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 23:39:54.163841   15850 main.go:141] libmachine: Making call to close driver server
	I0415 23:39:54.165565   15850 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0415 23:39:54.163853   15850 main.go:141] libmachine: (addons-045739) Calling .Close
	I0415 23:39:54.167637   15850 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0415 23:39:54.166155   15850 main.go:141] libmachine: Successfully made call to close driver server
	I0415 23:39:54.166194   15850 main.go:141] libmachine: (addons-045739) DBG | Closing plugin on server side
	I0415 23:39:54.167686   15850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0415 23:39:54.167718   15850 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 23:39:54.167748   15850 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-045739"
	I0415 23:39:54.170017   15850 out.go:177] * Verifying csi-hostpath-driver addon...
	I0415 23:39:54.173344   15850 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0415 23:39:54.191343   15850 api_server.go:279] https://192.168.39.182:8443/healthz returned 200:
	ok
	I0415 23:39:54.200792   15850 api_server.go:141] control plane version: v1.29.3
	I0415 23:39:54.200840   15850 api_server.go:131] duration metric: took 41.650197ms to wait for apiserver health ...
	I0415 23:39:54.200853   15850 system_pods.go:43] waiting for kube-system pods to appear ...
	I0415 23:39:54.226655   15850 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0415 23:39:54.226691   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:54.283013   15850 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0415 23:39:54.283044   15850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0415 23:39:54.311283   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:54.311532   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:54.317885   15850 system_pods.go:59] 19 kube-system pods found
	I0415 23:39:54.317956   15850 system_pods.go:61] "coredns-76f75df574-4gr4t" [f77fd4f4-3fd3-4b4b-9faa-52eae1857106] Running
	I0415 23:39:54.317968   15850 system_pods.go:61] "coredns-76f75df574-kmkgw" [10f6f73d-0b09-4f3a-a8e2-d3493fd6b09c] Running
	I0415 23:39:54.317981   15850 system_pods.go:61] "csi-hostpath-attacher-0" [89de6ab0-9b35-4208-8f14-e815934923be] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0415 23:39:54.317996   15850 system_pods.go:61] "csi-hostpath-resizer-0" [2210d05a-4960-4e6c-87e5-535a38e8cce7] Pending
	I0415 23:39:54.318004   15850 system_pods.go:61] "csi-hostpathplugin-f5qxq" [da2f75e3-b62a-4b19-b85f-969bb4c22f78] Pending
	I0415 23:39:54.318010   15850 system_pods.go:61] "etcd-addons-045739" [43caf191-59a2-4f10-a7bf-02c2cfa39494] Running
	I0415 23:39:54.318021   15850 system_pods.go:61] "kube-apiserver-addons-045739" [9b7e4e8a-b33c-471a-bbd8-6342aab99f92] Running
	I0415 23:39:54.318034   15850 system_pods.go:61] "kube-controller-manager-addons-045739" [a31dea14-f7c6-4967-86d8-45b1aa52c174] Running
	I0415 23:39:54.318050   15850 system_pods.go:61] "kube-ingress-dns-minikube" [926cdd76-259c-482c-ae40-0b70c040a88d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0415 23:39:54.318063   15850 system_pods.go:61] "kube-proxy-dvj6w" [72942788-0171-4c87-ae0e-f4186897c5ed] Running
	I0415 23:39:54.318074   15850 system_pods.go:61] "kube-scheduler-addons-045739" [ce4ff786-ba47-41b6-8175-5ad83be382d4] Running
	I0415 23:39:54.318087   15850 system_pods.go:61] "metrics-server-75d6c48ddd-2sm8z" [3d9ee9ce-539d-4a71-bcb5-c51e28fbd314] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0415 23:39:54.318104   15850 system_pods.go:61] "nvidia-device-plugin-daemonset-742pq" [bfe2588d-264a-493b-a8ec-b82e9c1a873d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0415 23:39:54.318116   15850 system_pods.go:61] "registry-gjtmh" [c0b8d4f6-9fd8-4bc0-b4b8-bc1142309612] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0415 23:39:54.318127   15850 system_pods.go:61] "registry-proxy-vlqgc" [10079756-cd80-4336-8ed3-d4418b38b5de] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0415 23:39:54.318142   15850 system_pods.go:61] "snapshot-controller-58dbcc7b99-f7xxf" [92dfe4ee-dc3a-467d-83c1-852dddabdf53] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0415 23:39:54.318160   15850 system_pods.go:61] "snapshot-controller-58dbcc7b99-g5bf5" [6e7bbe3e-8e52-4b5a-aa90-caca8356db75] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0415 23:39:54.318171   15850 system_pods.go:61] "storage-provisioner" [added6e4-2d08-4e57-848b-f1480badde64] Running
	I0415 23:39:54.318184   15850 system_pods.go:61] "tiller-deploy-7b677967b9-g5gcg" [cd362247-164a-4db8-b30a-9c2113f148f2] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0415 23:39:54.318199   15850 system_pods.go:74] duration metric: took 117.336144ms to wait for pod list to return data ...
	I0415 23:39:54.318216   15850 default_sa.go:34] waiting for default service account to be created ...
	I0415 23:39:54.322671   15850 default_sa.go:45] found service account: "default"
	I0415 23:39:54.322711   15850 default_sa.go:55] duration metric: took 4.485987ms for default service account to be created ...
	I0415 23:39:54.322726   15850 system_pods.go:116] waiting for k8s-apps to be running ...
	I0415 23:39:54.359323   15850 system_pods.go:86] 19 kube-system pods found
	I0415 23:39:54.359377   15850 system_pods.go:89] "coredns-76f75df574-4gr4t" [f77fd4f4-3fd3-4b4b-9faa-52eae1857106] Running
	I0415 23:39:54.359388   15850 system_pods.go:89] "coredns-76f75df574-kmkgw" [10f6f73d-0b09-4f3a-a8e2-d3493fd6b09c] Running
	I0415 23:39:54.359402   15850 system_pods.go:89] "csi-hostpath-attacher-0" [89de6ab0-9b35-4208-8f14-e815934923be] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0415 23:39:54.359414   15850 system_pods.go:89] "csi-hostpath-resizer-0" [2210d05a-4960-4e6c-87e5-535a38e8cce7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0415 23:39:54.359440   15850 system_pods.go:89] "csi-hostpathplugin-f5qxq" [da2f75e3-b62a-4b19-b85f-969bb4c22f78] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0415 23:39:54.359454   15850 system_pods.go:89] "etcd-addons-045739" [43caf191-59a2-4f10-a7bf-02c2cfa39494] Running
	I0415 23:39:54.359462   15850 system_pods.go:89] "kube-apiserver-addons-045739" [9b7e4e8a-b33c-471a-bbd8-6342aab99f92] Running
	I0415 23:39:54.359479   15850 system_pods.go:89] "kube-controller-manager-addons-045739" [a31dea14-f7c6-4967-86d8-45b1aa52c174] Running
	I0415 23:39:54.359490   15850 system_pods.go:89] "kube-ingress-dns-minikube" [926cdd76-259c-482c-ae40-0b70c040a88d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0415 23:39:54.359502   15850 system_pods.go:89] "kube-proxy-dvj6w" [72942788-0171-4c87-ae0e-f4186897c5ed] Running
	I0415 23:39:54.359510   15850 system_pods.go:89] "kube-scheduler-addons-045739" [ce4ff786-ba47-41b6-8175-5ad83be382d4] Running
	I0415 23:39:54.359522   15850 system_pods.go:89] "metrics-server-75d6c48ddd-2sm8z" [3d9ee9ce-539d-4a71-bcb5-c51e28fbd314] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0415 23:39:54.359539   15850 system_pods.go:89] "nvidia-device-plugin-daemonset-742pq" [bfe2588d-264a-493b-a8ec-b82e9c1a873d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0415 23:39:54.359555   15850 system_pods.go:89] "registry-gjtmh" [c0b8d4f6-9fd8-4bc0-b4b8-bc1142309612] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0415 23:39:54.359572   15850 system_pods.go:89] "registry-proxy-vlqgc" [10079756-cd80-4336-8ed3-d4418b38b5de] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0415 23:39:54.359587   15850 system_pods.go:89] "snapshot-controller-58dbcc7b99-f7xxf" [92dfe4ee-dc3a-467d-83c1-852dddabdf53] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0415 23:39:54.359604   15850 system_pods.go:89] "snapshot-controller-58dbcc7b99-g5bf5" [6e7bbe3e-8e52-4b5a-aa90-caca8356db75] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0415 23:39:54.359614   15850 system_pods.go:89] "storage-provisioner" [added6e4-2d08-4e57-848b-f1480badde64] Running
	I0415 23:39:54.359629   15850 system_pods.go:89] "tiller-deploy-7b677967b9-g5gcg" [cd362247-164a-4db8-b30a-9c2113f148f2] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0415 23:39:54.359644   15850 system_pods.go:126] duration metric: took 36.908802ms to wait for k8s-apps to be running ...
	I0415 23:39:54.359659   15850 system_svc.go:44] waiting for kubelet service to be running ....
	I0415 23:39:54.359738   15850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 23:39:54.487104   15850 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0415 23:39:54.487135   15850 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0415 23:39:54.585836   15850 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0415 23:39:54.647339   15850 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.808933238s)
	I0415 23:39:54.647338   15850 system_svc.go:56] duration metric: took 287.666308ms WaitForService to wait for kubelet
	I0415 23:39:54.647450   15850 kubeadm.go:576] duration metric: took 15.277442686s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 23:39:54.647522   15850 node_conditions.go:102] verifying NodePressure condition ...
	I0415 23:39:54.647413   15850 main.go:141] libmachine: Making call to close driver server
	I0415 23:39:54.647610   15850 main.go:141] libmachine: (addons-045739) Calling .Close
	I0415 23:39:54.648032   15850 main.go:141] libmachine: (addons-045739) DBG | Closing plugin on server side
	I0415 23:39:54.648104   15850 main.go:141] libmachine: Successfully made call to close driver server
	I0415 23:39:54.648127   15850 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 23:39:54.648144   15850 main.go:141] libmachine: Making call to close driver server
	I0415 23:39:54.648157   15850 main.go:141] libmachine: (addons-045739) Calling .Close
	I0415 23:39:54.648585   15850 main.go:141] libmachine: Successfully made call to close driver server
	I0415 23:39:54.648608   15850 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 23:39:54.652545   15850 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0415 23:39:54.652576   15850 node_conditions.go:123] node cpu capacity is 2
	I0415 23:39:54.652591   15850 node_conditions.go:105] duration metric: took 5.060216ms to run NodePressure ...
	I0415 23:39:54.652604   15850 start.go:240] waiting for startup goroutines ...
	I0415 23:39:54.686336   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:54.729063   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:54.745288   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:55.183628   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:55.229922   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:55.235195   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:55.682241   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:55.738532   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:55.769355   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:56.190691   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:56.232697   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:56.236964   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:56.564512   15850 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.978624173s)
	I0415 23:39:56.564622   15850 main.go:141] libmachine: Making call to close driver server
	I0415 23:39:56.564638   15850 main.go:141] libmachine: (addons-045739) Calling .Close
	I0415 23:39:56.565021   15850 main.go:141] libmachine: Successfully made call to close driver server
	I0415 23:39:56.565057   15850 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 23:39:56.565059   15850 main.go:141] libmachine: (addons-045739) DBG | Closing plugin on server side
	I0415 23:39:56.565079   15850 main.go:141] libmachine: Making call to close driver server
	I0415 23:39:56.565097   15850 main.go:141] libmachine: (addons-045739) Calling .Close
	I0415 23:39:56.565499   15850 main.go:141] libmachine: Successfully made call to close driver server
	I0415 23:39:56.565534   15850 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 23:39:56.567184   15850 addons.go:470] Verifying addon gcp-auth=true in "addons-045739"
	I0415 23:39:56.569644   15850 out.go:177] * Verifying gcp-auth addon...
	I0415 23:39:56.572330   15850 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0415 23:39:56.610969   15850 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0415 23:39:56.611005   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:56.696044   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:56.729713   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:56.744094   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:57.077247   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:57.180585   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:57.312845   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:57.313999   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:57.577087   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:57.683300   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:57.728428   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:57.736486   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:58.077246   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:58.182497   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:58.227140   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:58.230995   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:58.576816   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:58.679845   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:58.728647   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:58.733384   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:59.077224   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:59.180049   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:59.227724   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:59.231335   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:39:59.577756   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:39:59.679357   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:39:59.728669   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:39:59.734138   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:00.076904   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:00.180725   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:00.228073   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:00.232111   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:00.578271   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:00.683405   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:00.728609   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:00.733462   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:01.299216   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:01.300088   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:01.300318   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:01.300526   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:01.577790   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:01.684289   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:01.728128   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:01.731553   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:02.076451   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:02.180559   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:02.227693   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:02.231247   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:02.577436   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:02.681516   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:02.731250   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:02.733669   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:03.076330   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:03.182387   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:03.228282   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:03.230883   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:03.576436   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:03.679628   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:03.727591   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:03.730419   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:04.077176   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:04.184268   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:04.228166   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:04.231548   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:04.577842   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:04.679922   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:04.729589   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:04.731621   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:05.075987   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:05.180480   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:05.226677   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:05.231168   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:05.576558   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:05.680617   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:05.727232   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:05.730280   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:06.077296   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:06.179972   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:06.227833   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:06.230647   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:06.576715   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:06.678777   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:06.727356   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:06.730712   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:07.076929   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:07.181620   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:07.228455   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:07.231796   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:07.578346   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:07.681703   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:07.729300   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:07.733311   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:08.079055   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:08.183458   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:08.227665   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:08.231042   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:08.577036   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:08.680572   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:08.727916   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:08.731593   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:09.076588   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:09.182077   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:09.229676   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:09.236188   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:09.578208   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:09.680773   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:09.728903   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:09.737902   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:10.077392   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:10.180225   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:10.228531   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:10.231784   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:10.576590   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:10.680641   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:10.728167   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:10.736146   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:11.077438   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:11.180897   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:11.230504   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:11.239467   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:11.579877   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:11.680998   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:11.729037   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:11.737256   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:12.078159   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:12.180896   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:12.228220   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:12.231865   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:12.576813   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:12.679822   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:12.728351   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:12.731702   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:13.076928   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:13.181296   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:13.229579   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:13.238922   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:13.580739   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:13.681620   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:13.730602   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:13.733687   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:14.077014   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:14.181437   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:14.228562   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:14.233650   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:14.578035   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:14.681455   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:14.728517   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:14.735889   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:15.079286   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:15.180299   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:15.228700   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:15.232121   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:15.578871   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:15.680762   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:15.728335   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:15.737045   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:16.080703   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:16.181607   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:16.228660   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:16.231389   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:16.577146   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:16.680015   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:16.730893   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:16.732147   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:17.077635   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:17.180710   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:17.228305   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:17.231664   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:17.576947   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:17.680296   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:17.728551   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:17.731613   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:18.076373   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:18.180436   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:18.229115   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:18.233191   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:18.579483   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:18.869285   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:18.871450   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:18.873050   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:19.076613   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:19.187033   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:19.229540   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:19.236190   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:19.577791   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:19.680231   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:19.728307   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:19.732214   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:20.077690   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:20.182311   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:20.228838   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:20.233328   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:20.577790   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:20.680485   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:20.736680   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:20.736904   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:21.084255   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:21.180728   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:21.228254   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:21.234448   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:21.578277   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:21.686721   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:21.730579   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:21.737517   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:22.077697   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:22.180551   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:22.234424   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:22.236186   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:22.577659   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:22.683282   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:22.739545   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:22.744414   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:23.077479   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:23.181241   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:23.228452   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:23.231527   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:23.576743   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:23.680801   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:23.728723   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:23.734218   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:24.078109   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:24.183024   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:24.228782   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:24.234347   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:24.577318   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:24.680216   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:24.728241   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:24.731603   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:25.396651   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:25.398220   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:25.400028   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:25.405581   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:25.577630   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:25.681410   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:25.728501   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:25.736089   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:26.077278   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:26.183788   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:26.228037   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:26.231097   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:26.577713   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:26.681086   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:26.732443   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:26.740224   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:27.077430   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:27.181461   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:27.232056   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:27.235128   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:27.577200   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:27.680278   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:27.731035   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:27.747014   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:28.078170   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:28.181244   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:28.230435   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:28.232261   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:28.577432   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:28.681187   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:28.728234   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:28.731238   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:29.078239   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:29.179699   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:29.227447   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:29.235824   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:29.577206   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:29.680518   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:29.727429   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:29.746740   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:30.076381   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:30.197417   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:30.254173   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:30.254392   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:30.577465   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:30.680481   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:30.731798   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:30.732769   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:31.076639   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:31.182841   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:31.228751   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:31.231553   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:31.582630   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:31.682648   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:31.749225   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:31.760793   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:32.081193   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:32.180736   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:32.228137   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:32.232166   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:32.578319   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:32.680208   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:32.729886   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:32.736004   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:33.076514   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:33.181838   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:33.228183   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:33.232642   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:33.751190   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:33.751498   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:33.751578   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:33.752559   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:34.076482   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:34.183218   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:34.229918   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:34.233999   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:34.577401   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:34.682267   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:34.729137   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:34.732213   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:35.077029   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:35.180492   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:35.229328   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:35.233008   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:35.576603   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:35.685798   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:35.728349   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:35.731835   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:36.076825   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:36.181788   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:36.227936   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:36.230835   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:36.576493   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:36.680423   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:36.727983   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:36.732029   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:37.079925   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:37.181831   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:37.229474   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:37.231682   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:37.576711   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:37.681124   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:37.727960   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:37.732138   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:38.077451   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:38.180161   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:38.228385   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:38.231665   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:38.577502   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:38.680425   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:38.728698   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:38.733229   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:39.077680   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:39.182825   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:39.229637   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:39.232210   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:39.577353   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:39.684077   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:39.728778   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:39.732937   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:40.078206   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:40.183944   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:40.228078   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:40.233838   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:40.577351   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:40.681464   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:40.728265   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:40.731569   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:41.088199   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:41.188822   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:41.232866   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:41.236454   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:41.577143   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:41.680967   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:41.728439   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:41.731831   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:42.344125   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:42.346103   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:42.347755   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:42.349957   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:42.577981   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:42.680818   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:42.727887   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:42.732403   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:43.077034   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:43.180021   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:43.228347   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:43.234734   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:43.577886   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:43.680688   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:43.728548   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:43.734423   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:44.077666   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:44.179699   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:44.228978   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:44.233838   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:44.579090   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:44.682221   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:44.731404   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:44.736463   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:45.076830   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:45.180074   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:45.228144   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:45.238559   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:45.576875   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:45.684549   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:45.729005   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:45.733235   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:46.077233   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:46.186626   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:46.228320   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:46.236141   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:46.577921   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:46.680251   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:46.728708   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:46.735593   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:47.077508   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:47.181411   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:47.228408   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:47.232294   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:47.577419   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:47.680136   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:47.731657   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:47.733773   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0415 23:40:48.077047   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:48.180668   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:48.229042   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:48.239049   15850 kapi.go:107] duration metric: took 56.515259452s to wait for kubernetes.io/minikube-addons=registry ...
	I0415 23:40:48.576850   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:48.680598   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:48.728739   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:49.079996   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:49.180891   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:49.228888   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:49.577264   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:49.682763   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:49.733352   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:50.077446   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:50.181005   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:50.229043   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:50.576603   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:50.681273   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:50.728653   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:51.077148   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:51.182208   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:51.227867   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:51.580517   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:51.681311   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:51.727832   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:52.451255   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:52.451689   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:52.451836   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:52.577600   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:52.680498   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:52.730517   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:53.077277   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:53.179983   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:53.228008   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:53.576711   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:53.679450   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:53.744645   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:54.077553   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:54.181129   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:54.227582   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:54.577687   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:54.681017   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:54.731072   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:55.077105   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:55.181008   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:55.228486   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:55.577717   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:55.681862   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:55.727659   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:56.078852   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:56.182113   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:56.228579   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:56.590389   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:56.685226   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:56.728390   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:57.086068   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:57.180460   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:57.228598   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:57.577127   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:57.679993   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:57.729430   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:58.080463   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:58.184056   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:58.227644   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:58.860048   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:58.861121   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:58.863432   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:59.076988   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:59.179455   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:59.241102   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:40:59.577938   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:40:59.680245   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:40:59.728747   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:00.075816   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:00.184342   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:41:00.229368   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:00.576677   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:00.680075   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:41:00.728058   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:01.080545   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:01.181050   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:41:01.227971   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:01.578711   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:01.680468   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:41:01.728738   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:02.077301   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:02.183938   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:41:02.227141   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:02.577154   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:02.688127   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:41:02.729114   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:03.076508   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:03.183599   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:41:03.228249   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:03.577403   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:03.680380   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:41:03.727811   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:04.080279   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:04.180827   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:41:04.228254   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:04.577430   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:04.682713   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:41:04.727882   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:05.078789   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:05.183558   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:41:05.236080   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:05.576445   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:05.680600   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:41:05.728572   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:06.081990   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:06.184458   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:41:06.236484   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:06.577702   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:06.684554   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:41:06.730796   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:07.077705   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:07.184324   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:41:07.228660   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:07.577847   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:07.685074   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:41:07.730022   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:08.081604   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:08.182453   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:41:08.229784   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:08.576996   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:08.679911   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:41:08.728928   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:09.101842   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:09.190276   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:41:09.230502   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:09.576908   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:09.680739   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:41:09.729407   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:10.077541   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:10.179972   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:41:10.227681   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:10.577532   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:10.683547   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:41:10.728266   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:11.077920   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:11.186097   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:41:11.231595   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:11.578680   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:11.679946   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:41:11.728755   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:12.077422   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:12.181833   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:41:12.228157   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:12.581756   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:12.680862   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:41:12.729502   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:13.077247   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:13.180972   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:41:13.228306   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:13.580501   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:13.681986   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:41:13.735849   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:14.077963   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:14.181624   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:41:14.228251   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:14.581249   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:14.680906   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:41:14.728239   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:15.077004   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:15.183825   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:41:15.229255   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:15.578094   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:15.680404   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:41:15.729767   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:16.077525   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:16.189589   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:41:16.228994   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:16.577234   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:16.681235   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0415 23:41:16.737631   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:17.077582   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:17.194211   15850 kapi.go:107] duration metric: took 1m23.020868507s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
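
The kapi.go:96 lines above are minikube repeatedly listing each addon's pods by label selector and logging their phase until they become Ready; the kapi.go:107 lines then record how long that wait took (56.5s for registry, 1m23s for csi-hostpath-driver here). The Go sketch below illustrates that poll-and-log pattern with plain client-go; it is not minikube's actual kapi.WaitForPods code, and the namespace, poll interval, and helper names are assumptions for illustration only.

// waitforpods.go: minimal sketch of the polling pattern visible in the log above
// (assumed details; not minikube's implementation).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodsBySelector polls every 500ms, logging the first non-ready state it
// sees (roughly what the "current state: Pending" lines above correspond to),
// and prints a duration once every matching pod is Running and Ready.
func waitForPodsBySelector(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	start := time.Now()
	err := wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, nil // treat transient API errors as "keep polling"
		}
		if len(pods.Items) == 0 {
			fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
			return false, nil
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning || !podReady(&p) {
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				return false, nil
			}
		}
		return true, nil
	})
	if err == nil {
		fmt.Printf("duration metric: took %s to wait for %s\n", time.Since(start), selector)
	}
	return err
}

// podReady reports whether the pod's PodReady condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path, namespace, and timeout are assumptions for this sketch;
	// minikube resolves its own cluster context and per-addon timeouts.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	if err := waitForPodsBySelector(cs, "kube-system", "kubernetes.io/minikube-addons=csi-hostpath-driver", 6*time.Minute); err != nil {
		panic(err)
	}
}
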
	I0415 23:41:17.235558   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:17.576882   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:17.727910   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:18.077203   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:18.228069   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:18.577640   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:18.728542   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:19.077676   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:19.229356   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:19.577869   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:19.728527   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:20.077741   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:20.229313   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:20.580135   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:20.729203   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:21.079050   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:21.230250   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:21.577429   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:21.729264   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:22.076819   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:22.228413   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:22.578190   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:22.728737   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:23.078754   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:23.231519   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:23.577168   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:23.728933   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:24.077386   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:24.228266   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:24.577497   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:24.730313   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:25.077073   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:25.228703   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:25.577922   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:25.730700   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:26.078046   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:26.228595   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:26.579462   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:26.729040   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:27.077126   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:27.228447   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:27.577970   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:27.727881   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:28.077268   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:28.228860   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:28.577277   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:28.728991   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:29.077907   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:29.228093   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:29.579569   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:29.729124   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:30.077521   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:30.228970   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:30.576575   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:30.728552   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:31.081821   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:31.229737   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:31.577753   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:31.729061   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:32.078358   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:32.228516   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:32.578153   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:32.730612   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:33.079011   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:33.228360   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:33.577987   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:33.729272   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:34.078314   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:34.229047   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:34.578700   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:34.729212   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:35.078558   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:35.229486   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:35.577999   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:35.729206   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:36.077700   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:36.229631   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:36.577894   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:36.728204   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:37.076938   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:37.228347   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:37.577815   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:37.730776   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:38.077949   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:38.230042   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:38.578285   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:38.729347   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:39.083398   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:39.228989   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:39.577991   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:39.729023   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:40.076700   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:40.228805   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:40.577144   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:40.728650   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:41.077466   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:41.228997   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:41.577229   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:41.728500   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:42.078789   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:42.227914   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:42.577508   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:42.728449   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:43.077546   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:43.229035   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:43.576899   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:43.728715   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:44.078011   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:44.229785   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:44.578198   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:44.729828   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:45.077998   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:45.228364   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:45.577509   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:45.730143   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:46.076665   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:46.228946   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:46.577895   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:46.728044   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:47.077475   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:47.230002   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:47.577405   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:47.728710   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:48.077083   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:48.227906   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:48.576736   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:48.729715   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:49.079433   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:49.228976   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:49.577521   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:49.729300   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:50.078465   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:50.228997   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:50.577851   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:50.729501   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:51.078863   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:51.228056   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:51.577173   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:51.728530   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:52.082674   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:52.229118   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:52.579080   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:52.729022   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:53.077316   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:53.229047   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:53.577295   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:53.730058   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:54.076859   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:54.228602   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:54.577381   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:54.728846   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:55.078512   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:55.229153   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:55.577500   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:55.730841   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:56.079172   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:56.228542   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:56.577790   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:56.728283   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:57.077124   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:57.228076   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:57.576622   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:57.729500   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:58.077359   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:58.229579   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:58.577077   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:58.728596   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:59.077937   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:59.228429   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:41:59.577215   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:41:59.729192   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:42:00.076448   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:42:00.228659   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:42:00.577996   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:42:00.728300   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:42:01.076561   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:42:01.229064   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:42:01.577032   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:42:01.728570   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:42:02.077386   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:42:02.228256   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:42:02.577483   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:42:02.729618   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:42:03.077715   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:42:03.230838   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:42:03.578422   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:42:03.728538   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:42:04.077261   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:42:04.227950   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:42:04.578096   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:42:04.728692   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:42:05.078057   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:42:05.228546   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:42:05.577795   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:42:05.737115   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:42:06.076451   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:42:06.228421   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:42:06.576807   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:42:06.733953   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:42:07.077310   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:42:07.228714   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:42:07.577619   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:42:07.729836   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:42:08.077557   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:42:08.229571   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:42:08.577253   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:42:08.729050   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:42:09.080405   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:42:09.230994   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:42:09.578070   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:42:09.729018   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:42:10.078840   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:42:10.228274   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:42:10.576988   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:42:10.731053   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:42:11.636558   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:42:11.647758   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:42:11.659627   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:42:11.728908   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:42:12.079551   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:42:12.229133   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:42:12.576380   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:42:12.728156   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:42:13.077275   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:42:13.228833   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:42:13.578394   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:42:13.728503   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:42:14.078510   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:42:14.229111   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:42:14.581269   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:42:14.733496   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:42:15.077492   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:42:15.228212   15850 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0415 23:42:15.589508   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:42:15.731269   15850 kapi.go:107] duration metric: took 2m24.011515532s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0415 23:42:16.078667   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:42:16.577337   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:42:17.076922   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:42:17.578085   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:42:18.222988   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:42:18.577145   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:42:19.078867   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:42:19.577546   15850 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0415 23:42:20.077126   15850 kapi.go:107] duration metric: took 2m23.504795331s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0415 23:42:20.079643   15850 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-045739 cluster.
	I0415 23:42:20.081611   15850 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0415 23:42:20.083538   15850 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
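	(Editor's note: the two hints above amount to a per-pod opt-out label and an addon re-run. A minimal illustrative sketch follows; the pod name "nginx" and image are placeholders, and the label value "true" follows the gcp-auth addon's documented convention. The webhook checks the label at admission, so it must be present when the pod is created.)

		# create a pod that the gcp-auth webhook should skip (label set at creation time)
		kubectl --context addons-045739 run nginx --image=nginx --labels=gcp-auth-skip-secret=true

		# re-mount credentials into existing pods by recreating them, or rerun the addon with --refresh
		minikube -p addons-045739 addons enable gcp-auth --refresh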
	I0415 23:42:20.085568   15850 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, nvidia-device-plugin, ingress-dns, helm-tiller, yakd, inspektor-gadget, metrics-server, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0415 23:42:20.087301   15850 addons.go:505] duration metric: took 2m40.717291542s for enable addons: enabled=[storage-provisioner cloud-spanner nvidia-device-plugin ingress-dns helm-tiller yakd inspektor-gadget metrics-server default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0415 23:42:20.087359   15850 start.go:245] waiting for cluster config update ...
	I0415 23:42:20.087382   15850 start.go:254] writing updated cluster config ...
	I0415 23:42:20.087698   15850 ssh_runner.go:195] Run: rm -f paused
	I0415 23:42:20.153854   15850 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0415 23:42:20.156338   15850 out.go:177] * Done! kubectl is now configured to use "addons-045739" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 15 23:45:20 addons-045739 crio[687]: time="2024-04-15 23:45:20.956944125Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713224720956910043,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:573324,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5b4fcde0-e0c5-4ba5-84b2-0df2d1e7dc58 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 15 23:45:20 addons-045739 crio[687]: time="2024-04-15 23:45:20.957822805Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=22facc95-6450-4063-9453-c43712f20c9d name=/runtime.v1.RuntimeService/ListContainers
	Apr 15 23:45:20 addons-045739 crio[687]: time="2024-04-15 23:45:20.957883715Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=22facc95-6450-4063-9453-c43712f20c9d name=/runtime.v1.RuntimeService/ListContainers
	Apr 15 23:45:20 addons-045739 crio[687]: time="2024-04-15 23:45:20.958477982Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c62e7ff1bba821ae80c5ea592a128fa7e923484866492f17ab0868c8f6066131,PodSandboxId:a6dc54e57390721824a50065f54530f9b8dc968ce212b9a263de3acda267496f,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1713224713931118163,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-489bl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3eb75e4b-11d4-4971-9180-8313eb46fde9,},Annotations:map[string]string{io.kubernetes.container.hash: e5fbd2e4,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:766d95e31a4c20e887ea0507498a14648dfa9763f4534917566cfff9c606549c,PodSandboxId:2c591744abdf098718ca96598758acc29b7488b36e69795383caf205cb94528b,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1713224579138291912,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5b77dbd7c4-4bkgx,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: b9e0936d-5b67-4653-8948-3379b22d134c,},Annota
tions:map[string]string{io.kubernetes.container.hash: d0e6c9f4,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8835414324bf6cd452ae95072b9528fe297b58c24b27a0db2115453467a363e4,PodSandboxId:44af6e8c65e80108ce3e123ec8829ffaffa67a5ed874da08b95e409545b7d747,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:31bad00311cb5eeb8a6648beadcf67277a175da89989f14727420a80e2e76742,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608,State:CONTAINER_RUNNING,CreatedAt:1713224570976122804,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: 690e4a21-f628-4663-bbca-1e4a84f05ea5,},Annotations:map[string]string{io.kubernetes.container.hash: 4d50081b,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2084061f3722f12eafc2909dba24108fc4379ef501908480487626286793672d,PodSandboxId:7cdad8480aa1876c121c891bf80df9042117895a363ff2ec0e555d7db1061c10,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1713224538751008699,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-7d69788767-88fgb,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 95cc5e0c-643d-4f77-a53f-2ce7f755c4c4,},Annotations:map[string]string{io.kubernetes.container.hash: 5c0d1ab6,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdeeb1e6000111bd230f0202d5505392730393eb915c55892a84369632f4425a,PodSandboxId:568e9760aeab5a37c9dffbd92e70be5cabc98088d40faddde6d2b156b58db199,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTA
INER_EXITED,CreatedAt:1713224459081374856,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-4br4k,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 12575f4a-7d1a-4076-894e-e4bf2900bf1a,},Annotations:map[string]string{io.kubernetes.container.hash: 52f1a113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b5a4c6a6fbf1fa918af56ebd2babf5de397f21e0e1169346b81f4a13ea293df,PodSandboxId:f7c52f8086e8a13811beba47e8aa55e09dd81006cc0622416deff73804703fd4,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c
4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1713224458967120456,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-h9cfs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 55e0d629-9343-4a80-99bc-b7027e1bbb7e,},Annotations:map[string]string{io.kubernetes.container.hash: 521e62f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c5609b754a8b749500724af8e27782a9c309915df27ca8012c41c5fb1461794,PodSandboxId:a4e4f65b4e9bcdae5eda88016d3eade4cbadab343902efa6134888f0d888417d,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259b
fb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1713224454197342784,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-n6g9b,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 1b344559-5993-46da-9d10-2d43f53cb585,},Annotations:map[string]string{io.kubernetes.container.hash: bf316bc2,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe94a642e91e437aaf7c31ee9f2323952498bd76d75882aa18162fe2c91ca85f,PodSandboxId:fd73ded136195d8c651ab2fde0d2aff7a9df32cb9b61f8baff0014459e76a991,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd9
6de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1713224437026811306,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-8pd96,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 50033a63-70d6-44c7-960f-b549de54a73d,},Annotations:map[string]string{io.kubernetes.container.hash: e92150f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87db3f3be812d12b1f2e74f9c47fc25d3518ebf3b5bf89ee4b0648e42063d5d8,PodSandboxId:9e10dafdf0679d61216198706ab601930530927852e129c170d35fcd86e5bb8f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617
342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713224388764687055,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: added6e4-2d08-4e57-848b-f1480badde64,},Annotations:map[string]string{io.kubernetes.container.hash: 2bb63857,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91342c5062078d47b75ae9bb9bbdefb905237eea8ff16729918d98a7461121c6,PodSandboxId:6228916ef8fac9d271de52b84190bea1d8efd1e83dd5dd6ad1bd2953418f0a35,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c0
0797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713224384545495575,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-4gr4t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f77fd4f4-3fd3-4b4b-9faa-52eae1857106,},Annotations:map[string]string{io.kubernetes.container.hash: 451354e0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b64e265e85e310e7d386add578fec4e7a268fd54bd6d900720863aa95596f99,PodSa
ndboxId:290e14f4786b1e5aa2b8cf55073320136d4ce85b5fe03adfdb93dd7a88d0cd47,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713224380868806624,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dvj6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72942788-0171-4c87-ae0e-f4186897c5ed,},Annotations:map[string]string{io.kubernetes.container.hash: 18f0bc1f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:601b388332cd73974ddc55444fcb80f099792b1b4039b097a850f49a112fb76b,PodSandboxId:535f975023e1c65583a409089
5f15c3a0cfe91b01c70c2ded3ef4a6126ac14a1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713224360652834729,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-045739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 864d7f1a5d447e4fd661dd454c4deec8,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:418c46e35bdc006fe24522769b87a68b44e93173f6b483a29ee4503bc6633d4a,PodSandboxId:efacf604034b89ffdfc097aaf95cee0a684be92afd
8c908c20786cdda3ab758a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713224360659963556,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-045739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae18745d11d504180f8c872d581b2cc5,},Annotations:map[string]string{io.kubernetes.container.hash: 9e54e440,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:491d51cd3487250ea0ed0b10e219d71bfc96befee4cece6cc7679d953bfc71e2,PodSandboxId:8119e1dfb2cca61300a5c8b2a5bd130e634de5f0e1260690afcd92007b9dd0ee,Metadata:&ContainerMetad
ata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713224360666117151,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-045739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5996b8cfce3fc71cf23f74f19ce50227,},Annotations:map[string]string{io.kubernetes.container.hash: 79af6f04,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:840cb49a2f995c682c2b633b3a541a6c6ecbd1b7cdc75f91dbb8e7d731af3dad,PodSandboxId:e48ce340dd2efbf2859bfc364a3ec7c5ee1860e9f6cc7ea13ec49fe1c73546fc,Metadata:&ContainerMetadata{Name:kube-con
troller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713224360594356004,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-045739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e142bb86a5406e83f68add3155ed1539,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=22facc95-6450-4063-9453-c43712f20c9d name=/runtime.v1.RuntimeService/ListContainers
	Apr 15 23:45:21 addons-045739 crio[687]: time="2024-04-15 23:45:21.011854680Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2c1d78a6-ec4e-4a7c-ae07-ed635776a4d0 name=/runtime.v1.RuntimeService/Version
	Apr 15 23:45:21 addons-045739 crio[687]: time="2024-04-15 23:45:21.011946483Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2c1d78a6-ec4e-4a7c-ae07-ed635776a4d0 name=/runtime.v1.RuntimeService/Version
	Apr 15 23:45:21 addons-045739 crio[687]: time="2024-04-15 23:45:21.013544322Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a38cb4e8-05c8-4d39-b6b7-21f7e2844d20 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 15 23:45:21 addons-045739 crio[687]: time="2024-04-15 23:45:21.014852508Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713224721014820682,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:573324,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a38cb4e8-05c8-4d39-b6b7-21f7e2844d20 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 15 23:45:21 addons-045739 crio[687]: time="2024-04-15 23:45:21.015599713Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1741d2b7-99a2-404f-89c0-f919e0665f7d name=/runtime.v1.RuntimeService/ListContainers
	Apr 15 23:45:21 addons-045739 crio[687]: time="2024-04-15 23:45:21.015805020Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1741d2b7-99a2-404f-89c0-f919e0665f7d name=/runtime.v1.RuntimeService/ListContainers
	Apr 15 23:45:21 addons-045739 crio[687]: time="2024-04-15 23:45:21.016278734Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c62e7ff1bba821ae80c5ea592a128fa7e923484866492f17ab0868c8f6066131,PodSandboxId:a6dc54e57390721824a50065f54530f9b8dc968ce212b9a263de3acda267496f,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1713224713931118163,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-489bl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3eb75e4b-11d4-4971-9180-8313eb46fde9,},Annotations:map[string]string{io.kubernetes.container.hash: e5fbd2e4,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:766d95e31a4c20e887ea0507498a14648dfa9763f4534917566cfff9c606549c,PodSandboxId:2c591744abdf098718ca96598758acc29b7488b36e69795383caf205cb94528b,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1713224579138291912,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5b77dbd7c4-4bkgx,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: b9e0936d-5b67-4653-8948-3379b22d134c,},Annota
tions:map[string]string{io.kubernetes.container.hash: d0e6c9f4,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8835414324bf6cd452ae95072b9528fe297b58c24b27a0db2115453467a363e4,PodSandboxId:44af6e8c65e80108ce3e123ec8829ffaffa67a5ed874da08b95e409545b7d747,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:31bad00311cb5eeb8a6648beadcf67277a175da89989f14727420a80e2e76742,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608,State:CONTAINER_RUNNING,CreatedAt:1713224570976122804,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: 690e4a21-f628-4663-bbca-1e4a84f05ea5,},Annotations:map[string]string{io.kubernetes.container.hash: 4d50081b,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2084061f3722f12eafc2909dba24108fc4379ef501908480487626286793672d,PodSandboxId:7cdad8480aa1876c121c891bf80df9042117895a363ff2ec0e555d7db1061c10,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1713224538751008699,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-7d69788767-88fgb,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 95cc5e0c-643d-4f77-a53f-2ce7f755c4c4,},Annotations:map[string]string{io.kubernetes.container.hash: 5c0d1ab6,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdeeb1e6000111bd230f0202d5505392730393eb915c55892a84369632f4425a,PodSandboxId:568e9760aeab5a37c9dffbd92e70be5cabc98088d40faddde6d2b156b58db199,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTA
INER_EXITED,CreatedAt:1713224459081374856,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-4br4k,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 12575f4a-7d1a-4076-894e-e4bf2900bf1a,},Annotations:map[string]string{io.kubernetes.container.hash: 52f1a113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b5a4c6a6fbf1fa918af56ebd2babf5de397f21e0e1169346b81f4a13ea293df,PodSandboxId:f7c52f8086e8a13811beba47e8aa55e09dd81006cc0622416deff73804703fd4,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c
4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1713224458967120456,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-h9cfs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 55e0d629-9343-4a80-99bc-b7027e1bbb7e,},Annotations:map[string]string{io.kubernetes.container.hash: 521e62f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c5609b754a8b749500724af8e27782a9c309915df27ca8012c41c5fb1461794,PodSandboxId:a4e4f65b4e9bcdae5eda88016d3eade4cbadab343902efa6134888f0d888417d,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259b
fb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1713224454197342784,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-n6g9b,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 1b344559-5993-46da-9d10-2d43f53cb585,},Annotations:map[string]string{io.kubernetes.container.hash: bf316bc2,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe94a642e91e437aaf7c31ee9f2323952498bd76d75882aa18162fe2c91ca85f,PodSandboxId:fd73ded136195d8c651ab2fde0d2aff7a9df32cb9b61f8baff0014459e76a991,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd9
6de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1713224437026811306,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-8pd96,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 50033a63-70d6-44c7-960f-b549de54a73d,},Annotations:map[string]string{io.kubernetes.container.hash: e92150f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87db3f3be812d12b1f2e74f9c47fc25d3518ebf3b5bf89ee4b0648e42063d5d8,PodSandboxId:9e10dafdf0679d61216198706ab601930530927852e129c170d35fcd86e5bb8f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617
342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713224388764687055,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: added6e4-2d08-4e57-848b-f1480badde64,},Annotations:map[string]string{io.kubernetes.container.hash: 2bb63857,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91342c5062078d47b75ae9bb9bbdefb905237eea8ff16729918d98a7461121c6,PodSandboxId:6228916ef8fac9d271de52b84190bea1d8efd1e83dd5dd6ad1bd2953418f0a35,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c0
0797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713224384545495575,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-4gr4t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f77fd4f4-3fd3-4b4b-9faa-52eae1857106,},Annotations:map[string]string{io.kubernetes.container.hash: 451354e0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b64e265e85e310e7d386add578fec4e7a268fd54bd6d900720863aa95596f99,PodSa
ndboxId:290e14f4786b1e5aa2b8cf55073320136d4ce85b5fe03adfdb93dd7a88d0cd47,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713224380868806624,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dvj6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72942788-0171-4c87-ae0e-f4186897c5ed,},Annotations:map[string]string{io.kubernetes.container.hash: 18f0bc1f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:601b388332cd73974ddc55444fcb80f099792b1b4039b097a850f49a112fb76b,PodSandboxId:535f975023e1c65583a409089
5f15c3a0cfe91b01c70c2ded3ef4a6126ac14a1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713224360652834729,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-045739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 864d7f1a5d447e4fd661dd454c4deec8,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:418c46e35bdc006fe24522769b87a68b44e93173f6b483a29ee4503bc6633d4a,PodSandboxId:efacf604034b89ffdfc097aaf95cee0a684be92afd
8c908c20786cdda3ab758a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713224360659963556,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-045739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae18745d11d504180f8c872d581b2cc5,},Annotations:map[string]string{io.kubernetes.container.hash: 9e54e440,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:491d51cd3487250ea0ed0b10e219d71bfc96befee4cece6cc7679d953bfc71e2,PodSandboxId:8119e1dfb2cca61300a5c8b2a5bd130e634de5f0e1260690afcd92007b9dd0ee,Metadata:&ContainerMetad
ata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713224360666117151,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-045739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5996b8cfce3fc71cf23f74f19ce50227,},Annotations:map[string]string{io.kubernetes.container.hash: 79af6f04,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:840cb49a2f995c682c2b633b3a541a6c6ecbd1b7cdc75f91dbb8e7d731af3dad,PodSandboxId:e48ce340dd2efbf2859bfc364a3ec7c5ee1860e9f6cc7ea13ec49fe1c73546fc,Metadata:&ContainerMetadata{Name:kube-con
troller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713224360594356004,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-045739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e142bb86a5406e83f68add3155ed1539,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1741d2b7-99a2-404f-89c0-f919e0665f7d name=/runtime.v1.RuntimeService/ListContainers
	Apr 15 23:45:21 addons-045739 crio[687]: time="2024-04-15 23:45:21.060549861Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=34763595-0957-42fa-8d64-433c1943801c name=/runtime.v1.RuntimeService/Version
	Apr 15 23:45:21 addons-045739 crio[687]: time="2024-04-15 23:45:21.060643918Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=34763595-0957-42fa-8d64-433c1943801c name=/runtime.v1.RuntimeService/Version
	Apr 15 23:45:21 addons-045739 crio[687]: time="2024-04-15 23:45:21.061998118Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e20324cd-f50d-4922-93c7-668e13bd90c7 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 15 23:45:21 addons-045739 crio[687]: time="2024-04-15 23:45:21.063754658Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713224721063721490,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:573324,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e20324cd-f50d-4922-93c7-668e13bd90c7 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 15 23:45:21 addons-045739 crio[687]: time="2024-04-15 23:45:21.064604463Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ede44847-2de1-41a4-8e0c-89d11980b099 name=/runtime.v1.RuntimeService/ListContainers
	Apr 15 23:45:21 addons-045739 crio[687]: time="2024-04-15 23:45:21.064661976Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ede44847-2de1-41a4-8e0c-89d11980b099 name=/runtime.v1.RuntimeService/ListContainers
	Apr 15 23:45:21 addons-045739 crio[687]: time="2024-04-15 23:45:21.065058792Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c62e7ff1bba821ae80c5ea592a128fa7e923484866492f17ab0868c8f6066131,PodSandboxId:a6dc54e57390721824a50065f54530f9b8dc968ce212b9a263de3acda267496f,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1713224713931118163,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-489bl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3eb75e4b-11d4-4971-9180-8313eb46fde9,},Annotations:map[string]string{io.kubernetes.container.hash: e5fbd2e4,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:766d95e31a4c20e887ea0507498a14648dfa9763f4534917566cfff9c606549c,PodSandboxId:2c591744abdf098718ca96598758acc29b7488b36e69795383caf205cb94528b,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1713224579138291912,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5b77dbd7c4-4bkgx,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: b9e0936d-5b67-4653-8948-3379b22d134c,},Annota
tions:map[string]string{io.kubernetes.container.hash: d0e6c9f4,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8835414324bf6cd452ae95072b9528fe297b58c24b27a0db2115453467a363e4,PodSandboxId:44af6e8c65e80108ce3e123ec8829ffaffa67a5ed874da08b95e409545b7d747,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:31bad00311cb5eeb8a6648beadcf67277a175da89989f14727420a80e2e76742,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608,State:CONTAINER_RUNNING,CreatedAt:1713224570976122804,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: 690e4a21-f628-4663-bbca-1e4a84f05ea5,},Annotations:map[string]string{io.kubernetes.container.hash: 4d50081b,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2084061f3722f12eafc2909dba24108fc4379ef501908480487626286793672d,PodSandboxId:7cdad8480aa1876c121c891bf80df9042117895a363ff2ec0e555d7db1061c10,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1713224538751008699,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-7d69788767-88fgb,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 95cc5e0c-643d-4f77-a53f-2ce7f755c4c4,},Annotations:map[string]string{io.kubernetes.container.hash: 5c0d1ab6,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdeeb1e6000111bd230f0202d5505392730393eb915c55892a84369632f4425a,PodSandboxId:568e9760aeab5a37c9dffbd92e70be5cabc98088d40faddde6d2b156b58db199,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTA
INER_EXITED,CreatedAt:1713224459081374856,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-4br4k,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 12575f4a-7d1a-4076-894e-e4bf2900bf1a,},Annotations:map[string]string{io.kubernetes.container.hash: 52f1a113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b5a4c6a6fbf1fa918af56ebd2babf5de397f21e0e1169346b81f4a13ea293df,PodSandboxId:f7c52f8086e8a13811beba47e8aa55e09dd81006cc0622416deff73804703fd4,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c
4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1713224458967120456,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-h9cfs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 55e0d629-9343-4a80-99bc-b7027e1bbb7e,},Annotations:map[string]string{io.kubernetes.container.hash: 521e62f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c5609b754a8b749500724af8e27782a9c309915df27ca8012c41c5fb1461794,PodSandboxId:a4e4f65b4e9bcdae5eda88016d3eade4cbadab343902efa6134888f0d888417d,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259b
fb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1713224454197342784,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-n6g9b,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 1b344559-5993-46da-9d10-2d43f53cb585,},Annotations:map[string]string{io.kubernetes.container.hash: bf316bc2,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe94a642e91e437aaf7c31ee9f2323952498bd76d75882aa18162fe2c91ca85f,PodSandboxId:fd73ded136195d8c651ab2fde0d2aff7a9df32cb9b61f8baff0014459e76a991,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd9
6de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1713224437026811306,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-8pd96,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 50033a63-70d6-44c7-960f-b549de54a73d,},Annotations:map[string]string{io.kubernetes.container.hash: e92150f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87db3f3be812d12b1f2e74f9c47fc25d3518ebf3b5bf89ee4b0648e42063d5d8,PodSandboxId:9e10dafdf0679d61216198706ab601930530927852e129c170d35fcd86e5bb8f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617
342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713224388764687055,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: added6e4-2d08-4e57-848b-f1480badde64,},Annotations:map[string]string{io.kubernetes.container.hash: 2bb63857,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91342c5062078d47b75ae9bb9bbdefb905237eea8ff16729918d98a7461121c6,PodSandboxId:6228916ef8fac9d271de52b84190bea1d8efd1e83dd5dd6ad1bd2953418f0a35,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c0
0797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713224384545495575,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-4gr4t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f77fd4f4-3fd3-4b4b-9faa-52eae1857106,},Annotations:map[string]string{io.kubernetes.container.hash: 451354e0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b64e265e85e310e7d386add578fec4e7a268fd54bd6d900720863aa95596f99,PodSa
ndboxId:290e14f4786b1e5aa2b8cf55073320136d4ce85b5fe03adfdb93dd7a88d0cd47,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713224380868806624,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dvj6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72942788-0171-4c87-ae0e-f4186897c5ed,},Annotations:map[string]string{io.kubernetes.container.hash: 18f0bc1f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:601b388332cd73974ddc55444fcb80f099792b1b4039b097a850f49a112fb76b,PodSandboxId:535f975023e1c65583a409089
5f15c3a0cfe91b01c70c2ded3ef4a6126ac14a1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713224360652834729,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-045739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 864d7f1a5d447e4fd661dd454c4deec8,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:418c46e35bdc006fe24522769b87a68b44e93173f6b483a29ee4503bc6633d4a,PodSandboxId:efacf604034b89ffdfc097aaf95cee0a684be92afd
8c908c20786cdda3ab758a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713224360659963556,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-045739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae18745d11d504180f8c872d581b2cc5,},Annotations:map[string]string{io.kubernetes.container.hash: 9e54e440,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:491d51cd3487250ea0ed0b10e219d71bfc96befee4cece6cc7679d953bfc71e2,PodSandboxId:8119e1dfb2cca61300a5c8b2a5bd130e634de5f0e1260690afcd92007b9dd0ee,Metadata:&ContainerMetad
ata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713224360666117151,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-045739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5996b8cfce3fc71cf23f74f19ce50227,},Annotations:map[string]string{io.kubernetes.container.hash: 79af6f04,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:840cb49a2f995c682c2b633b3a541a6c6ecbd1b7cdc75f91dbb8e7d731af3dad,PodSandboxId:e48ce340dd2efbf2859bfc364a3ec7c5ee1860e9f6cc7ea13ec49fe1c73546fc,Metadata:&ContainerMetadata{Name:kube-con
troller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713224360594356004,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-045739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e142bb86a5406e83f68add3155ed1539,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ede44847-2de1-41a4-8e0c-89d11980b099 name=/runtime.v1.RuntimeService/ListContainers
	Apr 15 23:45:21 addons-045739 crio[687]: time="2024-04-15 23:45:21.112825580Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b5aafc0d-4aa1-4e5e-9bca-36487416e2a3 name=/runtime.v1.RuntimeService/Version
	Apr 15 23:45:21 addons-045739 crio[687]: time="2024-04-15 23:45:21.113233547Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b5aafc0d-4aa1-4e5e-9bca-36487416e2a3 name=/runtime.v1.RuntimeService/Version
	Apr 15 23:45:21 addons-045739 crio[687]: time="2024-04-15 23:45:21.114918621Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1a38a3ba-a7af-4bf4-9258-f553c376fed1 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 15 23:45:21 addons-045739 crio[687]: time="2024-04-15 23:45:21.116672388Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713224721116638933,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:573324,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1a38a3ba-a7af-4bf4-9258-f553c376fed1 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 15 23:45:21 addons-045739 crio[687]: time="2024-04-15 23:45:21.117630711Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=361e9ed6-34d2-4833-b095-15322fdd7787 name=/runtime.v1.RuntimeService/ListContainers
	Apr 15 23:45:21 addons-045739 crio[687]: time="2024-04-15 23:45:21.117721353Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=361e9ed6-34d2-4833-b095-15322fdd7787 name=/runtime.v1.RuntimeService/ListContainers
	Apr 15 23:45:21 addons-045739 crio[687]: time="2024-04-15 23:45:21.118121233Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c62e7ff1bba821ae80c5ea592a128fa7e923484866492f17ab0868c8f6066131,PodSandboxId:a6dc54e57390721824a50065f54530f9b8dc968ce212b9a263de3acda267496f,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1713224713931118163,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-489bl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3eb75e4b-11d4-4971-9180-8313eb46fde9,},Annotations:map[string]string{io.kubernetes.container.hash: e5fbd2e4,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:766d95e31a4c20e887ea0507498a14648dfa9763f4534917566cfff9c606549c,PodSandboxId:2c591744abdf098718ca96598758acc29b7488b36e69795383caf205cb94528b,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7373e995f4086a9db4ce8b2f96af2c2ae7f319e3e7e2ebdc1291e9c50ae4437e,State:CONTAINER_RUNNING,CreatedAt:1713224579138291912,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5b77dbd7c4-4bkgx,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: b9e0936d-5b67-4653-8948-3379b22d134c,},Annota
tions:map[string]string{io.kubernetes.container.hash: d0e6c9f4,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8835414324bf6cd452ae95072b9528fe297b58c24b27a0db2115453467a363e4,PodSandboxId:44af6e8c65e80108ce3e123ec8829ffaffa67a5ed874da08b95e409545b7d747,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:31bad00311cb5eeb8a6648beadcf67277a175da89989f14727420a80e2e76742,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608,State:CONTAINER_RUNNING,CreatedAt:1713224570976122804,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: defaul
t,io.kubernetes.pod.uid: 690e4a21-f628-4663-bbca-1e4a84f05ea5,},Annotations:map[string]string{io.kubernetes.container.hash: 4d50081b,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2084061f3722f12eafc2909dba24108fc4379ef501908480487626286793672d,PodSandboxId:7cdad8480aa1876c121c891bf80df9042117895a363ff2ec0e555d7db1061c10,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1713224538751008699,Labels:map[string]string{io.kubernetes.container.name: gcp-a
uth,io.kubernetes.pod.name: gcp-auth-7d69788767-88fgb,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 95cc5e0c-643d-4f77-a53f-2ce7f755c4c4,},Annotations:map[string]string{io.kubernetes.container.hash: 5c0d1ab6,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdeeb1e6000111bd230f0202d5505392730393eb915c55892a84369632f4425a,PodSandboxId:568e9760aeab5a37c9dffbd92e70be5cabc98088d40faddde6d2b156b58db199,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTA
INER_EXITED,CreatedAt:1713224459081374856,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-4br4k,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 12575f4a-7d1a-4076-894e-e4bf2900bf1a,},Annotations:map[string]string{io.kubernetes.container.hash: 52f1a113,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b5a4c6a6fbf1fa918af56ebd2babf5de397f21e0e1169346b81f4a13ea293df,PodSandboxId:f7c52f8086e8a13811beba47e8aa55e09dd81006cc0622416deff73804703fd4,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c
4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1713224458967120456,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-h9cfs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 55e0d629-9343-4a80-99bc-b7027e1bbb7e,},Annotations:map[string]string{io.kubernetes.container.hash: 521e62f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c5609b754a8b749500724af8e27782a9c309915df27ca8012c41c5fb1461794,PodSandboxId:a4e4f65b4e9bcdae5eda88016d3eade4cbadab343902efa6134888f0d888417d,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259b
fb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1713224454197342784,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-n6g9b,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 1b344559-5993-46da-9d10-2d43f53cb585,},Annotations:map[string]string{io.kubernetes.container.hash: bf316bc2,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe94a642e91e437aaf7c31ee9f2323952498bd76d75882aa18162fe2c91ca85f,PodSandboxId:fd73ded136195d8c651ab2fde0d2aff7a9df32cb9b61f8baff0014459e76a991,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd9
6de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1713224437026811306,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-8pd96,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 50033a63-70d6-44c7-960f-b549de54a73d,},Annotations:map[string]string{io.kubernetes.container.hash: e92150f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87db3f3be812d12b1f2e74f9c47fc25d3518ebf3b5bf89ee4b0648e42063d5d8,PodSandboxId:9e10dafdf0679d61216198706ab601930530927852e129c170d35fcd86e5bb8f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617
342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713224388764687055,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: added6e4-2d08-4e57-848b-f1480badde64,},Annotations:map[string]string{io.kubernetes.container.hash: 2bb63857,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91342c5062078d47b75ae9bb9bbdefb905237eea8ff16729918d98a7461121c6,PodSandboxId:6228916ef8fac9d271de52b84190bea1d8efd1e83dd5dd6ad1bd2953418f0a35,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c0
0797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713224384545495575,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-4gr4t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f77fd4f4-3fd3-4b4b-9faa-52eae1857106,},Annotations:map[string]string{io.kubernetes.container.hash: 451354e0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b64e265e85e310e7d386add578fec4e7a268fd54bd6d900720863aa95596f99,PodSa
ndboxId:290e14f4786b1e5aa2b8cf55073320136d4ce85b5fe03adfdb93dd7a88d0cd47,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713224380868806624,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dvj6w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 72942788-0171-4c87-ae0e-f4186897c5ed,},Annotations:map[string]string{io.kubernetes.container.hash: 18f0bc1f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:601b388332cd73974ddc55444fcb80f099792b1b4039b097a850f49a112fb76b,PodSandboxId:535f975023e1c65583a409089
5f15c3a0cfe91b01c70c2ded3ef4a6126ac14a1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713224360652834729,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-045739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 864d7f1a5d447e4fd661dd454c4deec8,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:418c46e35bdc006fe24522769b87a68b44e93173f6b483a29ee4503bc6633d4a,PodSandboxId:efacf604034b89ffdfc097aaf95cee0a684be92afd
8c908c20786cdda3ab758a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713224360659963556,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-045739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae18745d11d504180f8c872d581b2cc5,},Annotations:map[string]string{io.kubernetes.container.hash: 9e54e440,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:491d51cd3487250ea0ed0b10e219d71bfc96befee4cece6cc7679d953bfc71e2,PodSandboxId:8119e1dfb2cca61300a5c8b2a5bd130e634de5f0e1260690afcd92007b9dd0ee,Metadata:&ContainerMetad
ata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713224360666117151,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-045739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5996b8cfce3fc71cf23f74f19ce50227,},Annotations:map[string]string{io.kubernetes.container.hash: 79af6f04,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:840cb49a2f995c682c2b633b3a541a6c6ecbd1b7cdc75f91dbb8e7d731af3dad,PodSandboxId:e48ce340dd2efbf2859bfc364a3ec7c5ee1860e9f6cc7ea13ec49fe1c73546fc,Metadata:&ContainerMetadata{Name:kube-con
troller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713224360594356004,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-045739,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e142bb86a5406e83f68add3155ed1539,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=361e9ed6-34d2-4833-b095-15322fdd7787 name=/runtime.v1.RuntimeService/ListContainers
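	
	The Version, ImageFsInfo and ListContainers request/response pairs above are ordinary CRI gRPC calls served by CRI-O on the node's CRI socket (unix:///var/run/crio/crio.sock, per the kubeadm cri-socket annotation in the node description further down); the near-identical ListContainers responses under different request ids are successive polls of the same endpoint, not a stuck runtime. A minimal sketch of reproducing the same queries by hand, assuming crictl is present inside the minikube VM (if crictl is not preconfigured there, pass --runtime-endpoint unix:///var/run/crio/crio.sock explicitly):
	
	  $ out/minikube-linux-amd64 -p addons-045739 ssh "sudo crictl version"        # RuntimeService/Version
	  $ out/minikube-linux-amd64 -p addons-045739 ssh "sudo crictl imagefsinfo"    # ImageService/ImageFsInfo
	  $ out/minikube-linux-amd64 -p addons-045739 ssh "sudo crictl ps -a"          # RuntimeService/ListContainers with no filter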
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c62e7ff1bba82       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      7 seconds ago       Running             hello-world-app           0                   a6dc54e573907       hello-world-app-5d77478584-489bl
	766d95e31a4c2       ghcr.io/headlamp-k8s/headlamp@sha256:9d84f30d4c5e54cdc40f63b060e93ba6a0cd8a4c05d28d7cda4cd14f6b56490f                        2 minutes ago       Running             headlamp                  0                   2c591744abdf0       headlamp-5b77dbd7c4-4bkgx
	8835414324bf6       docker.io/library/nginx@sha256:31bad00311cb5eeb8a6648beadcf67277a175da89989f14727420a80e2e76742                              2 minutes ago       Running             nginx                     0                   44af6e8c65e80       nginx
	2084061f3722f       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 3 minutes ago       Running             gcp-auth                  0                   7cdad8480aa18       gcp-auth-7d69788767-88fgb
	bdeeb1e600011       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023   4 minutes ago       Exited              patch                     0                   568e9760aeab5       ingress-nginx-admission-patch-4br4k
	4b5a4c6a6fbf1       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023   4 minutes ago       Exited              create                    0                   f7c52f8086e8a       ingress-nginx-admission-create-h9cfs
	3c5609b754a8b       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              4 minutes ago       Running             yakd                      0                   a4e4f65b4e9bc       yakd-dashboard-9947fc6bf-n6g9b
	fe94a642e91e4       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             4 minutes ago       Running             local-path-provisioner    0                   fd73ded136195       local-path-provisioner-78b46b4d5c-8pd96
	87db3f3be812d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Running             storage-provisioner       0                   9e10dafdf0679       storage-provisioner
	91342c5062078       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             5 minutes ago       Running             coredns                   0                   6228916ef8fac       coredns-76f75df574-4gr4t
	3b64e265e85e3       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                                             5 minutes ago       Running             kube-proxy                0                   290e14f4786b1       kube-proxy-dvj6w
	491d51cd34872       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                                             6 minutes ago       Running             kube-apiserver            0                   8119e1dfb2cca       kube-apiserver-addons-045739
	418c46e35bdc0       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                             6 minutes ago       Running             etcd                      0                   efacf604034b8       etcd-addons-045739
	601b388332cd7       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                                             6 minutes ago       Running             kube-scheduler            0                   535f975023e1c       kube-scheduler-addons-045739
	840cb49a2f995       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                                             6 minutes ago       Running             kube-controller-manager   0                   e48ce340dd2ef       kube-controller-manager-addons-045739
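	
	Each row above is a CRI-level container: the first column is a truncated container ID and POD ID is the owning sandbox. To drill into a single entry, for example the hello-world-app container from this run (a sketch; crictl generally accepts an unambiguous ID prefix, otherwise use the full ID from the log above):
	
	  $ out/minikube-linux-amd64 -p addons-045739 ssh "sudo crictl inspect c62e7ff1bba82"   # full container JSON: state, image, mounts, labels
	  $ kubectl --context addons-045739 get pods -A -o wide                                 # the same workloads viewed at the pod level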
	
	
	==> coredns [91342c5062078d47b75ae9bb9bbdefb905237eea8ff16729918d98a7461121c6] <==
	[INFO] 10.244.0.21:46889 - 1645 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000139987s
	[INFO] 10.244.0.21:33536 - 3115 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000154569s
	[INFO] 10.244.0.21:33536 - 47590 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000046897s
	[INFO] 10.244.0.21:46889 - 30288 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000069898s
	[INFO] 10.244.0.21:33536 - 55965 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000055322s
	[INFO] 10.244.0.21:46889 - 854 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000069827s
	[INFO] 10.244.0.21:46889 - 24539 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000139306s
	[INFO] 10.244.0.21:33536 - 27725 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000046843s
	[INFO] 10.244.0.21:46889 - 56870 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000173968s
	[INFO] 10.244.0.21:33536 - 32243 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000026984s
	[INFO] 10.244.0.21:33536 - 27518 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000222496s
	[INFO] 10.244.0.21:34822 - 45423 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000112528s
	[INFO] 10.244.0.21:38008 - 8727 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000222798s
	[INFO] 10.244.0.21:38008 - 4075 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000108598s
	[INFO] 10.244.0.21:38008 - 10642 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000127615s
	[INFO] 10.244.0.21:34822 - 22093 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000072062s
	[INFO] 10.244.0.21:34822 - 51326 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000179403s
	[INFO] 10.244.0.21:38008 - 25626 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000085269s
	[INFO] 10.244.0.21:38008 - 45408 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000127784s
	[INFO] 10.244.0.21:34822 - 21616 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000067092s
	[INFO] 10.244.0.21:34822 - 8966 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000280305s
	[INFO] 10.244.0.21:38008 - 60366 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000137961s
	[INFO] 10.244.0.21:34822 - 50913 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000201536s
	[INFO] 10.244.0.21:38008 - 16557 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000115583s
	[INFO] 10.244.0.21:34822 - 44093 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000211267s
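	
	The NXDOMAIN/NOERROR pattern above is ordinary search-path expansion, not a resolution failure: with the default pod resolver settings (ndots:5), a lookup of hello-world-app.default.svc.cluster.local is first retried with each search suffix appended, and only the final absolute query answers NOERROR. One way to confirm the resolver configuration driving this is to exec into the nginx pod from this run; the commented contents below are typical values, not output captured in this report:
	
	  $ kubectl --context addons-045739 exec nginx -- cat /etc/resolv.conf
	  # typically:
	  #   search default.svc.cluster.local svc.cluster.local cluster.local
	  #   nameserver <cluster DNS service IP>
	  #   options ndots:5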
	
	
	==> describe nodes <==
	Name:               addons-045739
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-045739
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e98a202b00bb48daab8bbee2f0c7bcf1556f8388
	                    minikube.k8s.io/name=addons-045739
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_15T23_39_27_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-045739
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Apr 2024 23:39:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-045739
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Apr 2024 23:45:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Apr 2024 23:43:34 +0000   Mon, 15 Apr 2024 23:39:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Apr 2024 23:43:34 +0000   Mon, 15 Apr 2024 23:39:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Apr 2024 23:43:34 +0000   Mon, 15 Apr 2024 23:39:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Apr 2024 23:43:34 +0000   Mon, 15 Apr 2024 23:39:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.182
	  Hostname:    addons-045739
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 4d2e60e7e61648399bbcc451120441ba
	  System UUID:                4d2e60e7-e616-4839-9bbc-c451120441ba
	  Boot ID:                    e3104654-dcbb-4769-ab7e-c6bce18fbbe7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-489bl           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m42s
	  gcp-auth                    gcp-auth-7d69788767-88fgb                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m25s
	  headlamp                    headlamp-5b77dbd7c4-4bkgx                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                 coredns-76f75df574-4gr4t                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m41s
	  kube-system                 etcd-addons-045739                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m53s
	  kube-system                 kube-apiserver-addons-045739               250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m53s
	  kube-system                 kube-controller-manager-addons-045739      200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m54s
	  kube-system                 kube-proxy-dvj6w                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m42s
	  kube-system                 kube-scheduler-addons-045739               100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m54s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m35s
	  local-path-storage          local-path-provisioner-78b46b4d5c-8pd96    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m35s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-n6g9b             0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     5m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (7%)  426Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m39s  kube-proxy       
	  Normal  Starting                 5m54s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m54s  kubelet          Node addons-045739 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m54s  kubelet          Node addons-045739 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m54s  kubelet          Node addons-045739 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m54s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m53s  kubelet          Node addons-045739 status is now: NodeReady
	  Normal  RegisteredNode           5m42s  node-controller  Node addons-045739 event: Registered Node addons-045739 in Controller
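	
	The node description above is the scheduler's view of addons-045739 at collection time; the Allocatable block is the budget that the pod requests in the table are charged against. It can be regenerated against the same profile with (a sketch using this cluster's kubectl context):
	
	  $ kubectl --context addons-045739 describe node addons-045739
	  $ kubectl --context addons-045739 get node addons-045739 -o jsonpath='{.status.allocatable}'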
	
	
	==> dmesg <==
	[  +5.052804] kauditd_printk_skb: 81 callbacks suppressed
	[  +5.014663] kauditd_printk_skb: 87 callbacks suppressed
	[Apr15 23:40] kauditd_printk_skb: 66 callbacks suppressed
	[ +31.068981] kauditd_printk_skb: 4 callbacks suppressed
	[  +9.781276] kauditd_printk_skb: 4 callbacks suppressed
	[ +10.329046] kauditd_printk_skb: 30 callbacks suppressed
	[Apr15 23:41] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.052046] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.357360] kauditd_printk_skb: 48 callbacks suppressed
	[ +11.158636] kauditd_printk_skb: 7 callbacks suppressed
	[ +28.962597] kauditd_printk_skb: 24 callbacks suppressed
	[Apr15 23:42] kauditd_printk_skb: 24 callbacks suppressed
	[  +8.326185] kauditd_printk_skb: 3 callbacks suppressed
	[  +5.422367] kauditd_printk_skb: 16 callbacks suppressed
	[ +13.066013] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.061702] kauditd_printk_skb: 40 callbacks suppressed
	[  +8.603563] kauditd_printk_skb: 57 callbacks suppressed
	[  +5.623445] kauditd_printk_skb: 48 callbacks suppressed
	[Apr15 23:43] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.097855] kauditd_printk_skb: 10 callbacks suppressed
	[  +6.056588] kauditd_printk_skb: 37 callbacks suppressed
	[  +5.435529] kauditd_printk_skb: 6 callbacks suppressed
	[  +7.311879] kauditd_printk_skb: 25 callbacks suppressed
	[Apr15 23:45] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.068748] kauditd_printk_skb: 17 callbacks suppressed
	
	
	==> etcd [418c46e35bdc006fe24522769b87a68b44e93173f6b483a29ee4503bc6633d4a] <==
	{"level":"warn","ts":"2024-04-15T23:40:58.845462Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"275.389319ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11167"}
	{"level":"info","ts":"2024-04-15T23:40:58.845532Z","caller":"traceutil/trace.go:171","msg":"trace[1128157411] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1007; }","duration":"275.476496ms","start":"2024-04-15T23:40:58.570046Z","end":"2024-04-15T23:40:58.845523Z","steps":["trace[1128157411] 'agreement among raft nodes before linearized reading'  (duration: 275.370265ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-15T23:40:58.845391Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"175.016817ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85270"}
	{"level":"info","ts":"2024-04-15T23:40:58.845678Z","caller":"traceutil/trace.go:171","msg":"trace[1911494917] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1007; }","duration":"175.334617ms","start":"2024-04-15T23:40:58.670332Z","end":"2024-04-15T23:40:58.845666Z","steps":["trace[1911494917] 'agreement among raft nodes before linearized reading'  (duration: 174.888322ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-15T23:40:58.845813Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.694263ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14077"}
	{"level":"info","ts":"2024-04-15T23:40:58.846547Z","caller":"traceutil/trace.go:171","msg":"trace[227247739] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1007; }","duration":"126.452293ms","start":"2024-04-15T23:40:58.720081Z","end":"2024-04-15T23:40:58.846534Z","steps":["trace[227247739] 'agreement among raft nodes before linearized reading'  (duration: 125.652159ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-15T23:41:15.897353Z","caller":"traceutil/trace.go:171","msg":"trace[422238727] linearizableReadLoop","detail":"{readStateIndex:1160; appliedIndex:1159; }","duration":"108.831355ms","start":"2024-04-15T23:41:15.788468Z","end":"2024-04-15T23:41:15.897299Z","steps":["trace[422238727] 'read index received'  (duration: 108.557282ms)","trace[422238727] 'applied index is now lower than readState.Index'  (duration: 273.255µs)"],"step_count":2}
	{"level":"info","ts":"2024-04-15T23:41:15.897919Z","caller":"traceutil/trace.go:171","msg":"trace[1582214478] transaction","detail":"{read_only:false; response_revision:1125; number_of_response:1; }","duration":"131.772198ms","start":"2024-04-15T23:41:15.766119Z","end":"2024-04-15T23:41:15.897892Z","steps":["trace[1582214478] 'process raft request'  (duration: 130.970174ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-15T23:41:15.898296Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.684513ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-15T23:41:15.898346Z","caller":"traceutil/trace.go:171","msg":"trace[2133352386] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1125; }","duration":"109.933448ms","start":"2024-04-15T23:41:15.7884Z","end":"2024-04-15T23:41:15.898334Z","steps":["trace[2133352386] 'agreement among raft nodes before linearized reading'  (duration: 109.695336ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-15T23:42:11.570004Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":15454540706758385915,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-04-15T23:42:11.609367Z","caller":"traceutil/trace.go:171","msg":"trace[1283788472] transaction","detail":"{read_only:false; response_revision:1230; number_of_response:1; }","duration":"543.332979ms","start":"2024-04-15T23:42:11.065947Z","end":"2024-04-15T23:42:11.60928Z","steps":["trace[1283788472] 'process raft request'  (duration: 542.974754ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-15T23:42:11.610504Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-15T23:42:11.065925Z","time spent":"544.219211ms","remote":"127.0.0.1:48126","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":538,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1223 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:451 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"info","ts":"2024-04-15T23:42:11.622674Z","caller":"traceutil/trace.go:171","msg":"trace[1821216864] linearizableReadLoop","detail":"{readStateIndex:1279; appliedIndex:1278; }","duration":"553.360994ms","start":"2024-04-15T23:42:11.069291Z","end":"2024-04-15T23:42:11.622652Z","steps":["trace[1821216864] 'read index received'  (duration: 540.681424ms)","trace[1821216864] 'applied index is now lower than readState.Index'  (duration: 12.67865ms)"],"step_count":2}
	{"level":"warn","ts":"2024-04-15T23:42:11.623031Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"553.723543ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:1 size:4367"}
	{"level":"info","ts":"2024-04-15T23:42:11.623065Z","caller":"traceutil/trace.go:171","msg":"trace[1403892246] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:1; response_revision:1230; }","duration":"553.797287ms","start":"2024-04-15T23:42:11.069258Z","end":"2024-04-15T23:42:11.623055Z","steps":["trace[1403892246] 'agreement among raft nodes before linearized reading'  (duration: 553.682702ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-15T23:42:11.623098Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-15T23:42:11.069241Z","time spent":"553.852287ms","remote":"127.0.0.1:48032","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":1,"response size":4390,"request content":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" "}
	{"level":"warn","ts":"2024-04-15T23:42:11.623427Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"406.850338ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14363"}
	{"level":"info","ts":"2024-04-15T23:42:11.62345Z","caller":"traceutil/trace.go:171","msg":"trace[976948694] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1230; }","duration":"406.902038ms","start":"2024-04-15T23:42:11.216541Z","end":"2024-04-15T23:42:11.623443Z","steps":["trace[976948694] 'agreement among raft nodes before linearized reading'  (duration: 406.816088ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-15T23:42:11.623472Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-15T23:42:11.216523Z","time spent":"406.943898ms","remote":"127.0.0.1:48032","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":3,"response size":14386,"request content":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" "}
	{"level":"warn","ts":"2024-04-15T23:42:18.211417Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"144.446066ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:1 size:4367"}
	{"level":"warn","ts":"2024-04-15T23:42:18.211584Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"423.250504ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-15T23:42:18.211647Z","caller":"traceutil/trace.go:171","msg":"trace[60654860] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1253; }","duration":"423.365108ms","start":"2024-04-15T23:42:17.788272Z","end":"2024-04-15T23:42:18.211637Z","steps":["trace[60654860] 'range keys from in-memory index tree'  (duration: 423.162351ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-15T23:42:18.211704Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-15T23:42:17.788223Z","time spent":"423.470689ms","remote":"127.0.0.1:47838","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-04-15T23:42:18.211563Z","caller":"traceutil/trace.go:171","msg":"trace[675641354] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:1; response_revision:1253; }","duration":"144.648491ms","start":"2024-04-15T23:42:18.066898Z","end":"2024-04-15T23:42:18.211547Z","steps":["trace[675641354] 'range keys from in-memory index tree'  (duration: 144.315538ms)"],"step_count":1}
	
	
	==> gcp-auth [2084061f3722f12eafc2909dba24108fc4379ef501908480487626286793672d] <==
	2024/04/15 23:42:26 Ready to write response ...
	2024/04/15 23:42:32 Ready to marshal response ...
	2024/04/15 23:42:32 Ready to write response ...
	2024/04/15 23:42:33 Ready to marshal response ...
	2024/04/15 23:42:33 Ready to write response ...
	2024/04/15 23:42:37 Ready to marshal response ...
	2024/04/15 23:42:37 Ready to write response ...
	2024/04/15 23:42:39 Ready to marshal response ...
	2024/04/15 23:42:39 Ready to write response ...
	2024/04/15 23:42:47 Ready to marshal response ...
	2024/04/15 23:42:47 Ready to write response ...
	2024/04/15 23:42:47 Ready to marshal response ...
	2024/04/15 23:42:47 Ready to write response ...
	2024/04/15 23:42:47 Ready to marshal response ...
	2024/04/15 23:42:47 Ready to write response ...
	2024/04/15 23:42:49 Ready to marshal response ...
	2024/04/15 23:42:49 Ready to write response ...
	2024/04/15 23:42:49 Ready to marshal response ...
	2024/04/15 23:42:49 Ready to write response ...
	2024/04/15 23:43:10 Ready to marshal response ...
	2024/04/15 23:43:10 Ready to write response ...
	2024/04/15 23:43:12 Ready to marshal response ...
	2024/04/15 23:43:12 Ready to write response ...
	2024/04/15 23:45:09 Ready to marshal response ...
	2024/04/15 23:45:09 Ready to write response ...
	
	
	==> kernel <==
	 23:45:21 up 6 min,  0 users,  load average: 1.81, 1.77, 0.93
	Linux addons-045739 5.10.207 #1 SMP Mon Apr 15 15:01:07 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [491d51cd3487250ea0ed0b10e219d71bfc96befee4cece6cc7679d953bfc71e2] <==
	I0415 23:42:11.627358       1 trace.go:236] Trace[939968150]: "List" accept:application/json, */*,audit-id:f7dc6dea-e428-4c59-8524-aa7aa30a1fc7,client:192.168.39.1,api-group:,api-version:v1,name:,subresource:,namespace:gcp-auth,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/gcp-auth/pods,user-agent:minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,verb:LIST (15-Apr-2024 23:42:11.068) (total time: 558ms):
	Trace[939968150]: ["List(recursive=true) etcd3" audit-id:f7dc6dea-e428-4c59-8524-aa7aa30a1fc7,key:/pods/gcp-auth,resourceVersion:,resourceVersionMatch:,limit:0,continue: 558ms (23:42:11.068)]
	Trace[939968150]: [558.84552ms] [558.84552ms] END
	E0415 23:42:30.894583       1 upgradeaware.go:425] Error proxying data from client to backend: read tcp 192.168.39.182:8443->10.244.0.23:56488: read: connection reset by peer
	I0415 23:42:32.649557       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0415 23:42:33.278061       1 handler.go:275] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0415 23:42:34.314967       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0415 23:42:39.243963       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0415 23:42:39.525368       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.154.25"}
	I0415 23:42:47.502086       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.105.107.70"}
	I0415 23:42:53.732138       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0415 23:43:27.735559       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0415 23:43:27.738540       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0415 23:43:27.793941       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0415 23:43:27.794015       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0415 23:43:27.805484       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0415 23:43:27.805777       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0415 23:43:27.907561       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0415 23:43:27.907803       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0415 23:43:27.993232       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0415 23:43:27.993290       1 handler.go:275] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0415 23:43:28.907242       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0415 23:43:28.993932       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0415 23:43:28.998050       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0415 23:45:09.317491       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.217.253"}
	
	
	==> kube-controller-manager [840cb49a2f995c682c2b633b3a541a6c6ecbd1b7cdc75f91dbb8e7d731af3dad] <==
	W0415 23:44:02.414335       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0415 23:44:02.414369       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0415 23:44:03.932408       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0415 23:44:03.932550       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0415 23:44:06.759755       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0415 23:44:06.759791       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0415 23:44:33.978283       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0415 23:44:33.978421       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0415 23:44:46.143946       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0415 23:44:46.144103       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0415 23:44:52.595780       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0415 23:44:52.595834       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0415 23:44:54.071372       1 reflector.go:539] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0415 23:44:54.071422       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0415 23:45:09.102623       1 event.go:376] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0415 23:45:09.144861       1 event.go:376] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-489bl"
	I0415 23:45:09.170959       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="68.409313ms"
	I0415 23:45:09.190914       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="19.233934ms"
	I0415 23:45:09.194080       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="48.357µs"
	I0415 23:45:09.201856       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="93.765µs"
	I0415 23:45:13.337618       1 job_controller.go:554] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0415 23:45:13.352357       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-65496f9567" duration="9.722µs"
	I0415 23:45:13.365569       1 job_controller.go:554] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0415 23:45:14.501328       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="15.46424ms"
	I0415 23:45:14.501452       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="61.953µs"
	
	
	==> kube-proxy [3b64e265e85e310e7d386add578fec4e7a268fd54bd6d900720863aa95596f99] <==
	I0415 23:39:42.035245       1 server_others.go:72] "Using iptables proxy"
	I0415 23:39:42.059938       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.182"]
	I0415 23:39:42.154645       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0415 23:39:42.154665       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0415 23:39:42.154679       1 server_others.go:168] "Using iptables Proxier"
	I0415 23:39:42.164831       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0415 23:39:42.165024       1 server.go:865] "Version info" version="v1.29.3"
	I0415 23:39:42.165035       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0415 23:39:42.166456       1 config.go:188] "Starting service config controller"
	I0415 23:39:42.166482       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0415 23:39:42.166501       1 config.go:97] "Starting endpoint slice config controller"
	I0415 23:39:42.166504       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0415 23:39:42.168536       1 config.go:315] "Starting node config controller"
	I0415 23:39:42.168548       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0415 23:39:42.267615       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0415 23:39:42.267643       1 shared_informer.go:318] Caches are synced for service config
	I0415 23:39:42.270779       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [601b388332cd73974ddc55444fcb80f099792b1b4039b097a850f49a112fb76b] <==
	W0415 23:39:23.974694       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0415 23:39:23.974728       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0415 23:39:23.974779       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0415 23:39:23.974812       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0415 23:39:23.974411       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0415 23:39:23.976923       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0415 23:39:24.909905       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0415 23:39:24.909964       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0415 23:39:24.983414       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0415 23:39:24.983481       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0415 23:39:25.047585       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0415 23:39:25.047646       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0415 23:39:25.085067       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0415 23:39:25.085268       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0415 23:39:25.101118       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0415 23:39:25.101342       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0415 23:39:25.162364       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0415 23:39:25.162424       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0415 23:39:25.172827       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0415 23:39:25.172889       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0415 23:39:25.187789       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0415 23:39:25.187840       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0415 23:39:25.311742       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0415 23:39:25.311817       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0415 23:39:28.007669       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 15 23:45:09 addons-045739 kubelet[1285]: I0415 23:45:09.165231    1285 memory_manager.go:354] "RemoveStaleState removing state" podUID="25946b19-fb98-4698-be95-8d1f44b9f9b9" containerName="helper-pod"
	Apr 15 23:45:09 addons-045739 kubelet[1285]: I0415 23:45:09.165241    1285 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e7bbe3e-8e52-4b5a-aa90-caca8356db75" containerName="volume-snapshot-controller"
	Apr 15 23:45:09 addons-045739 kubelet[1285]: I0415 23:45:09.165272    1285 memory_manager.go:354] "RemoveStaleState removing state" podUID="da2f75e3-b62a-4b19-b85f-969bb4c22f78" containerName="csi-provisioner"
	Apr 15 23:45:09 addons-045739 kubelet[1285]: I0415 23:45:09.165366    1285 memory_manager.go:354] "RemoveStaleState removing state" podUID="2210d05a-4960-4e6c-87e5-535a38e8cce7" containerName="csi-resizer"
	Apr 15 23:45:09 addons-045739 kubelet[1285]: I0415 23:45:09.165375    1285 memory_manager.go:354] "RemoveStaleState removing state" podUID="da2f75e3-b62a-4b19-b85f-969bb4c22f78" containerName="hostpath"
	Apr 15 23:45:09 addons-045739 kubelet[1285]: I0415 23:45:09.242519    1285 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/3eb75e4b-11d4-4971-9180-8313eb46fde9-gcp-creds\") pod \"hello-world-app-5d77478584-489bl\" (UID: \"3eb75e4b-11d4-4971-9180-8313eb46fde9\") " pod="default/hello-world-app-5d77478584-489bl"
	Apr 15 23:45:09 addons-045739 kubelet[1285]: I0415 23:45:09.243094    1285 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9bx7\" (UniqueName: \"kubernetes.io/projected/3eb75e4b-11d4-4971-9180-8313eb46fde9-kube-api-access-f9bx7\") pod \"hello-world-app-5d77478584-489bl\" (UID: \"3eb75e4b-11d4-4971-9180-8313eb46fde9\") " pod="default/hello-world-app-5d77478584-489bl"
	Apr 15 23:45:10 addons-045739 kubelet[1285]: I0415 23:45:10.653150    1285 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2682l\" (UniqueName: \"kubernetes.io/projected/926cdd76-259c-482c-ae40-0b70c040a88d-kube-api-access-2682l\") pod \"926cdd76-259c-482c-ae40-0b70c040a88d\" (UID: \"926cdd76-259c-482c-ae40-0b70c040a88d\") "
	Apr 15 23:45:10 addons-045739 kubelet[1285]: I0415 23:45:10.656831    1285 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/926cdd76-259c-482c-ae40-0b70c040a88d-kube-api-access-2682l" (OuterVolumeSpecName: "kube-api-access-2682l") pod "926cdd76-259c-482c-ae40-0b70c040a88d" (UID: "926cdd76-259c-482c-ae40-0b70c040a88d"). InnerVolumeSpecName "kube-api-access-2682l". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 15 23:45:10 addons-045739 kubelet[1285]: I0415 23:45:10.754539    1285 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-2682l\" (UniqueName: \"kubernetes.io/projected/926cdd76-259c-482c-ae40-0b70c040a88d-kube-api-access-2682l\") on node \"addons-045739\" DevicePath \"\""
	Apr 15 23:45:11 addons-045739 kubelet[1285]: I0415 23:45:11.419408    1285 scope.go:117] "RemoveContainer" containerID="2275f66356c640f7c8c86397e0ebd21ce76f736eeae5637c4b8f1d20b07c9da3"
	Apr 15 23:45:11 addons-045739 kubelet[1285]: I0415 23:45:11.694285    1285 scope.go:117] "RemoveContainer" containerID="2275f66356c640f7c8c86397e0ebd21ce76f736eeae5637c4b8f1d20b07c9da3"
	Apr 15 23:45:11 addons-045739 kubelet[1285]: E0415 23:45:11.722938    1285 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2275f66356c640f7c8c86397e0ebd21ce76f736eeae5637c4b8f1d20b07c9da3\": container with ID starting with 2275f66356c640f7c8c86397e0ebd21ce76f736eeae5637c4b8f1d20b07c9da3 not found: ID does not exist" containerID="2275f66356c640f7c8c86397e0ebd21ce76f736eeae5637c4b8f1d20b07c9da3"
	Apr 15 23:45:11 addons-045739 kubelet[1285]: I0415 23:45:11.722998    1285 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2275f66356c640f7c8c86397e0ebd21ce76f736eeae5637c4b8f1d20b07c9da3"} err="failed to get container status \"2275f66356c640f7c8c86397e0ebd21ce76f736eeae5637c4b8f1d20b07c9da3\": rpc error: code = NotFound desc = could not find container \"2275f66356c640f7c8c86397e0ebd21ce76f736eeae5637c4b8f1d20b07c9da3\": container with ID starting with 2275f66356c640f7c8c86397e0ebd21ce76f736eeae5637c4b8f1d20b07c9da3 not found: ID does not exist"
	Apr 15 23:45:11 addons-045739 kubelet[1285]: I0415 23:45:11.888456    1285 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="926cdd76-259c-482c-ae40-0b70c040a88d" path="/var/lib/kubelet/pods/926cdd76-259c-482c-ae40-0b70c040a88d/volumes"
	Apr 15 23:45:13 addons-045739 kubelet[1285]: I0415 23:45:13.891111    1285 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12575f4a-7d1a-4076-894e-e4bf2900bf1a" path="/var/lib/kubelet/pods/12575f4a-7d1a-4076-894e-e4bf2900bf1a/volumes"
	Apr 15 23:45:13 addons-045739 kubelet[1285]: I0415 23:45:13.891936    1285 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55e0d629-9343-4a80-99bc-b7027e1bbb7e" path="/var/lib/kubelet/pods/55e0d629-9343-4a80-99bc-b7027e1bbb7e/volumes"
	Apr 15 23:45:16 addons-045739 kubelet[1285]: I0415 23:45:16.719745    1285 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/01f2dd27-e1ae-4844-8208-c012721b4a17-webhook-cert\") pod \"01f2dd27-e1ae-4844-8208-c012721b4a17\" (UID: \"01f2dd27-e1ae-4844-8208-c012721b4a17\") "
	Apr 15 23:45:16 addons-045739 kubelet[1285]: I0415 23:45:16.719842    1285 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kdr2\" (UniqueName: \"kubernetes.io/projected/01f2dd27-e1ae-4844-8208-c012721b4a17-kube-api-access-5kdr2\") pod \"01f2dd27-e1ae-4844-8208-c012721b4a17\" (UID: \"01f2dd27-e1ae-4844-8208-c012721b4a17\") "
	Apr 15 23:45:16 addons-045739 kubelet[1285]: I0415 23:45:16.725605    1285 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01f2dd27-e1ae-4844-8208-c012721b4a17-kube-api-access-5kdr2" (OuterVolumeSpecName: "kube-api-access-5kdr2") pod "01f2dd27-e1ae-4844-8208-c012721b4a17" (UID: "01f2dd27-e1ae-4844-8208-c012721b4a17"). InnerVolumeSpecName "kube-api-access-5kdr2". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Apr 15 23:45:16 addons-045739 kubelet[1285]: I0415 23:45:16.731016    1285 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01f2dd27-e1ae-4844-8208-c012721b4a17-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "01f2dd27-e1ae-4844-8208-c012721b4a17" (UID: "01f2dd27-e1ae-4844-8208-c012721b4a17"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Apr 15 23:45:16 addons-045739 kubelet[1285]: I0415 23:45:16.820667    1285 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/01f2dd27-e1ae-4844-8208-c012721b4a17-webhook-cert\") on node \"addons-045739\" DevicePath \"\""
	Apr 15 23:45:16 addons-045739 kubelet[1285]: I0415 23:45:16.820830    1285 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-5kdr2\" (UniqueName: \"kubernetes.io/projected/01f2dd27-e1ae-4844-8208-c012721b4a17-kube-api-access-5kdr2\") on node \"addons-045739\" DevicePath \"\""
	Apr 15 23:45:17 addons-045739 kubelet[1285]: I0415 23:45:17.493005    1285 scope.go:117] "RemoveContainer" containerID="25b4aa89d08a7fafc0964d5a989a6aa2038c8a361b21022ba9fb43a11955478b"
	Apr 15 23:45:17 addons-045739 kubelet[1285]: I0415 23:45:17.890537    1285 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01f2dd27-e1ae-4844-8208-c012721b4a17" path="/var/lib/kubelet/pods/01f2dd27-e1ae-4844-8208-c012721b4a17/volumes"
	
	
	==> storage-provisioner [87db3f3be812d12b1f2e74f9c47fc25d3518ebf3b5bf89ee4b0648e42063d5d8] <==
	I0415 23:39:50.571061       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0415 23:39:50.651674       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0415 23:39:50.651769       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0415 23:39:50.673668       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0415 23:39:50.676478       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-045739_f5843b6f-4492-4bd2-abef-94850f54de7c!
	I0415 23:39:50.677909       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3915f260-867f-4621-a077-fec4d01bc1ac", APIVersion:"v1", ResourceVersion:"648", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-045739_f5843b6f-4492-4bd2-abef-94850f54de7c became leader
	I0415 23:39:50.888468       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-045739_f5843b6f-4492-4bd2-abef-94850f54de7c!
	E0415 23:43:12.645313       1 controller.go:1050] claim "f01537f6-92ca-4150-b63c-0f2e634b097f" in work queue no longer exists
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-045739 -n addons-045739
helpers_test.go:261: (dbg) Run:  kubectl --context addons-045739 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (163.62s)
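Note: the post-mortem above shows the node and system pods otherwise healthy, so a reasonable next step is to query the ingress addon state directly. A minimal triage sketch, assuming the addons-045739 profile is still running (the deployment name ingress-nginx-controller is inferred from the ingress-nginx-controller-65496f9567 ReplicaSet seen in the kube-controller-manager log above):

	# hypothetical manual triage, not part of the test harness
	kubectl --context addons-045739 -n ingress-nginx get pods -o wide
	kubectl --context addons-045739 get ingress -A
	kubectl --context addons-045739 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=100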

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (154.44s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-045739
addons_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-045739: exit status 82 (2m0.551664894s)

                                                
                                                
-- stdout --
	* Stopping node "addons-045739"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:174: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-045739" : exit status 82
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-045739
addons_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-045739: exit status 11 (21.600482496s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.182:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:178: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-045739" : exit status 11
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-045739
addons_test.go:180: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-045739: exit status 11 (6.144295681s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.182:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:182: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-045739" : exit status 11
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-045739
addons_test.go:185: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-045739: exit status 11 (6.142878217s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.182:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:187: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-045739" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.44s)
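Note: the two addon failures here are knock-on effects of the first one: the stop timed out (exit status 82) while the VM stayed "Running", and the later enable/disable calls could no longer reach 192.168.39.182:22. A hedged reproduction sketch (the virsh step assumes the kvm2 driver created a libvirt domain named after the profile, which may differ on this host):

	# retry the stop with verbose output and keep the log file the error points to
	out/minikube-linux-amd64 stop -p addons-045739 --alsologtostderr
	cat /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log
	# check what libvirt thinks the VM state is (assumption: domain name matches the profile)
	sudo virsh list --all | grep addons-045739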

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 image save gcr.io/google-containers/addon-resizer:functional-596616 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-596616 image save gcr.io/google-containers/addon-resizer:functional-596616 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.7168531s)
functional_test.go:385: expected "/home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.72s)
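Note: the `image save` invocation itself returned successfully after 1.7s, but the expected tarball was never written. A minimal manual check, assuming the functional-596616 profile is still up (the /tmp destination below is only an illustrative path, not the one the test uses):

	# hypothetical re-run outside the harness
	out/minikube-linux-amd64 -p functional-596616 image save \
	  gcr.io/google-containers/addon-resizer:functional-596616 /tmp/addon-resizer-save.tar --alsologtostderr
	# verify the archive actually exists and is a readable tar
	ls -l /tmp/addon-resizer-save.tar && tar -tf /tmp/addon-resizer-save.tar | head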

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:410: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I0415 23:54:38.736686   25075 out.go:291] Setting OutFile to fd 1 ...
	I0415 23:54:38.736808   25075 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 23:54:38.736817   25075 out.go:304] Setting ErrFile to fd 2...
	I0415 23:54:38.736821   25075 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 23:54:38.737006   25075 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-7542/.minikube/bin
	I0415 23:54:38.737635   25075 config.go:182] Loaded profile config "functional-596616": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0415 23:54:38.737762   25075 config.go:182] Loaded profile config "functional-596616": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0415 23:54:38.738212   25075 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:54:38.738257   25075 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:54:38.752109   25075 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44423
	I0415 23:54:38.752608   25075 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:54:38.753228   25075 main.go:141] libmachine: Using API Version  1
	I0415 23:54:38.753254   25075 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:54:38.753585   25075 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:54:38.753794   25075 main.go:141] libmachine: (functional-596616) Calling .GetState
	I0415 23:54:38.755643   25075 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:54:38.755712   25075 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:54:38.769311   25075 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37807
	I0415 23:54:38.769735   25075 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:54:38.770200   25075 main.go:141] libmachine: Using API Version  1
	I0415 23:54:38.770219   25075 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:54:38.770588   25075 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:54:38.770788   25075 main.go:141] libmachine: (functional-596616) Calling .DriverName
	I0415 23:54:38.771029   25075 ssh_runner.go:195] Run: systemctl --version
	I0415 23:54:38.771060   25075 main.go:141] libmachine: (functional-596616) Calling .GetSSHHostname
	I0415 23:54:38.773273   25075 main.go:141] libmachine: (functional-596616) DBG | domain functional-596616 has defined MAC address 52:54:00:2d:fc:0d in network mk-functional-596616
	I0415 23:54:38.773673   25075 main.go:141] libmachine: (functional-596616) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:fc:0d", ip: ""} in network mk-functional-596616: {Iface:virbr1 ExpiryTime:2024-04-16 00:49:23 +0000 UTC Type:0 Mac:52:54:00:2d:fc:0d Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:functional-596616 Clientid:01:52:54:00:2d:fc:0d}
	I0415 23:54:38.773707   25075 main.go:141] libmachine: (functional-596616) DBG | domain functional-596616 has defined IP address 192.168.39.86 and MAC address 52:54:00:2d:fc:0d in network mk-functional-596616
	I0415 23:54:38.773842   25075 main.go:141] libmachine: (functional-596616) Calling .GetSSHPort
	I0415 23:54:38.773998   25075 main.go:141] libmachine: (functional-596616) Calling .GetSSHKeyPath
	I0415 23:54:38.774135   25075 main.go:141] libmachine: (functional-596616) Calling .GetSSHUsername
	I0415 23:54:38.774298   25075 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/functional-596616/id_rsa Username:docker}
	I0415 23:54:38.855497   25075 cache_images.go:286] Loading image from: /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar
	W0415 23:54:38.855556   25075 cache_images.go:254] Failed to load cached images for profile functional-596616. make sure the profile is running. loading images: stat /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar: no such file or directory
	I0415 23:54:38.855593   25075 cache_images.go:262] succeeded pushing to: 
	I0415 23:54:38.855606   25075 cache_images.go:263] failed pushing to: functional-596616
	I0415 23:54:38.855633   25075 main.go:141] libmachine: Making call to close driver server
	I0415 23:54:38.855649   25075 main.go:141] libmachine: (functional-596616) Calling .Close
	I0415 23:54:38.855889   25075 main.go:141] libmachine: Successfully made call to close driver server
	I0415 23:54:38.855910   25075 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 23:54:38.855914   25075 main.go:141] libmachine: (functional-596616) DBG | Closing plugin on server side
	I0415 23:54:38.855919   25075 main.go:141] libmachine: Making call to close driver server
	I0415 23:54:38.855928   25075 main.go:141] libmachine: (functional-596616) Calling .Close
	I0415 23:54:38.856159   25075 main.go:141] libmachine: Successfully made call to close driver server
	I0415 23:54:38.856181   25075 main.go:141] libmachine: (functional-596616) DBG | Closing plugin on server side
	I0415 23:54:38.856195   25075 main.go:141] libmachine: Making call to close connection to plugin binary

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.18s)
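Note: the stderr above pins the cause: `stat .../addon-resizer-save.tar: no such file or directory`, i.e. the tarball this test loads was supposed to be produced by ImageSaveToFile, which had already failed, so this failure is a knock-on effect rather than an independent load bug. A hedged sketch of the expected save-then-load round trip (the /tmp path and the final `image ls` check are illustrative, not part of the test):

	out/minikube-linux-amd64 -p functional-596616 image save \
	  gcr.io/google-containers/addon-resizer:functional-596616 /tmp/addon-resizer-save.tar
	out/minikube-linux-amd64 -p functional-596616 image load /tmp/addon-resizer-save.tar --alsologtostderr
	out/minikube-linux-amd64 -p functional-596616 image ls | grep addon-resizer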

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (141.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 node stop m02 -v=7 --alsologtostderr
E0415 23:59:39.641983   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/functional-596616/client.crt: no such file or directory
E0416 00:00:20.602884   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/functional-596616/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-694782 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.481367472s)

                                                
                                                
-- stdout --
	* Stopping node "ha-694782-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0415 23:59:26.943611   29455 out.go:291] Setting OutFile to fd 1 ...
	I0415 23:59:26.943785   29455 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 23:59:26.943801   29455 out.go:304] Setting ErrFile to fd 2...
	I0415 23:59:26.943807   29455 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 23:59:26.944083   29455 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-7542/.minikube/bin
	I0415 23:59:26.944385   29455 mustload.go:65] Loading cluster: ha-694782
	I0415 23:59:26.944717   29455 config.go:182] Loaded profile config "ha-694782": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0415 23:59:26.944732   29455 stop.go:39] StopHost: ha-694782-m02
	I0415 23:59:26.945080   29455 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:59:26.945130   29455 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:59:26.959940   29455 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41335
	I0415 23:59:26.960359   29455 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:59:26.960869   29455 main.go:141] libmachine: Using API Version  1
	I0415 23:59:26.960891   29455 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:59:26.961240   29455 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:59:26.963540   29455 out.go:177] * Stopping node "ha-694782-m02"  ...
	I0415 23:59:26.964878   29455 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0415 23:59:26.964900   29455 main.go:141] libmachine: (ha-694782-m02) Calling .DriverName
	I0415 23:59:26.965075   29455 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0415 23:59:26.965103   29455 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHHostname
	I0415 23:59:26.967664   29455 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:59:26.968108   29455 main.go:141] libmachine: (ha-694782-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:e2:c3", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:56:01 +0000 UTC Type:0 Mac:52:54:00:70:e2:c3 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ha-694782-m02 Clientid:01:52:54:00:70:e2:c3}
	I0415 23:59:26.968143   29455 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined IP address 192.168.39.42 and MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:59:26.968241   29455 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHPort
	I0415 23:59:26.968389   29455 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHKeyPath
	I0415 23:59:26.968543   29455 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHUsername
	I0415 23:59:26.968708   29455 sshutil.go:53] new ssh client: &{IP:192.168.39.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m02/id_rsa Username:docker}
	I0415 23:59:27.059693   29455 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0415 23:59:27.116210   29455 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0415 23:59:27.171609   29455 main.go:141] libmachine: Stopping "ha-694782-m02"...
	I0415 23:59:27.171636   29455 main.go:141] libmachine: (ha-694782-m02) Calling .GetState
	I0415 23:59:27.173131   29455 main.go:141] libmachine: (ha-694782-m02) Calling .Stop
	I0415 23:59:27.176523   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 0/120
	I0415 23:59:28.178275   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 1/120
	I0415 23:59:29.179550   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 2/120
	I0415 23:59:30.180953   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 3/120
	I0415 23:59:31.182161   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 4/120
	I0415 23:59:32.184221   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 5/120
	I0415 23:59:33.185626   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 6/120
	I0415 23:59:34.187015   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 7/120
	I0415 23:59:35.188344   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 8/120
	I0415 23:59:36.189400   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 9/120
	I0415 23:59:37.191552   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 10/120
	I0415 23:59:38.192799   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 11/120
	I0415 23:59:39.194252   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 12/120
	I0415 23:59:40.195603   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 13/120
	I0415 23:59:41.197629   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 14/120
	I0415 23:59:42.199309   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 15/120
	I0415 23:59:43.200702   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 16/120
	I0415 23:59:44.202314   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 17/120
	I0415 23:59:45.204074   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 18/120
	I0415 23:59:46.205246   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 19/120
	I0415 23:59:47.206564   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 20/120
	I0415 23:59:48.208165   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 21/120
	I0415 23:59:49.209502   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 22/120
	I0415 23:59:50.211664   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 23/120
	I0415 23:59:51.213899   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 24/120
	I0415 23:59:52.215758   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 25/120
	I0415 23:59:53.217295   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 26/120
	I0415 23:59:54.219636   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 27/120
	I0415 23:59:55.221050   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 28/120
	I0415 23:59:56.222952   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 29/120
	I0415 23:59:57.224756   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 30/120
	I0415 23:59:58.226370   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 31/120
	I0415 23:59:59.227541   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 32/120
	I0416 00:00:00.229502   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 33/120
	I0416 00:00:01.231606   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 34/120
	I0416 00:00:02.233535   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 35/120
	I0416 00:00:03.235468   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 36/120
	I0416 00:00:04.236901   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 37/120
	I0416 00:00:05.238168   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 38/120
	I0416 00:00:06.239626   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 39/120
	I0416 00:00:07.241868   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 40/120
	I0416 00:00:08.243865   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 41/120
	I0416 00:00:09.245537   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 42/120
	I0416 00:00:10.247572   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 43/120
	I0416 00:00:11.248757   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 44/120
	I0416 00:00:12.250719   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 45/120
	I0416 00:00:13.251991   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 46/120
	I0416 00:00:14.253227   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 47/120
	I0416 00:00:15.254612   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 48/120
	I0416 00:00:16.255927   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 49/120
	I0416 00:00:17.257706   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 50/120
	I0416 00:00:18.259505   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 51/120
	I0416 00:00:19.261519   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 52/120
	I0416 00:00:20.263608   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 53/120
	I0416 00:00:21.264832   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 54/120
	I0416 00:00:22.266819   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 55/120
	I0416 00:00:23.268342   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 56/120
	I0416 00:00:24.269771   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 57/120
	I0416 00:00:25.271032   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 58/120
	I0416 00:00:26.272618   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 59/120
	I0416 00:00:27.274655   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 60/120
	I0416 00:00:28.276277   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 61/120
	I0416 00:00:29.277807   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 62/120
	I0416 00:00:30.279623   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 63/120
	I0416 00:00:31.280923   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 64/120
	I0416 00:00:32.282770   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 65/120
	I0416 00:00:33.283931   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 66/120
	I0416 00:00:34.285343   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 67/120
	I0416 00:00:35.286767   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 68/120
	I0416 00:00:36.288603   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 69/120
	I0416 00:00:37.290725   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 70/120
	I0416 00:00:38.292237   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 71/120
	I0416 00:00:39.294467   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 72/120
	I0416 00:00:40.296080   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 73/120
	I0416 00:00:41.297494   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 74/120
	I0416 00:00:42.299224   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 75/120
	I0416 00:00:43.300455   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 76/120
	I0416 00:00:44.302106   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 77/120
	I0416 00:00:45.303410   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 78/120
	I0416 00:00:46.305833   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 79/120
	I0416 00:00:47.307690   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 80/120
	I0416 00:00:48.309108   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 81/120
	I0416 00:00:49.310985   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 82/120
	I0416 00:00:50.312304   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 83/120
	I0416 00:00:51.314349   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 84/120
	I0416 00:00:52.315971   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 85/120
	I0416 00:00:53.318279   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 86/120
	I0416 00:00:54.319701   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 87/120
	I0416 00:00:55.321047   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 88/120
	I0416 00:00:56.323054   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 89/120
	I0416 00:00:57.324756   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 90/120
	I0416 00:00:58.326204   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 91/120
	I0416 00:00:59.327585   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 92/120
	I0416 00:01:00.328822   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 93/120
	I0416 00:01:01.330037   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 94/120
	I0416 00:01:02.331382   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 95/120
	I0416 00:01:03.332746   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 96/120
	I0416 00:01:04.334435   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 97/120
	I0416 00:01:05.335708   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 98/120
	I0416 00:01:06.336936   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 99/120
	I0416 00:01:07.338631   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 100/120
	I0416 00:01:08.339936   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 101/120
	I0416 00:01:09.341519   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 102/120
	I0416 00:01:10.343813   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 103/120
	I0416 00:01:11.345086   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 104/120
	I0416 00:01:12.347054   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 105/120
	I0416 00:01:13.348581   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 106/120
	I0416 00:01:14.350086   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 107/120
	I0416 00:01:15.351688   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 108/120
	I0416 00:01:16.353045   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 109/120
	I0416 00:01:17.354823   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 110/120
	I0416 00:01:18.355961   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 111/120
	I0416 00:01:19.357418   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 112/120
	I0416 00:01:20.359659   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 113/120
	I0416 00:01:21.360833   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 114/120
	I0416 00:01:22.362323   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 115/120
	I0416 00:01:23.363616   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 116/120
	I0416 00:01:24.365176   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 117/120
	I0416 00:01:25.367168   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 118/120
	I0416 00:01:26.368553   29455 main.go:141] libmachine: (ha-694782-m02) Waiting for machine to stop 119/120
	I0416 00:01:27.369966   29455 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0416 00:01:27.370161   29455 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-694782 node stop m02 -v=7 --alsologtostderr": exit status 30
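Note: the "Waiting for machine to stop N/120" lines above show a bounded one-second poll before libmachine gives up with "unable to stop vm, current state "Running"" and "node stop" exits with status 30. The following is only a minimal Go sketch of that shape of loop; the "machine" interface and "stuckVM" type are hypothetical stand-ins for illustration, not minikube's actual driver API.

package main

import (
	"errors"
	"fmt"
	"time"
)

// machine is a hypothetical stand-in for a libmachine driver handle;
// minikube's real driver interface is richer than this.
type machine interface {
	Stop() error            // request a guest shutdown
	State() (string, error) // e.g. "Running", "Stopped"
}

// stopWithTimeout requests a stop, then polls once per second for up to
// maxTries iterations, mirroring the "Waiting for machine to stop N/120"
// lines in the log above.
func stopWithTimeout(m machine, maxTries int) error {
	if err := m.Stop(); err != nil {
		return fmt.Errorf("stop request failed: %w", err)
	}
	for i := 0; i < maxTries; i++ {
		state, err := m.State()
		if err != nil {
			return fmt.Errorf("querying state: %w", err)
		}
		if state == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxTries)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

// stuckVM never leaves the Running state, reproducing the failure mode seen here.
type stuckVM struct{}

func (stuckVM) Stop() error            { return nil }
func (stuckVM) State() (string, error) { return "Running", nil }

func main() {
	if err := stopWithTimeout(stuckVM{}, 3); err != nil {
		fmt.Println("stop err:", err)
	}
}

In the failed run above, all 120 polls are exhausted, which is why the command returns exit status 30 after roughly two minutes.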
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 status -v=7 --alsologtostderr
E0416 00:01:42.523136   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/functional-596616/client.crt: no such file or directory
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-694782 status -v=7 --alsologtostderr: exit status 3 (19.066105819s)

                                                
                                                
-- stdout --
	ha-694782
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-694782-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-694782-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-694782-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0416 00:01:27.425211   29891 out.go:291] Setting OutFile to fd 1 ...
	I0416 00:01:27.425324   29891 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:01:27.425332   29891 out.go:304] Setting ErrFile to fd 2...
	I0416 00:01:27.425336   29891 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:01:27.425534   29891 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-7542/.minikube/bin
	I0416 00:01:27.425712   29891 out.go:298] Setting JSON to false
	I0416 00:01:27.425737   29891 mustload.go:65] Loading cluster: ha-694782
	I0416 00:01:27.425851   29891 notify.go:220] Checking for updates...
	I0416 00:01:27.426089   29891 config.go:182] Loaded profile config "ha-694782": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 00:01:27.426103   29891 status.go:255] checking status of ha-694782 ...
	I0416 00:01:27.426464   29891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:01:27.426528   29891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:01:27.443715   29891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36635
	I0416 00:01:27.444127   29891 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:01:27.444693   29891 main.go:141] libmachine: Using API Version  1
	I0416 00:01:27.444717   29891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:01:27.445083   29891 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:01:27.445282   29891 main.go:141] libmachine: (ha-694782) Calling .GetState
	I0416 00:01:27.446898   29891 status.go:330] ha-694782 host status = "Running" (err=<nil>)
	I0416 00:01:27.446912   29891 host.go:66] Checking if "ha-694782" exists ...
	I0416 00:01:27.447280   29891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:01:27.447328   29891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:01:27.462792   29891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44761
	I0416 00:01:27.463256   29891 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:01:27.463811   29891 main.go:141] libmachine: Using API Version  1
	I0416 00:01:27.463837   29891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:01:27.464210   29891 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:01:27.464398   29891 main.go:141] libmachine: (ha-694782) Calling .GetIP
	I0416 00:01:27.467233   29891 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:01:27.467695   29891 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0416 00:01:27.467730   29891 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:01:27.467862   29891 host.go:66] Checking if "ha-694782" exists ...
	I0416 00:01:27.468140   29891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:01:27.468173   29891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:01:27.484469   29891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34009
	I0416 00:01:27.484840   29891 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:01:27.485301   29891 main.go:141] libmachine: Using API Version  1
	I0416 00:01:27.485326   29891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:01:27.485638   29891 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:01:27.485805   29891 main.go:141] libmachine: (ha-694782) Calling .DriverName
	I0416 00:01:27.485984   29891 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 00:01:27.486016   29891 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0416 00:01:27.488450   29891 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:01:27.488868   29891 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0416 00:01:27.488897   29891 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:01:27.489085   29891 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0416 00:01:27.489295   29891 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0416 00:01:27.489462   29891 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0416 00:01:27.489581   29891 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782/id_rsa Username:docker}
	I0416 00:01:27.579335   29891 ssh_runner.go:195] Run: systemctl --version
	I0416 00:01:27.586406   29891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 00:01:27.602545   29891 kubeconfig.go:125] found "ha-694782" server: "https://192.168.39.254:8443"
	I0416 00:01:27.602574   29891 api_server.go:166] Checking apiserver status ...
	I0416 00:01:27.602604   29891 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 00:01:27.619827   29891 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1182/cgroup
	W0416 00:01:27.629803   29891 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1182/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0416 00:01:27.629860   29891 ssh_runner.go:195] Run: ls
	I0416 00:01:27.634506   29891 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0416 00:01:27.640973   29891 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0416 00:01:27.641002   29891 status.go:422] ha-694782 apiserver status = Running (err=<nil>)
	I0416 00:01:27.641015   29891 status.go:257] ha-694782 status: &{Name:ha-694782 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 00:01:27.641038   29891 status.go:255] checking status of ha-694782-m02 ...
	I0416 00:01:27.641462   29891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:01:27.641509   29891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:01:27.656184   29891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35709
	I0416 00:01:27.656598   29891 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:01:27.657099   29891 main.go:141] libmachine: Using API Version  1
	I0416 00:01:27.657124   29891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:01:27.657440   29891 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:01:27.657624   29891 main.go:141] libmachine: (ha-694782-m02) Calling .GetState
	I0416 00:01:27.659088   29891 status.go:330] ha-694782-m02 host status = "Running" (err=<nil>)
	I0416 00:01:27.659104   29891 host.go:66] Checking if "ha-694782-m02" exists ...
	I0416 00:01:27.659391   29891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:01:27.659437   29891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:01:27.674295   29891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45715
	I0416 00:01:27.674647   29891 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:01:27.675092   29891 main.go:141] libmachine: Using API Version  1
	I0416 00:01:27.675117   29891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:01:27.675469   29891 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:01:27.675655   29891 main.go:141] libmachine: (ha-694782-m02) Calling .GetIP
	I0416 00:01:27.678492   29891 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0416 00:01:27.678935   29891 main.go:141] libmachine: (ha-694782-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:e2:c3", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:56:01 +0000 UTC Type:0 Mac:52:54:00:70:e2:c3 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ha-694782-m02 Clientid:01:52:54:00:70:e2:c3}
	I0416 00:01:27.678968   29891 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined IP address 192.168.39.42 and MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0416 00:01:27.679109   29891 host.go:66] Checking if "ha-694782-m02" exists ...
	I0416 00:01:27.679388   29891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:01:27.679430   29891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:01:27.693969   29891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45313
	I0416 00:01:27.694379   29891 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:01:27.694789   29891 main.go:141] libmachine: Using API Version  1
	I0416 00:01:27.694808   29891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:01:27.695089   29891 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:01:27.695241   29891 main.go:141] libmachine: (ha-694782-m02) Calling .DriverName
	I0416 00:01:27.695419   29891 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 00:01:27.695440   29891 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHHostname
	I0416 00:01:27.697911   29891 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0416 00:01:27.698251   29891 main.go:141] libmachine: (ha-694782-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:e2:c3", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:56:01 +0000 UTC Type:0 Mac:52:54:00:70:e2:c3 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ha-694782-m02 Clientid:01:52:54:00:70:e2:c3}
	I0416 00:01:27.698273   29891 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined IP address 192.168.39.42 and MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0416 00:01:27.698418   29891 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHPort
	I0416 00:01:27.698558   29891 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHKeyPath
	I0416 00:01:27.698686   29891 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHUsername
	I0416 00:01:27.698792   29891 sshutil.go:53] new ssh client: &{IP:192.168.39.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m02/id_rsa Username:docker}
	W0416 00:01:46.073344   29891 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.42:22: connect: no route to host
	W0416 00:01:46.073437   29891 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.42:22: connect: no route to host
	E0416 00:01:46.073451   29891 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.42:22: connect: no route to host
	I0416 00:01:46.073457   29891 status.go:257] ha-694782-m02 status: &{Name:ha-694782-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0416 00:01:46.073471   29891 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.42:22: connect: no route to host
	I0416 00:01:46.073477   29891 status.go:255] checking status of ha-694782-m03 ...
	I0416 00:01:46.073801   29891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:01:46.073840   29891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:01:46.088036   29891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41595
	I0416 00:01:46.088467   29891 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:01:46.088940   29891 main.go:141] libmachine: Using API Version  1
	I0416 00:01:46.088968   29891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:01:46.089297   29891 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:01:46.089541   29891 main.go:141] libmachine: (ha-694782-m03) Calling .GetState
	I0416 00:01:46.091089   29891 status.go:330] ha-694782-m03 host status = "Running" (err=<nil>)
	I0416 00:01:46.091105   29891 host.go:66] Checking if "ha-694782-m03" exists ...
	I0416 00:01:46.091414   29891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:01:46.091456   29891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:01:46.105481   29891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36681
	I0416 00:01:46.105913   29891 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:01:46.106340   29891 main.go:141] libmachine: Using API Version  1
	I0416 00:01:46.106356   29891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:01:46.106625   29891 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:01:46.106774   29891 main.go:141] libmachine: (ha-694782-m03) Calling .GetIP
	I0416 00:01:46.109572   29891 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0416 00:01:46.110015   29891 main.go:141] libmachine: (ha-694782-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a7:e5", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:57:21 +0000 UTC Type:0 Mac:52:54:00:fc:a7:e5 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-694782-m03 Clientid:01:52:54:00:fc:a7:e5}
	I0416 00:01:46.110059   29891 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0416 00:01:46.110219   29891 host.go:66] Checking if "ha-694782-m03" exists ...
	I0416 00:01:46.111038   29891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:01:46.111087   29891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:01:46.126923   29891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36331
	I0416 00:01:46.127330   29891 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:01:46.127772   29891 main.go:141] libmachine: Using API Version  1
	I0416 00:01:46.127792   29891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:01:46.128108   29891 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:01:46.128311   29891 main.go:141] libmachine: (ha-694782-m03) Calling .DriverName
	I0416 00:01:46.128501   29891 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 00:01:46.128518   29891 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHHostname
	I0416 00:01:46.131200   29891 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0416 00:01:46.131566   29891 main.go:141] libmachine: (ha-694782-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a7:e5", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:57:21 +0000 UTC Type:0 Mac:52:54:00:fc:a7:e5 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-694782-m03 Clientid:01:52:54:00:fc:a7:e5}
	I0416 00:01:46.131591   29891 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0416 00:01:46.131760   29891 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHPort
	I0416 00:01:46.131921   29891 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHKeyPath
	I0416 00:01:46.132050   29891 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHUsername
	I0416 00:01:46.132163   29891 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m03/id_rsa Username:docker}
	I0416 00:01:46.215659   29891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 00:01:46.235544   29891 kubeconfig.go:125] found "ha-694782" server: "https://192.168.39.254:8443"
	I0416 00:01:46.235571   29891 api_server.go:166] Checking apiserver status ...
	I0416 00:01:46.235600   29891 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 00:01:46.251754   29891 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1540/cgroup
	W0416 00:01:46.262469   29891 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1540/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0416 00:01:46.262530   29891 ssh_runner.go:195] Run: ls
	I0416 00:01:46.267299   29891 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0416 00:01:46.271629   29891 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0416 00:01:46.271649   29891 status.go:422] ha-694782-m03 apiserver status = Running (err=<nil>)
	I0416 00:01:46.271658   29891 status.go:257] ha-694782-m03 status: &{Name:ha-694782-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 00:01:46.271672   29891 status.go:255] checking status of ha-694782-m04 ...
	I0416 00:01:46.272000   29891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:01:46.272035   29891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:01:46.286328   29891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37815
	I0416 00:01:46.286717   29891 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:01:46.287126   29891 main.go:141] libmachine: Using API Version  1
	I0416 00:01:46.287146   29891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:01:46.287428   29891 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:01:46.287573   29891 main.go:141] libmachine: (ha-694782-m04) Calling .GetState
	I0416 00:01:46.289125   29891 status.go:330] ha-694782-m04 host status = "Running" (err=<nil>)
	I0416 00:01:46.289143   29891 host.go:66] Checking if "ha-694782-m04" exists ...
	I0416 00:01:46.289529   29891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:01:46.289586   29891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:01:46.304513   29891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36631
	I0416 00:01:46.304851   29891 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:01:46.305286   29891 main.go:141] libmachine: Using API Version  1
	I0416 00:01:46.305306   29891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:01:46.305601   29891 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:01:46.305817   29891 main.go:141] libmachine: (ha-694782-m04) Calling .GetIP
	I0416 00:01:46.308436   29891 main.go:141] libmachine: (ha-694782-m04) DBG | domain ha-694782-m04 has defined MAC address 52:54:00:18:7d:b0 in network mk-ha-694782
	I0416 00:01:46.308799   29891 main.go:141] libmachine: (ha-694782-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:7d:b0", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:58:43 +0000 UTC Type:0 Mac:52:54:00:18:7d:b0 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:ha-694782-m04 Clientid:01:52:54:00:18:7d:b0}
	I0416 00:01:46.308831   29891 main.go:141] libmachine: (ha-694782-m04) DBG | domain ha-694782-m04 has defined IP address 192.168.39.107 and MAC address 52:54:00:18:7d:b0 in network mk-ha-694782
	I0416 00:01:46.308948   29891 host.go:66] Checking if "ha-694782-m04" exists ...
	I0416 00:01:46.309361   29891 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:01:46.309404   29891 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:01:46.323217   29891 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39897
	I0416 00:01:46.323530   29891 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:01:46.323952   29891 main.go:141] libmachine: Using API Version  1
	I0416 00:01:46.323975   29891 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:01:46.324250   29891 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:01:46.324432   29891 main.go:141] libmachine: (ha-694782-m04) Calling .DriverName
	I0416 00:01:46.324611   29891 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 00:01:46.324630   29891 main.go:141] libmachine: (ha-694782-m04) Calling .GetSSHHostname
	I0416 00:01:46.327283   29891 main.go:141] libmachine: (ha-694782-m04) DBG | domain ha-694782-m04 has defined MAC address 52:54:00:18:7d:b0 in network mk-ha-694782
	I0416 00:01:46.327630   29891 main.go:141] libmachine: (ha-694782-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:7d:b0", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:58:43 +0000 UTC Type:0 Mac:52:54:00:18:7d:b0 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:ha-694782-m04 Clientid:01:52:54:00:18:7d:b0}
	I0416 00:01:46.327660   29891 main.go:141] libmachine: (ha-694782-m04) DBG | domain ha-694782-m04 has defined IP address 192.168.39.107 and MAC address 52:54:00:18:7d:b0 in network mk-ha-694782
	I0416 00:01:46.327810   29891 main.go:141] libmachine: (ha-694782-m04) Calling .GetSSHPort
	I0416 00:01:46.327985   29891 main.go:141] libmachine: (ha-694782-m04) Calling .GetSSHKeyPath
	I0416 00:01:46.328155   29891 main.go:141] libmachine: (ha-694782-m04) Calling .GetSSHUsername
	I0416 00:01:46.328288   29891 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m04/id_rsa Username:docker}
	I0416 00:01:46.414528   29891 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 00:01:46.434955   29891 status.go:257] ha-694782-m04 status: &{Name:ha-694782-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-694782 status -v=7 --alsologtostderr" : exit status 3
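Note: the status command above decides per-node apiserver health by probing the cluster endpoint ("Checking apiserver healthz at https://192.168.39.254:8443/healthz ... returned 200: ok"); m02 is reported as Error because even the SSH dial to it fails with "no route to host". Below is only a minimal sketch of such a healthz probe, with TLS verification skipped purely for illustration; it is not minikube's actual client code.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeHealthz performs the kind of liveness probe the status command logs
// above. A real client should trust the cluster CA instead of skipping
// certificate verification.
func probeHealthz(endpoint string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return fmt.Errorf("healthz request failed: %w", err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	fmt.Printf("%s/healthz returned 200: %s\n", endpoint, body)
	return nil
}

func main() {
	if err := probeHealthz("https://192.168.39.254:8443"); err != nil {
		fmt.Println("apiserver status = Error:", err)
	}
}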
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-694782 -n ha-694782
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-694782 logs -n 25: (1.467571024s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   |    Version     |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| cp      | ha-694782 cp ha-694782-m03:/home/docker/cp-test.txt                              | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4178900617/001/cp-test_ha-694782-m03.txt |           |         |                |                     |                     |
	| ssh     | ha-694782 ssh -n                                                                 | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | ha-694782-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-694782 cp ha-694782-m03:/home/docker/cp-test.txt                              | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | ha-694782:/home/docker/cp-test_ha-694782-m03_ha-694782.txt                       |           |         |                |                     |                     |
	| ssh     | ha-694782 ssh -n                                                                 | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | ha-694782-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-694782 ssh -n ha-694782 sudo cat                                              | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | /home/docker/cp-test_ha-694782-m03_ha-694782.txt                                 |           |         |                |                     |                     |
	| cp      | ha-694782 cp ha-694782-m03:/home/docker/cp-test.txt                              | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | ha-694782-m02:/home/docker/cp-test_ha-694782-m03_ha-694782-m02.txt               |           |         |                |                     |                     |
	| ssh     | ha-694782 ssh -n                                                                 | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | ha-694782-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-694782 ssh -n ha-694782-m02 sudo cat                                          | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | /home/docker/cp-test_ha-694782-m03_ha-694782-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-694782 cp ha-694782-m03:/home/docker/cp-test.txt                              | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | ha-694782-m04:/home/docker/cp-test_ha-694782-m03_ha-694782-m04.txt               |           |         |                |                     |                     |
	| ssh     | ha-694782 ssh -n                                                                 | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | ha-694782-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-694782 ssh -n ha-694782-m04 sudo cat                                          | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | /home/docker/cp-test_ha-694782-m03_ha-694782-m04.txt                             |           |         |                |                     |                     |
	| cp      | ha-694782 cp testdata/cp-test.txt                                                | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | ha-694782-m04:/home/docker/cp-test.txt                                           |           |         |                |                     |                     |
	| ssh     | ha-694782 ssh -n                                                                 | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | ha-694782-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-694782 cp ha-694782-m04:/home/docker/cp-test.txt                              | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4178900617/001/cp-test_ha-694782-m04.txt |           |         |                |                     |                     |
	| ssh     | ha-694782 ssh -n                                                                 | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | ha-694782-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-694782 cp ha-694782-m04:/home/docker/cp-test.txt                              | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | ha-694782:/home/docker/cp-test_ha-694782-m04_ha-694782.txt                       |           |         |                |                     |                     |
	| ssh     | ha-694782 ssh -n                                                                 | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | ha-694782-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-694782 ssh -n ha-694782 sudo cat                                              | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | /home/docker/cp-test_ha-694782-m04_ha-694782.txt                                 |           |         |                |                     |                     |
	| cp      | ha-694782 cp ha-694782-m04:/home/docker/cp-test.txt                              | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | ha-694782-m02:/home/docker/cp-test_ha-694782-m04_ha-694782-m02.txt               |           |         |                |                     |                     |
	| ssh     | ha-694782 ssh -n                                                                 | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | ha-694782-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-694782 ssh -n ha-694782-m02 sudo cat                                          | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | /home/docker/cp-test_ha-694782-m04_ha-694782-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-694782 cp ha-694782-m04:/home/docker/cp-test.txt                              | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | ha-694782-m03:/home/docker/cp-test_ha-694782-m04_ha-694782-m03.txt               |           |         |                |                     |                     |
	| ssh     | ha-694782 ssh -n                                                                 | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | ha-694782-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-694782 ssh -n ha-694782-m03 sudo cat                                          | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | /home/docker/cp-test_ha-694782-m04_ha-694782-m03.txt                             |           |         |                |                     |                     |
	| node    | ha-694782 node stop m02 -v=7                                                     | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/15 23:54:50
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0415 23:54:50.606130   25488 out.go:291] Setting OutFile to fd 1 ...
	I0415 23:54:50.606240   25488 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 23:54:50.606248   25488 out.go:304] Setting ErrFile to fd 2...
	I0415 23:54:50.606252   25488 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 23:54:50.606460   25488 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-7542/.minikube/bin
	I0415 23:54:50.607004   25488 out.go:298] Setting JSON to false
	I0415 23:54:50.607793   25488 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2235,"bootTime":1713223056,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0415 23:54:50.607851   25488 start.go:139] virtualization: kvm guest
	I0415 23:54:50.610026   25488 out.go:177] * [ha-694782] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0415 23:54:50.611788   25488 notify.go:220] Checking for updates...
	I0415 23:54:50.611805   25488 out.go:177]   - MINIKUBE_LOCATION=18647
	I0415 23:54:50.613178   25488 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 23:54:50.614591   25488 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0415 23:54:50.615907   25488 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18647-7542/.minikube
	I0415 23:54:50.617172   25488 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0415 23:54:50.618341   25488 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 23:54:50.619658   25488 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 23:54:50.652307   25488 out.go:177] * Using the kvm2 driver based on user configuration
	I0415 23:54:50.653739   25488 start.go:297] selected driver: kvm2
	I0415 23:54:50.653767   25488 start.go:901] validating driver "kvm2" against <nil>
	I0415 23:54:50.653785   25488 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 23:54:50.654543   25488 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 23:54:50.654633   25488 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18647-7542/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0415 23:54:50.668711   25488 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0415 23:54:50.668755   25488 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 23:54:50.669017   25488 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 23:54:50.669103   25488 cni.go:84] Creating CNI manager for ""
	I0415 23:54:50.669120   25488 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0415 23:54:50.669126   25488 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0415 23:54:50.669204   25488 start.go:340] cluster config:
	{Name:ha-694782 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-694782 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 23:54:50.669347   25488 iso.go:125] acquiring lock: {Name:mk848ef90fbc2a1876645fc8fc16af382c3bcaa9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 23:54:50.671062   25488 out.go:177] * Starting "ha-694782" primary control-plane node in "ha-694782" cluster
	I0415 23:54:50.672327   25488 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0415 23:54:50.672366   25488 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0415 23:54:50.672378   25488 cache.go:56] Caching tarball of preloaded images
	I0415 23:54:50.672455   25488 preload.go:173] Found /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0415 23:54:50.672467   25488 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0415 23:54:50.672859   25488 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/config.json ...
	I0415 23:54:50.672882   25488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/config.json: {Name:mkfb3d47f0b66cecdcf38640e2fb461a34cd00df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:54:50.673031   25488 start.go:360] acquireMachinesLock for ha-694782: {Name:mk92bff49461487f8cebf2747ccf61ccb9c772a2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 23:54:50.673061   25488 start.go:364] duration metric: took 16.312µs to acquireMachinesLock for "ha-694782"
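The two lines above show machine creation being serialized behind a named lock with a 500ms retry delay and a 13m0s timeout. A minimal sketch of that acquire-with-retry pattern follows, using a hypothetical lock-file path rather than the lock implementation minikube actually uses:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // acquireLock polls for an exclusive lock file until it succeeds or the
    // timeout expires, mirroring the Delay/Timeout fields printed in the log.
    func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o644)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("timed out acquiring %s after %s", path, timeout)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        release, err := acquireLock("/tmp/machines.lock", 500*time.Millisecond, 13*time.Minute)
        if err != nil {
            panic(err)
        }
        defer release()
        fmt.Println("lock held; safe to provision the machine")
    }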
	I0415 23:54:50.673077   25488 start.go:93] Provisioning new machine with config: &{Name:ha-694782 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-694782 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0415 23:54:50.673135   25488 start.go:125] createHost starting for "" (driver="kvm2")
	I0415 23:54:50.674828   25488 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0415 23:54:50.674949   25488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:54:50.674981   25488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:54:50.688574   25488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36705
	I0415 23:54:50.688970   25488 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:54:50.689477   25488 main.go:141] libmachine: Using API Version  1
	I0415 23:54:50.689501   25488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:54:50.689786   25488 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:54:50.689950   25488 main.go:141] libmachine: (ha-694782) Calling .GetMachineName
	I0415 23:54:50.690098   25488 main.go:141] libmachine: (ha-694782) Calling .DriverName
	I0415 23:54:50.690208   25488 start.go:159] libmachine.API.Create for "ha-694782" (driver="kvm2")
	I0415 23:54:50.690237   25488 client.go:168] LocalClient.Create starting
	I0415 23:54:50.690266   25488 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem
	I0415 23:54:50.690295   25488 main.go:141] libmachine: Decoding PEM data...
	I0415 23:54:50.690308   25488 main.go:141] libmachine: Parsing certificate...
	I0415 23:54:50.690361   25488 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem
	I0415 23:54:50.690379   25488 main.go:141] libmachine: Decoding PEM data...
	I0415 23:54:50.690389   25488 main.go:141] libmachine: Parsing certificate...
	I0415 23:54:50.690411   25488 main.go:141] libmachine: Running pre-create checks...
	I0415 23:54:50.690420   25488 main.go:141] libmachine: (ha-694782) Calling .PreCreateCheck
	I0415 23:54:50.690761   25488 main.go:141] libmachine: (ha-694782) Calling .GetConfigRaw
	I0415 23:54:50.691108   25488 main.go:141] libmachine: Creating machine...
	I0415 23:54:50.691121   25488 main.go:141] libmachine: (ha-694782) Calling .Create
	I0415 23:54:50.691235   25488 main.go:141] libmachine: (ha-694782) Creating KVM machine...
	I0415 23:54:50.692164   25488 main.go:141] libmachine: (ha-694782) DBG | found existing default KVM network
	I0415 23:54:50.692735   25488 main.go:141] libmachine: (ha-694782) DBG | I0415 23:54:50.692624   25511 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ac0}
	I0415 23:54:50.692765   25488 main.go:141] libmachine: (ha-694782) DBG | created network xml: 
	I0415 23:54:50.692781   25488 main.go:141] libmachine: (ha-694782) DBG | <network>
	I0415 23:54:50.692787   25488 main.go:141] libmachine: (ha-694782) DBG |   <name>mk-ha-694782</name>
	I0415 23:54:50.692792   25488 main.go:141] libmachine: (ha-694782) DBG |   <dns enable='no'/>
	I0415 23:54:50.692796   25488 main.go:141] libmachine: (ha-694782) DBG |   
	I0415 23:54:50.692805   25488 main.go:141] libmachine: (ha-694782) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0415 23:54:50.692831   25488 main.go:141] libmachine: (ha-694782) DBG |     <dhcp>
	I0415 23:54:50.692852   25488 main.go:141] libmachine: (ha-694782) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0415 23:54:50.692860   25488 main.go:141] libmachine: (ha-694782) DBG |     </dhcp>
	I0415 23:54:50.692871   25488 main.go:141] libmachine: (ha-694782) DBG |   </ip>
	I0415 23:54:50.692880   25488 main.go:141] libmachine: (ha-694782) DBG |   
	I0415 23:54:50.692898   25488 main.go:141] libmachine: (ha-694782) DBG | </network>
	I0415 23:54:50.692937   25488 main.go:141] libmachine: (ha-694782) DBG | 
	I0415 23:54:50.697386   25488 main.go:141] libmachine: (ha-694782) DBG | trying to create private KVM network mk-ha-694782 192.168.39.0/24...
	I0415 23:54:50.759459   25488 main.go:141] libmachine: (ha-694782) DBG | private KVM network mk-ha-694782 192.168.39.0/24 created
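Before defining the libvirt network mk-ha-694782 from the XML above, the driver probed for a free private /24 and settled on 192.168.39.0/24. A rough sketch of such a scan using only the standard library; the candidate list and the helper name freePrivateSubnet are assumptions, not the actual selection logic in network.go:

    package main

    import (
        "fmt"
        "net"
    )

    // freePrivateSubnet returns the first candidate /24 that does not contain
    // any address already assigned to a local interface.
    func freePrivateSubnet(candidates []string) (*net.IPNet, error) {
        addrs, err := net.InterfaceAddrs()
        if err != nil {
            return nil, err
        }
        for _, c := range candidates {
            _, subnet, err := net.ParseCIDR(c)
            if err != nil {
                return nil, err
            }
            inUse := false
            for _, a := range addrs {
                if ipnet, ok := a.(*net.IPNet); ok && subnet.Contains(ipnet.IP) {
                    inUse = true
                    break
                }
            }
            if !inUse {
                return subnet, nil
            }
        }
        return nil, fmt.Errorf("no free subnet among %v", candidates)
    }

    func main() {
        // 192.168.39.0/24 is the subnet chosen in the log; the other candidates are illustrative.
        s, err := freePrivateSubnet([]string{"192.168.39.0/24", "192.168.49.0/24", "192.168.59.0/24"})
        if err != nil {
            panic(err)
        }
        fmt.Println("using free private subnet", s)
    }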
	I0415 23:54:50.759488   25488 main.go:141] libmachine: (ha-694782) DBG | I0415 23:54:50.759414   25511 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18647-7542/.minikube
	I0415 23:54:50.759622   25488 main.go:141] libmachine: (ha-694782) Setting up store path in /home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782 ...
	I0415 23:54:50.759652   25488 main.go:141] libmachine: (ha-694782) Building disk image from file:///home/jenkins/minikube-integration/18647-7542/.minikube/cache/iso/amd64/minikube-v1.33.0-1713175573-18634-amd64.iso
	I0415 23:54:50.759686   25488 main.go:141] libmachine: (ha-694782) Downloading /home/jenkins/minikube-integration/18647-7542/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18647-7542/.minikube/cache/iso/amd64/minikube-v1.33.0-1713175573-18634-amd64.iso...
	I0415 23:54:50.983326   25488 main.go:141] libmachine: (ha-694782) DBG | I0415 23:54:50.983177   25511 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782/id_rsa...
	I0415 23:54:51.195175   25488 main.go:141] libmachine: (ha-694782) DBG | I0415 23:54:51.195055   25511 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782/ha-694782.rawdisk...
	I0415 23:54:51.195206   25488 main.go:141] libmachine: (ha-694782) DBG | Writing magic tar header
	I0415 23:54:51.195217   25488 main.go:141] libmachine: (ha-694782) DBG | Writing SSH key tar header
	I0415 23:54:51.195228   25488 main.go:141] libmachine: (ha-694782) DBG | I0415 23:54:51.195162   25511 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782 ...
	I0415 23:54:51.195241   25488 main.go:141] libmachine: (ha-694782) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782
	I0415 23:54:51.195349   25488 main.go:141] libmachine: (ha-694782) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18647-7542/.minikube/machines
	I0415 23:54:51.195372   25488 main.go:141] libmachine: (ha-694782) Setting executable bit set on /home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782 (perms=drwx------)
	I0415 23:54:51.195379   25488 main.go:141] libmachine: (ha-694782) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18647-7542/.minikube
	I0415 23:54:51.195389   25488 main.go:141] libmachine: (ha-694782) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18647-7542
	I0415 23:54:51.195395   25488 main.go:141] libmachine: (ha-694782) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0415 23:54:51.195406   25488 main.go:141] libmachine: (ha-694782) DBG | Checking permissions on dir: /home/jenkins
	I0415 23:54:51.195411   25488 main.go:141] libmachine: (ha-694782) DBG | Checking permissions on dir: /home
	I0415 23:54:51.195421   25488 main.go:141] libmachine: (ha-694782) DBG | Skipping /home - not owner
	I0415 23:54:51.195431   25488 main.go:141] libmachine: (ha-694782) Setting executable bit set on /home/jenkins/minikube-integration/18647-7542/.minikube/machines (perms=drwxr-xr-x)
	I0415 23:54:51.195444   25488 main.go:141] libmachine: (ha-694782) Setting executable bit set on /home/jenkins/minikube-integration/18647-7542/.minikube (perms=drwxr-xr-x)
	I0415 23:54:51.195454   25488 main.go:141] libmachine: (ha-694782) Setting executable bit set on /home/jenkins/minikube-integration/18647-7542 (perms=drwxrwxr-x)
	I0415 23:54:51.195466   25488 main.go:141] libmachine: (ha-694782) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0415 23:54:51.195475   25488 main.go:141] libmachine: (ha-694782) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0415 23:54:51.195486   25488 main.go:141] libmachine: (ha-694782) Creating domain...
	I0415 23:54:51.196527   25488 main.go:141] libmachine: (ha-694782) define libvirt domain using xml: 
	I0415 23:54:51.196566   25488 main.go:141] libmachine: (ha-694782) <domain type='kvm'>
	I0415 23:54:51.196577   25488 main.go:141] libmachine: (ha-694782)   <name>ha-694782</name>
	I0415 23:54:51.196589   25488 main.go:141] libmachine: (ha-694782)   <memory unit='MiB'>2200</memory>
	I0415 23:54:51.196600   25488 main.go:141] libmachine: (ha-694782)   <vcpu>2</vcpu>
	I0415 23:54:51.196611   25488 main.go:141] libmachine: (ha-694782)   <features>
	I0415 23:54:51.196623   25488 main.go:141] libmachine: (ha-694782)     <acpi/>
	I0415 23:54:51.196633   25488 main.go:141] libmachine: (ha-694782)     <apic/>
	I0415 23:54:51.196645   25488 main.go:141] libmachine: (ha-694782)     <pae/>
	I0415 23:54:51.196662   25488 main.go:141] libmachine: (ha-694782)     
	I0415 23:54:51.196696   25488 main.go:141] libmachine: (ha-694782)   </features>
	I0415 23:54:51.196719   25488 main.go:141] libmachine: (ha-694782)   <cpu mode='host-passthrough'>
	I0415 23:54:51.196733   25488 main.go:141] libmachine: (ha-694782)   
	I0415 23:54:51.196743   25488 main.go:141] libmachine: (ha-694782)   </cpu>
	I0415 23:54:51.196751   25488 main.go:141] libmachine: (ha-694782)   <os>
	I0415 23:54:51.196763   25488 main.go:141] libmachine: (ha-694782)     <type>hvm</type>
	I0415 23:54:51.196773   25488 main.go:141] libmachine: (ha-694782)     <boot dev='cdrom'/>
	I0415 23:54:51.196785   25488 main.go:141] libmachine: (ha-694782)     <boot dev='hd'/>
	I0415 23:54:51.196799   25488 main.go:141] libmachine: (ha-694782)     <bootmenu enable='no'/>
	I0415 23:54:51.196814   25488 main.go:141] libmachine: (ha-694782)   </os>
	I0415 23:54:51.196826   25488 main.go:141] libmachine: (ha-694782)   <devices>
	I0415 23:54:51.196839   25488 main.go:141] libmachine: (ha-694782)     <disk type='file' device='cdrom'>
	I0415 23:54:51.196856   25488 main.go:141] libmachine: (ha-694782)       <source file='/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782/boot2docker.iso'/>
	I0415 23:54:51.196869   25488 main.go:141] libmachine: (ha-694782)       <target dev='hdc' bus='scsi'/>
	I0415 23:54:51.196882   25488 main.go:141] libmachine: (ha-694782)       <readonly/>
	I0415 23:54:51.196901   25488 main.go:141] libmachine: (ha-694782)     </disk>
	I0415 23:54:51.196915   25488 main.go:141] libmachine: (ha-694782)     <disk type='file' device='disk'>
	I0415 23:54:51.196927   25488 main.go:141] libmachine: (ha-694782)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0415 23:54:51.196941   25488 main.go:141] libmachine: (ha-694782)       <source file='/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782/ha-694782.rawdisk'/>
	I0415 23:54:51.196968   25488 main.go:141] libmachine: (ha-694782)       <target dev='hda' bus='virtio'/>
	I0415 23:54:51.196989   25488 main.go:141] libmachine: (ha-694782)     </disk>
	I0415 23:54:51.197010   25488 main.go:141] libmachine: (ha-694782)     <interface type='network'>
	I0415 23:54:51.197023   25488 main.go:141] libmachine: (ha-694782)       <source network='mk-ha-694782'/>
	I0415 23:54:51.197031   25488 main.go:141] libmachine: (ha-694782)       <model type='virtio'/>
	I0415 23:54:51.197044   25488 main.go:141] libmachine: (ha-694782)     </interface>
	I0415 23:54:51.197055   25488 main.go:141] libmachine: (ha-694782)     <interface type='network'>
	I0415 23:54:51.197093   25488 main.go:141] libmachine: (ha-694782)       <source network='default'/>
	I0415 23:54:51.197113   25488 main.go:141] libmachine: (ha-694782)       <model type='virtio'/>
	I0415 23:54:51.197123   25488 main.go:141] libmachine: (ha-694782)     </interface>
	I0415 23:54:51.197134   25488 main.go:141] libmachine: (ha-694782)     <serial type='pty'>
	I0415 23:54:51.197146   25488 main.go:141] libmachine: (ha-694782)       <target port='0'/>
	I0415 23:54:51.197171   25488 main.go:141] libmachine: (ha-694782)     </serial>
	I0415 23:54:51.197184   25488 main.go:141] libmachine: (ha-694782)     <console type='pty'>
	I0415 23:54:51.197199   25488 main.go:141] libmachine: (ha-694782)       <target type='serial' port='0'/>
	I0415 23:54:51.197222   25488 main.go:141] libmachine: (ha-694782)     </console>
	I0415 23:54:51.197232   25488 main.go:141] libmachine: (ha-694782)     <rng model='virtio'>
	I0415 23:54:51.197246   25488 main.go:141] libmachine: (ha-694782)       <backend model='random'>/dev/random</backend>
	I0415 23:54:51.197256   25488 main.go:141] libmachine: (ha-694782)     </rng>
	I0415 23:54:51.197267   25488 main.go:141] libmachine: (ha-694782)     
	I0415 23:54:51.197277   25488 main.go:141] libmachine: (ha-694782)     
	I0415 23:54:51.197295   25488 main.go:141] libmachine: (ha-694782)   </devices>
	I0415 23:54:51.197313   25488 main.go:141] libmachine: (ha-694782) </domain>
	I0415 23:54:51.197328   25488 main.go:141] libmachine: (ha-694782) 
	I0415 23:54:51.201777   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:35:5b:51 in network default
	I0415 23:54:51.202454   25488 main.go:141] libmachine: (ha-694782) Ensuring networks are active...
	I0415 23:54:51.202474   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:54:51.203123   25488 main.go:141] libmachine: (ha-694782) Ensuring network default is active
	I0415 23:54:51.203409   25488 main.go:141] libmachine: (ha-694782) Ensuring network mk-ha-694782 is active
	I0415 23:54:51.203979   25488 main.go:141] libmachine: (ha-694782) Getting domain xml...
	I0415 23:54:51.204605   25488 main.go:141] libmachine: (ha-694782) Creating domain...
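With both networks active, the driver defines the domain from the XML above and boots it ("Creating domain..."). The kvm2 driver does this through libvirt's API, so the following is only an illustrative command-line equivalent using virsh; the XML file path is hypothetical, the domain name comes from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // defineAndStart registers a domain from an XML file and boots it, the
    // virsh counterpart of the define/create steps in the log.
    func defineAndStart(xmlPath, name string) error {
        for _, args := range [][]string{
            {"define", xmlPath},
            {"start", name},
        } {
            if out, err := exec.Command("virsh", args...).CombinedOutput(); err != nil {
                return fmt.Errorf("virsh %v: %v: %s", args, err, out)
            }
        }
        return nil
    }

    func main() {
        if err := defineAndStart("/tmp/ha-694782.xml", "ha-694782"); err != nil {
            panic(err)
        }
        fmt.Println("domain ha-694782 defined and running")
    }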
	I0415 23:54:52.375923   25488 main.go:141] libmachine: (ha-694782) Waiting to get IP...
	I0415 23:54:52.376780   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:54:52.377171   25488 main.go:141] libmachine: (ha-694782) DBG | unable to find current IP address of domain ha-694782 in network mk-ha-694782
	I0415 23:54:52.377193   25488 main.go:141] libmachine: (ha-694782) DBG | I0415 23:54:52.377133   25511 retry.go:31] will retry after 224.827585ms: waiting for machine to come up
	I0415 23:54:52.603557   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:54:52.603998   25488 main.go:141] libmachine: (ha-694782) DBG | unable to find current IP address of domain ha-694782 in network mk-ha-694782
	I0415 23:54:52.604028   25488 main.go:141] libmachine: (ha-694782) DBG | I0415 23:54:52.603944   25511 retry.go:31] will retry after 374.072733ms: waiting for machine to come up
	I0415 23:54:52.979256   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:54:52.979640   25488 main.go:141] libmachine: (ha-694782) DBG | unable to find current IP address of domain ha-694782 in network mk-ha-694782
	I0415 23:54:52.979666   25488 main.go:141] libmachine: (ha-694782) DBG | I0415 23:54:52.979605   25511 retry.go:31] will retry after 418.209312ms: waiting for machine to come up
	I0415 23:54:53.399075   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:54:53.399504   25488 main.go:141] libmachine: (ha-694782) DBG | unable to find current IP address of domain ha-694782 in network mk-ha-694782
	I0415 23:54:53.399530   25488 main.go:141] libmachine: (ha-694782) DBG | I0415 23:54:53.399477   25511 retry.go:31] will retry after 586.006563ms: waiting for machine to come up
	I0415 23:54:53.987292   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:54:53.987709   25488 main.go:141] libmachine: (ha-694782) DBG | unable to find current IP address of domain ha-694782 in network mk-ha-694782
	I0415 23:54:53.987737   25488 main.go:141] libmachine: (ha-694782) DBG | I0415 23:54:53.987682   25511 retry.go:31] will retry after 585.019145ms: waiting for machine to come up
	I0415 23:54:54.574356   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:54:54.574841   25488 main.go:141] libmachine: (ha-694782) DBG | unable to find current IP address of domain ha-694782 in network mk-ha-694782
	I0415 23:54:54.574881   25488 main.go:141] libmachine: (ha-694782) DBG | I0415 23:54:54.574744   25511 retry.go:31] will retry after 693.591633ms: waiting for machine to come up
	I0415 23:54:55.269527   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:54:55.269989   25488 main.go:141] libmachine: (ha-694782) DBG | unable to find current IP address of domain ha-694782 in network mk-ha-694782
	I0415 23:54:55.270019   25488 main.go:141] libmachine: (ha-694782) DBG | I0415 23:54:55.269932   25511 retry.go:31] will retry after 952.212929ms: waiting for machine to come up
	I0415 23:54:56.223471   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:54:56.223979   25488 main.go:141] libmachine: (ha-694782) DBG | unable to find current IP address of domain ha-694782 in network mk-ha-694782
	I0415 23:54:56.224024   25488 main.go:141] libmachine: (ha-694782) DBG | I0415 23:54:56.223944   25511 retry.go:31] will retry after 1.09753914s: waiting for machine to come up
	I0415 23:54:57.323068   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:54:57.323533   25488 main.go:141] libmachine: (ha-694782) DBG | unable to find current IP address of domain ha-694782 in network mk-ha-694782
	I0415 23:54:57.323562   25488 main.go:141] libmachine: (ha-694782) DBG | I0415 23:54:57.323486   25511 retry.go:31] will retry after 1.219162056s: waiting for machine to come up
	I0415 23:54:58.544818   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:54:58.545234   25488 main.go:141] libmachine: (ha-694782) DBG | unable to find current IP address of domain ha-694782 in network mk-ha-694782
	I0415 23:54:58.545264   25488 main.go:141] libmachine: (ha-694782) DBG | I0415 23:54:58.545190   25511 retry.go:31] will retry after 1.688054549s: waiting for machine to come up
	I0415 23:55:00.234436   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:00.234954   25488 main.go:141] libmachine: (ha-694782) DBG | unable to find current IP address of domain ha-694782 in network mk-ha-694782
	I0415 23:55:00.234978   25488 main.go:141] libmachine: (ha-694782) DBG | I0415 23:55:00.234918   25511 retry.go:31] will retry after 2.111494169s: waiting for machine to come up
	I0415 23:55:02.349084   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:02.349555   25488 main.go:141] libmachine: (ha-694782) DBG | unable to find current IP address of domain ha-694782 in network mk-ha-694782
	I0415 23:55:02.349582   25488 main.go:141] libmachine: (ha-694782) DBG | I0415 23:55:02.349515   25511 retry.go:31] will retry after 2.352035476s: waiting for machine to come up
	I0415 23:55:04.704991   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:04.705417   25488 main.go:141] libmachine: (ha-694782) DBG | unable to find current IP address of domain ha-694782 in network mk-ha-694782
	I0415 23:55:04.705465   25488 main.go:141] libmachine: (ha-694782) DBG | I0415 23:55:04.705380   25511 retry.go:31] will retry after 4.46217908s: waiting for machine to come up
	I0415 23:55:09.171025   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:09.171427   25488 main.go:141] libmachine: (ha-694782) DBG | unable to find current IP address of domain ha-694782 in network mk-ha-694782
	I0415 23:55:09.171457   25488 main.go:141] libmachine: (ha-694782) DBG | I0415 23:55:09.171373   25511 retry.go:31] will retry after 5.185782553s: waiting for machine to come up
	I0415 23:55:14.361556   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:14.362012   25488 main.go:141] libmachine: (ha-694782) Found IP for machine: 192.168.39.41
	I0415 23:55:14.362044   25488 main.go:141] libmachine: (ha-694782) Reserving static IP address...
	I0415 23:55:14.362062   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has current primary IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:14.362420   25488 main.go:141] libmachine: (ha-694782) DBG | unable to find host DHCP lease matching {name: "ha-694782", mac: "52:54:00:b4:cb:f8", ip: "192.168.39.41"} in network mk-ha-694782
	I0415 23:55:14.430861   25488 main.go:141] libmachine: (ha-694782) Reserved static IP address: 192.168.39.41
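The "Waiting to get IP..." phase above is a retry loop: each failed DHCP-lease lookup schedules another attempt after a growing delay (224ms, 374ms, 418ms, ... up to several seconds) until the lease for 52:54:00:b4:cb:f8 shows up. A small sketch of that retry-with-backoff shape; the growth rule and jitter are assumptions rather than the exact retry.go policy:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff calls lookup until it returns an IP or the deadline
    // passes, sleeping a jittered, growing interval between attempts.
    func retryWithBackoff(lookup func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        base := 200 * time.Millisecond
        for attempt := 1; ; attempt++ {
            ip, err := lookup()
            if err == nil {
                return ip, nil
            }
            if time.Now().After(deadline) {
                return "", fmt.Errorf("gave up after %d attempts: %w", attempt, err)
            }
            wait := base*time.Duration(attempt) + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
            time.Sleep(wait)
        }
    }

    func main() {
        ip, err := retryWithBackoff(func() (string, error) {
            return "", errors.New("unable to find current IP address") // stand-in for the DHCP lease lookup
        }, 2*time.Second)
        fmt.Println(ip, err)
    }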
	I0415 23:55:14.430886   25488 main.go:141] libmachine: (ha-694782) Waiting for SSH to be available...
	I0415 23:55:14.430895   25488 main.go:141] libmachine: (ha-694782) DBG | Getting to WaitForSSH function...
	I0415 23:55:14.433318   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:14.433645   25488 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b4:cb:f8}
	I0415 23:55:14.433674   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:14.433816   25488 main.go:141] libmachine: (ha-694782) DBG | Using SSH client type: external
	I0415 23:55:14.433837   25488 main.go:141] libmachine: (ha-694782) DBG | Using SSH private key: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782/id_rsa (-rw-------)
	I0415 23:55:14.433899   25488 main.go:141] libmachine: (ha-694782) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.41 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0415 23:55:14.433921   25488 main.go:141] libmachine: (ha-694782) DBG | About to run SSH command:
	I0415 23:55:14.433935   25488 main.go:141] libmachine: (ha-694782) DBG | exit 0
	I0415 23:55:14.565504   25488 main.go:141] libmachine: (ha-694782) DBG | SSH cmd err, output: <nil>: 
	I0415 23:55:14.565793   25488 main.go:141] libmachine: (ha-694782) KVM machine creation complete!
	I0415 23:55:14.566100   25488 main.go:141] libmachine: (ha-694782) Calling .GetConfigRaw
	I0415 23:55:14.566610   25488 main.go:141] libmachine: (ha-694782) Calling .DriverName
	I0415 23:55:14.566767   25488 main.go:141] libmachine: (ha-694782) Calling .DriverName
	I0415 23:55:14.566968   25488 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0415 23:55:14.566985   25488 main.go:141] libmachine: (ha-694782) Calling .GetState
	I0415 23:55:14.568071   25488 main.go:141] libmachine: Detecting operating system of created instance...
	I0415 23:55:14.568085   25488 main.go:141] libmachine: Waiting for SSH to be available...
	I0415 23:55:14.568090   25488 main.go:141] libmachine: Getting to WaitForSSH function...
	I0415 23:55:14.568096   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0415 23:55:14.570429   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:14.570739   25488 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0415 23:55:14.570789   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:14.570842   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0415 23:55:14.571017   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0415 23:55:14.571161   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0415 23:55:14.571312   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0415 23:55:14.571497   25488 main.go:141] libmachine: Using SSH client type: native
	I0415 23:55:14.571722   25488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I0415 23:55:14.571735   25488 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0415 23:55:14.684565   25488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
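SSH readiness is checked by repeatedly running the no-op command "exit 0", first via the external ssh binary and then, as here, with a native client. A minimal probe in the same spirit using golang.org/x/crypto/ssh; the address, user and key path are taken from the log, while the helper name and single-attempt behaviour are assumptions:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // sshReady returns nil once "exit 0" succeeds over SSH, i.e. sshd in the
    // guest accepts connections and the injected key is authorized.
    func sshReady(addr, user, keyPath string) error {
        pemBytes, err := os.ReadFile(keyPath)
        if err != nil {
            return err
        }
        signer, err := ssh.ParsePrivateKey(pemBytes)
        if err != nil {
            return err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // same effect as StrictHostKeyChecking=no above
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        return sess.Run("exit 0")
    }

    func main() {
        err := sshReady("192.168.39.41:22", "docker", "/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782/id_rsa")
        fmt.Println("ssh ready:", err == nil, err)
    }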
	I0415 23:55:14.684586   25488 main.go:141] libmachine: Detecting the provisioner...
	I0415 23:55:14.684593   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0415 23:55:14.687560   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:14.687976   25488 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0415 23:55:14.688024   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:14.688124   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0415 23:55:14.688336   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0415 23:55:14.688496   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0415 23:55:14.688650   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0415 23:55:14.688772   25488 main.go:141] libmachine: Using SSH client type: native
	I0415 23:55:14.688924   25488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I0415 23:55:14.688933   25488 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0415 23:55:14.802013   25488 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0415 23:55:14.802112   25488 main.go:141] libmachine: found compatible host: buildroot
	I0415 23:55:14.802126   25488 main.go:141] libmachine: Provisioning with buildroot...
	I0415 23:55:14.802142   25488 main.go:141] libmachine: (ha-694782) Calling .GetMachineName
	I0415 23:55:14.802388   25488 buildroot.go:166] provisioning hostname "ha-694782"
	I0415 23:55:14.802411   25488 main.go:141] libmachine: (ha-694782) Calling .GetMachineName
	I0415 23:55:14.802594   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0415 23:55:14.804880   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:14.805196   25488 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0415 23:55:14.805226   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:14.805346   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0415 23:55:14.805531   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0415 23:55:14.805686   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0415 23:55:14.805808   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0415 23:55:14.805939   25488 main.go:141] libmachine: Using SSH client type: native
	I0415 23:55:14.806093   25488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I0415 23:55:14.806105   25488 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-694782 && echo "ha-694782" | sudo tee /etc/hostname
	I0415 23:55:14.939097   25488 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-694782
	
	I0415 23:55:14.939122   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0415 23:55:14.941608   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:14.941926   25488 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0415 23:55:14.941961   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:14.942109   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0415 23:55:14.942272   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0415 23:55:14.942416   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0415 23:55:14.942559   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0415 23:55:14.942714   25488 main.go:141] libmachine: Using SSH client type: native
	I0415 23:55:14.942857   25488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I0415 23:55:14.942872   25488 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-694782' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-694782/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-694782' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0415 23:55:15.063677   25488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0415 23:55:15.063703   25488 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18647-7542/.minikube CaCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18647-7542/.minikube}
	I0415 23:55:15.063742   25488 buildroot.go:174] setting up certificates
	I0415 23:55:15.063750   25488 provision.go:84] configureAuth start
	I0415 23:55:15.063759   25488 main.go:141] libmachine: (ha-694782) Calling .GetMachineName
	I0415 23:55:15.064013   25488 main.go:141] libmachine: (ha-694782) Calling .GetIP
	I0415 23:55:15.066530   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:15.066860   25488 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0415 23:55:15.066889   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:15.066993   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0415 23:55:15.069088   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:15.069416   25488 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0415 23:55:15.069445   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:15.069606   25488 provision.go:143] copyHostCerts
	I0415 23:55:15.069633   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem
	I0415 23:55:15.069663   25488 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem, removing ...
	I0415 23:55:15.069671   25488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem
	I0415 23:55:15.069735   25488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem (1082 bytes)
	I0415 23:55:15.069840   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem
	I0415 23:55:15.069859   25488 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem, removing ...
	I0415 23:55:15.069864   25488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem
	I0415 23:55:15.069890   25488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem (1123 bytes)
	I0415 23:55:15.069983   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem
	I0415 23:55:15.070002   25488 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem, removing ...
	I0415 23:55:15.070008   25488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem
	I0415 23:55:15.070030   25488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem (1675 bytes)
	I0415 23:55:15.070090   25488 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem org=jenkins.ha-694782 san=[127.0.0.1 192.168.39.41 ha-694782 localhost minikube]
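configureAuth then issues a server certificate signed by the CA under .minikube/certs, with the SANs listed above (127.0.0.1, 192.168.39.41, ha-694782, localhost, minikube). A compact crypto/x509 sketch of issuing a certificate with those SANs; it generates a throwaway CA in memory instead of loading ca.pem/ca-key.pem, and error handling is elided for brevity:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA; the real run loads ca.pem/ca-key.pem from .minikube/certs.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate carrying the SANs seen in the log.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-694782"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            DNSNames:     []string{"ha-694782", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.41")},
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }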
	I0415 23:55:15.187615   25488 provision.go:177] copyRemoteCerts
	I0415 23:55:15.187670   25488 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0415 23:55:15.187690   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0415 23:55:15.190182   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:15.190508   25488 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0415 23:55:15.190534   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:15.190765   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0415 23:55:15.190934   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0415 23:55:15.191081   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0415 23:55:15.191241   25488 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782/id_rsa Username:docker}
	I0415 23:55:15.275500   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0415 23:55:15.275557   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0415 23:55:15.300618   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0415 23:55:15.300671   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0415 23:55:15.324501   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0415 23:55:15.324558   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0415 23:55:15.353326   25488 provision.go:87] duration metric: took 289.565249ms to configureAuth
	I0415 23:55:15.353354   25488 buildroot.go:189] setting minikube options for container-runtime
	I0415 23:55:15.353553   25488 config.go:182] Loaded profile config "ha-694782": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0415 23:55:15.353632   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0415 23:55:15.356289   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:15.356631   25488 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0415 23:55:15.356654   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:15.356822   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0415 23:55:15.356967   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0415 23:55:15.357093   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0415 23:55:15.357254   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0415 23:55:15.357413   25488 main.go:141] libmachine: Using SSH client type: native
	I0415 23:55:15.357562   25488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I0415 23:55:15.357577   25488 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0415 23:55:15.644902   25488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0415 23:55:15.644936   25488 main.go:141] libmachine: Checking connection to Docker...
	I0415 23:55:15.644946   25488 main.go:141] libmachine: (ha-694782) Calling .GetURL
	I0415 23:55:15.646292   25488 main.go:141] libmachine: (ha-694782) DBG | Using libvirt version 6000000
	I0415 23:55:15.648691   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:15.648986   25488 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0415 23:55:15.649016   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:15.649207   25488 main.go:141] libmachine: Docker is up and running!
	I0415 23:55:15.649225   25488 main.go:141] libmachine: Reticulating splines...
	I0415 23:55:15.649233   25488 client.go:171] duration metric: took 24.958985907s to LocalClient.Create
	I0415 23:55:15.649255   25488 start.go:167] duration metric: took 24.959056749s to libmachine.API.Create "ha-694782"
	I0415 23:55:15.649267   25488 start.go:293] postStartSetup for "ha-694782" (driver="kvm2")
	I0415 23:55:15.649283   25488 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0415 23:55:15.649303   25488 main.go:141] libmachine: (ha-694782) Calling .DriverName
	I0415 23:55:15.649576   25488 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0415 23:55:15.649615   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0415 23:55:15.651796   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:15.652094   25488 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0415 23:55:15.652124   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:15.652232   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0415 23:55:15.652375   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0415 23:55:15.652489   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0415 23:55:15.652562   25488 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782/id_rsa Username:docker}
	I0415 23:55:15.739914   25488 ssh_runner.go:195] Run: cat /etc/os-release
	I0415 23:55:15.743991   25488 info.go:137] Remote host: Buildroot 2023.02.9
	I0415 23:55:15.744009   25488 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/addons for local assets ...
	I0415 23:55:15.744093   25488 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/files for local assets ...
	I0415 23:55:15.744167   25488 filesync.go:149] local asset: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem -> 148972.pem in /etc/ssl/certs
	I0415 23:55:15.744176   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem -> /etc/ssl/certs/148972.pem
	I0415 23:55:15.744265   25488 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0415 23:55:15.754262   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /etc/ssl/certs/148972.pem (1708 bytes)
	I0415 23:55:15.779399   25488 start.go:296] duration metric: took 130.115766ms for postStartSetup
	I0415 23:55:15.779460   25488 main.go:141] libmachine: (ha-694782) Calling .GetConfigRaw
	I0415 23:55:15.780056   25488 main.go:141] libmachine: (ha-694782) Calling .GetIP
	I0415 23:55:15.782419   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:15.782804   25488 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0415 23:55:15.782825   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:15.783057   25488 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/config.json ...
	I0415 23:55:15.783234   25488 start.go:128] duration metric: took 25.110089598s to createHost
	I0415 23:55:15.783255   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0415 23:55:15.785447   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:15.785727   25488 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0415 23:55:15.785747   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:15.785876   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0415 23:55:15.786056   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0415 23:55:15.786216   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0415 23:55:15.786372   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0415 23:55:15.786532   25488 main.go:141] libmachine: Using SSH client type: native
	I0415 23:55:15.786679   25488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I0415 23:55:15.786693   25488 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0415 23:55:15.897712   25488 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713225315.873046844
	
	I0415 23:55:15.897732   25488 fix.go:216] guest clock: 1713225315.873046844
	I0415 23:55:15.897738   25488 fix.go:229] Guest: 2024-04-15 23:55:15.873046844 +0000 UTC Remote: 2024-04-15 23:55:15.78324668 +0000 UTC m=+25.222880995 (delta=89.800164ms)
	I0415 23:55:15.897755   25488 fix.go:200] guest clock delta is within tolerance: 89.800164ms
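The clock check reads the guest's time with date +%s.%N, compares it against the host's, and only forces a resync when the delta exceeds a tolerance; here the 89.8ms drift passes. A small sketch of that comparison, reusing the two timestamps captured above (the 1s tolerance is an assumption, not necessarily minikube's threshold):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock turns the "seconds.nanoseconds" output of `date +%s.%N`
    // into a time.Time; %N is zero-padded to nine digits, so a direct parse works.
    func parseGuestClock(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        if len(parts) != 2 {
            return time.Time{}, fmt.Errorf("unexpected clock output %q", out)
        }
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        nsec, err := strconv.ParseInt(parts[1], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        return time.Unix(sec, nsec).UTC(), nil
    }

    func main() {
        guest, err := parseGuestClock("1713225315.873046844") // guest clock captured in the log
        if err != nil {
            panic(err)
        }
        // Host-side timestamp from the same log line.
        host, _ := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", "2024-04-15 23:55:15.78324668 +0000 UTC")
        delta := guest.Sub(host)
        const tolerance = time.Second
        fmt.Printf("delta=%s within tolerance=%v\n", delta, delta < tolerance && delta > -tolerance)
    }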
	I0415 23:55:15.897760   25488 start.go:83] releasing machines lock for "ha-694782", held for 25.224690951s
	I0415 23:55:15.897776   25488 main.go:141] libmachine: (ha-694782) Calling .DriverName
	I0415 23:55:15.898024   25488 main.go:141] libmachine: (ha-694782) Calling .GetIP
	I0415 23:55:15.900296   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:15.900562   25488 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0415 23:55:15.900584   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:15.900703   25488 main.go:141] libmachine: (ha-694782) Calling .DriverName
	I0415 23:55:15.901150   25488 main.go:141] libmachine: (ha-694782) Calling .DriverName
	I0415 23:55:15.901336   25488 main.go:141] libmachine: (ha-694782) Calling .DriverName
	I0415 23:55:15.901390   25488 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0415 23:55:15.901416   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0415 23:55:15.901532   25488 ssh_runner.go:195] Run: cat /version.json
	I0415 23:55:15.901552   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0415 23:55:15.904140   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:15.904239   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:15.904474   25488 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0415 23:55:15.904498   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:15.904520   25488 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0415 23:55:15.904576   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:15.904628   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0415 23:55:15.904810   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0415 23:55:15.904827   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0415 23:55:15.904972   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0415 23:55:15.904979   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0415 23:55:15.905139   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0415 23:55:15.905150   25488 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782/id_rsa Username:docker}
	I0415 23:55:15.905287   25488 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782/id_rsa Username:docker}
	I0415 23:55:15.986278   25488 ssh_runner.go:195] Run: systemctl --version
	I0415 23:55:16.016341   25488 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0415 23:55:16.177768   25488 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0415 23:55:16.184471   25488 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0415 23:55:16.184546   25488 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0415 23:55:16.200414   25488 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0415 23:55:16.200437   25488 start.go:494] detecting cgroup driver to use...
	I0415 23:55:16.200486   25488 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0415 23:55:16.216228   25488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 23:55:16.230211   25488 docker.go:217] disabling cri-docker service (if available) ...
	I0415 23:55:16.230270   25488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0415 23:55:16.243548   25488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0415 23:55:16.256840   25488 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0415 23:55:16.378336   25488 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0415 23:55:16.521593   25488 docker.go:233] disabling docker service ...
	I0415 23:55:16.521678   25488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0415 23:55:16.536397   25488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0415 23:55:16.549035   25488 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0415 23:55:16.681131   25488 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0415 23:55:16.806474   25488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
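
Above, minikube makes CRI-O the only usable runtime by stopping, disabling, and masking the cri-docker and docker units; masking links a unit to /dev/null so nothing can start it again, even as a dependency. A condensed sketch of that sequence, assuming the unit names from the log are present on the guest:

    # Stop and mask cri-dockerd so kubelet cannot fall back to it.
    sudo systemctl stop -f cri-docker.socket cri-docker.service
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service
    # Do the same for docker itself, then confirm it is inactive.
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service
    sudo systemctl is-active --quiet docker || echo "docker is inactive"
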
	I0415 23:55:16.820636   25488 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 23:55:16.839039   25488 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0415 23:55:16.839089   25488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0415 23:55:16.848913   25488 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0415 23:55:16.848969   25488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0415 23:55:16.859109   25488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0415 23:55:16.869053   25488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0415 23:55:16.879029   25488 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0415 23:55:16.889245   25488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0415 23:55:16.899207   25488 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0415 23:55:16.916484   25488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
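
The sed invocations above edit /etc/crio/crio.conf.d/02-crio.conf in place: they pin the pause (sandbox) image, switch the cgroup manager to cgroupfs, recreate conmon_cgroup next to it, and open unprivileged low ports via default_sysctls. A consolidated sketch of those edits, using the same file path and expressions as the log (run on the guest):

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    # Pin the sandbox image and the cgroup driver kubelet will also be configured with.
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    # Recreate conmon_cgroup right after cgroup_manager so the two stay consistent.
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    # Ensure a default_sysctls array exists, then allow pods to bind ports below 1024.
    sudo grep -q '^ *default_sysctls' "$CONF" || \
      sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
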
	I0415 23:55:16.926771   25488 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0415 23:55:16.936287   25488 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0415 23:55:16.936361   25488 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0415 23:55:16.950617   25488 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
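
The failed sysctl above only means the br_netfilter module was not yet loaded, so /proc/sys/net/bridge/ did not exist; minikube reacts by loading the module and enabling IPv4 forwarding, both of which kube-proxy and the bridge CNI depend on. A sketch of the same check-then-fix sequence:

    # If the bridge netfilter sysctl is missing, load the module that provides it.
    if ! sudo sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
      sudo modprobe br_netfilter
    fi
    # Kubernetes also needs IPv4 forwarding on the node.
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
    # Verify both settings took effect.
    sudo sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
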
	I0415 23:55:16.962389   25488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 23:55:17.097679   25488 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0415 23:55:17.232809   25488 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0415 23:55:17.232871   25488 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0415 23:55:17.237690   25488 start.go:562] Will wait 60s for crictl version
	I0415 23:55:17.237789   25488 ssh_runner.go:195] Run: which crictl
	I0415 23:55:17.241636   25488 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0415 23:55:17.280999   25488 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0415 23:55:17.281065   25488 ssh_runner.go:195] Run: crio --version
	I0415 23:55:17.309439   25488 ssh_runner.go:195] Run: crio --version
	I0415 23:55:17.339564   25488 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0415 23:55:17.340837   25488 main.go:141] libmachine: (ha-694782) Calling .GetIP
	I0415 23:55:17.343246   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:17.343523   25488 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0415 23:55:17.343549   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:17.343708   25488 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0415 23:55:17.348000   25488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0415 23:55:17.363205   25488 kubeadm.go:877] updating cluster {Name:ha-694782 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Cl
usterName:ha-694782 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0415 23:55:17.363323   25488 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0415 23:55:17.363376   25488 ssh_runner.go:195] Run: sudo crictl images --output json
	I0415 23:55:17.402360   25488 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0415 23:55:17.402437   25488 ssh_runner.go:195] Run: which lz4
	I0415 23:55:17.406895   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0415 23:55:17.406975   25488 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0415 23:55:17.411512   25488 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0415 23:55:17.411539   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0415 23:55:18.881825   25488 crio.go:462] duration metric: took 1.474872286s to copy over tarball
	I0415 23:55:18.881923   25488 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0415 23:55:21.112954   25488 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.230999706s)
	I0415 23:55:21.112993   25488 crio.go:469] duration metric: took 2.231139178s to extract the tarball
	I0415 23:55:21.113002   25488 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0415 23:55:21.150983   25488 ssh_runner.go:195] Run: sudo crictl images --output json
	I0415 23:55:21.197262   25488 crio.go:514] all images are preloaded for cri-o runtime.
	I0415 23:55:21.197287   25488 cache_images.go:84] Images are preloaded, skipping loading
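
After the preload tarball is extracted into /var, the second `crictl images --output json` pass confirms every image required for v1.29.3 is already in CRI-O's store, so no registry pulls are needed. A quick way to spot-check this by hand on the guest (jq is only an illustration and may not be installed in the minikube VM; the plain listing works without it):

    # List preloaded images known to CRI-O and look for the control-plane components.
    sudo crictl images --output json | jq -r '.images[].repoTags[]' | grep 'registry.k8s.io/kube-'
    # Without jq:
    sudo crictl images | grep 'registry.k8s.io/kube-apiserver'
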
	I0415 23:55:21.197294   25488 kubeadm.go:928] updating node { 192.168.39.41 8443 v1.29.3 crio true true} ...
	I0415 23:55:21.197411   25488 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-694782 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.41
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-694782 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
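
In the generated kubelet drop-in above, the bare `ExecStart=` line is deliberate: for a systemd service, assigning an empty value clears the ExecStart inherited from the packaged unit, and the following line replaces it with minikube's kubelet invocation (written later in the log to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf). Once the daemon-reload further down has run, the override can be inspected on the guest:

    # Show the packaged unit together with the drop-in that overrides it.
    systemctl cat kubelet
    # Confirm the effective command line is the minikube one.
    systemctl show kubelet -p ExecStart --no-pager
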
	I0415 23:55:21.197502   25488 ssh_runner.go:195] Run: crio config
	I0415 23:55:21.248535   25488 cni.go:84] Creating CNI manager for ""
	I0415 23:55:21.248557   25488 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0415 23:55:21.248567   25488 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0415 23:55:21.248591   25488 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.41 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-694782 NodeName:ha-694782 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.41"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.41 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0415 23:55:21.248721   25488 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.41
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-694782"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.41
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.41"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
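
The combined kubeadm config above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) can be exercised before it touches the node: `kubeadm init --dry-run` runs the same preflight and manifest-generation steps but writes everything to a temporary directory. A sketch, assuming the file has been saved to /var/tmp/minikube/kubeadm.yaml as it is later in this log:

    # Validate the generated config without modifying the host.
    sudo /var/lib/minikube/binaries/v1.29.3/kubeadm init \
        --config /var/tmp/minikube/kubeadm.yaml --dry-run
    # Or render kubeadm's defaulted configuration for comparison.
    sudo /var/lib/minikube/binaries/v1.29.3/kubeadm config print init-defaults
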
	
	I0415 23:55:21.248752   25488 kube-vip.go:111] generating kube-vip config ...
	I0415 23:55:21.248795   25488 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0415 23:55:21.264955   25488 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0415 23:55:21.265054   25488 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
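
kube-vip runs as a static pod on each control-plane node, takes leadership through the plndr-cp-lock lease named in the manifest above, and answers ARP for the virtual IP 192.168.39.254 on eth0 while load-balancing port 8443 across control planes. Once the control plane is up, the VIP can be checked from the guest with something like this sketch (paths and kubeconfig taken from elsewhere in the log):

    # The elected leader should carry the VIP on eth0.
    ip -4 addr show dev eth0 | grep 192.168.39.254
    # The apiserver should answer on the VIP (certificate verification skipped here).
    curl -sk https://192.168.39.254:8443/healthz; echo
    # Leadership is visible in the lease object configured above.
    sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/etc/kubernetes/admin.conf \
        -n kube-system get lease plndr-cp-lock
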
	I0415 23:55:21.265099   25488 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0415 23:55:21.275626   25488 binaries.go:44] Found k8s binaries, skipping transfer
	I0415 23:55:21.275683   25488 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0415 23:55:21.285586   25488 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0415 23:55:21.302311   25488 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0415 23:55:21.318800   25488 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0415 23:55:21.335730   25488 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0415 23:55:21.353231   25488 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0415 23:55:21.357247   25488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0415 23:55:21.369999   25488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 23:55:21.481623   25488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0415 23:55:21.499102   25488 certs.go:68] Setting up /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782 for IP: 192.168.39.41
	I0415 23:55:21.499128   25488 certs.go:194] generating shared ca certs ...
	I0415 23:55:21.499170   25488 certs.go:226] acquiring lock for ca certs: {Name:mkcfa1570e683d94647c63485e1bbb8cf0788316 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:55:21.499354   25488 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key
	I0415 23:55:21.499419   25488 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key
	I0415 23:55:21.499432   25488 certs.go:256] generating profile certs ...
	I0415 23:55:21.499496   25488 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/client.key
	I0415 23:55:21.499515   25488 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/client.crt with IP's: []
	I0415 23:55:21.625470   25488 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/client.crt ...
	I0415 23:55:21.625500   25488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/client.crt: {Name:mk07a742d69663069eab99b3131081c62709ce45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:55:21.625669   25488 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/client.key ...
	I0415 23:55:21.625681   25488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/client.key: {Name:mk3ccbb0986e351adb4bf32ff85ba606547db2f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:55:21.625754   25488 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.key.76980cb2
	I0415 23:55:21.625782   25488 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.crt.76980cb2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.41 192.168.39.254]
	I0415 23:55:21.818979   25488 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.crt.76980cb2 ...
	I0415 23:55:21.819005   25488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.crt.76980cb2: {Name:mk6dcc18833f5ae29fe38a46dbdc51cffe578362 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:55:21.819161   25488 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.key.76980cb2 ...
	I0415 23:55:21.819176   25488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.key.76980cb2: {Name:mkc0beca4f2d7056d3c179d658e3ee6f22c7efc2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:55:21.819241   25488 certs.go:381] copying /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.crt.76980cb2 -> /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.crt
	I0415 23:55:21.819337   25488 certs.go:385] copying /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.key.76980cb2 -> /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.key
	I0415 23:55:21.819397   25488 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/proxy-client.key
	I0415 23:55:21.819413   25488 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/proxy-client.crt with IP's: []
	I0415 23:55:21.908762   25488 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/proxy-client.crt ...
	I0415 23:55:21.908790   25488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/proxy-client.crt: {Name:mkb0c783b3e2cc7bed15cd5d531f54fc8713aa8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:55:21.908929   25488 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/proxy-client.key ...
	I0415 23:55:21.908941   25488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/proxy-client.key: {Name:mke1b4ddd8ce41af36e1be15fd39f5382986b8b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:55:21.909003   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0415 23:55:21.909019   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0415 23:55:21.909029   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0415 23:55:21.909045   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0415 23:55:21.909057   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0415 23:55:21.909076   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0415 23:55:21.909088   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0415 23:55:21.909099   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0415 23:55:21.909142   25488 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem (1338 bytes)
	W0415 23:55:21.909202   25488 certs.go:480] ignoring /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897_empty.pem, impossibly tiny 0 bytes
	I0415 23:55:21.909220   25488 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem (1679 bytes)
	I0415 23:55:21.909245   25488 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem (1082 bytes)
	I0415 23:55:21.909267   25488 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem (1123 bytes)
	I0415 23:55:21.909295   25488 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem (1675 bytes)
	I0415 23:55:21.909333   25488 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem (1708 bytes)
	I0415 23:55:21.909364   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0415 23:55:21.909377   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem -> /usr/share/ca-certificates/14897.pem
	I0415 23:55:21.909389   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem -> /usr/share/ca-certificates/148972.pem
	I0415 23:55:21.910036   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0415 23:55:21.937119   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0415 23:55:21.963049   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0415 23:55:21.988496   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0415 23:55:22.013553   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0415 23:55:22.038484   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0415 23:55:22.064058   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0415 23:55:22.090954   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0415 23:55:22.115549   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0415 23:55:22.139971   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem --> /usr/share/ca-certificates/14897.pem (1338 bytes)
	I0415 23:55:22.164018   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /usr/share/ca-certificates/148972.pem (1708 bytes)
	I0415 23:55:22.189031   25488 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
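
The apiserver certificate generated above has to cover every name and address clients will use, which is why it was signed for the node IP 192.168.39.41, the HA VIP 192.168.39.254, and the service IPs 10.96.0.1/10.0.0.1. After the copy, the SANs can be confirmed on the guest with openssl, using the destination path from the scp line:

    # Show the Subject Alternative Names baked into the copied apiserver cert.
    sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
      | grep -A1 'Subject Alternative Name'
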
	I0415 23:55:22.207248   25488 ssh_runner.go:195] Run: openssl version
	I0415 23:55:22.214233   25488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0415 23:55:22.226285   25488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0415 23:55:22.230711   25488 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0415 23:55:22.230772   25488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0415 23:55:22.236792   25488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0415 23:55:22.247943   25488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14897.pem && ln -fs /usr/share/ca-certificates/14897.pem /etc/ssl/certs/14897.pem"
	I0415 23:55:22.261256   25488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14897.pem
	I0415 23:55:22.265792   25488 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 23:49 /usr/share/ca-certificates/14897.pem
	I0415 23:55:22.265863   25488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14897.pem
	I0415 23:55:22.276630   25488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14897.pem /etc/ssl/certs/51391683.0"
	I0415 23:55:22.290556   25488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148972.pem && ln -fs /usr/share/ca-certificates/148972.pem /etc/ssl/certs/148972.pem"
	I0415 23:55:22.302350   25488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148972.pem
	I0415 23:55:22.307320   25488 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 23:49 /usr/share/ca-certificates/148972.pem
	I0415 23:55:22.307397   25488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148972.pem
	I0415 23:55:22.313425   25488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148972.pem /etc/ssl/certs/3ec20f2e.0"
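
The symlink names above (b5213941.0, 51391683.0, 3ec20f2e.0) are not arbitrary: OpenSSL locates trusted CAs in /etc/ssl/certs by the subject-name hash of each certificate, so minikube computes the hash and links `<hash>.0` to the installed PEM file. The value can be reproduced by hand on the guest:

    # The hash printed here matches the symlink name used in /etc/ssl/certs.
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # e.g. b5213941 -> /etc/ssl/certs/b5213941.0 -> minikubeCA.pem
    ls -l /etc/ssl/certs/b5213941.0
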
	I0415 23:55:22.324867   25488 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0415 23:55:22.329473   25488 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0415 23:55:22.329522   25488 kubeadm.go:391] StartCluster: {Name:ha-694782 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Clust
erName:ha-694782 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 23:55:22.329588   25488 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0415 23:55:22.329635   25488 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0415 23:55:22.367303   25488 cri.go:89] found id: ""
	I0415 23:55:22.367362   25488 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0415 23:55:22.378138   25488 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0415 23:55:22.388429   25488 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0415 23:55:22.398727   25488 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0415 23:55:22.398750   25488 kubeadm.go:156] found existing configuration files:
	
	I0415 23:55:22.398780   25488 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0415 23:55:22.409087   25488 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0415 23:55:22.409141   25488 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0415 23:55:22.419943   25488 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0415 23:55:22.430269   25488 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0415 23:55:22.430316   25488 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0415 23:55:22.440881   25488 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0415 23:55:22.451037   25488 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0415 23:55:22.451088   25488 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0415 23:55:22.461551   25488 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0415 23:55:22.471515   25488 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0415 23:55:22.471571   25488 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0415 23:55:22.482055   25488 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0415 23:55:22.580974   25488 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0415 23:55:22.581134   25488 kubeadm.go:309] [preflight] Running pre-flight checks
	I0415 23:55:22.704124   25488 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0415 23:55:22.704210   25488 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0415 23:55:22.704285   25488 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0415 23:55:22.916668   25488 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0415 23:55:23.065315   25488 out.go:204]   - Generating certificates and keys ...
	I0415 23:55:23.065451   25488 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0415 23:55:23.065581   25488 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0415 23:55:23.109151   25488 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0415 23:55:23.236029   25488 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0415 23:55:23.645284   25488 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0415 23:55:23.764926   25488 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0415 23:55:23.891122   25488 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0415 23:55:23.891393   25488 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-694782 localhost] and IPs [192.168.39.41 127.0.0.1 ::1]
	I0415 23:55:23.983169   25488 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0415 23:55:23.983446   25488 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-694782 localhost] and IPs [192.168.39.41 127.0.0.1 ::1]
	I0415 23:55:24.307111   25488 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0415 23:55:24.400815   25488 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0415 23:55:24.576282   25488 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0415 23:55:24.576563   25488 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0415 23:55:24.700406   25488 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0415 23:55:24.804967   25488 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0415 23:55:24.985122   25488 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0415 23:55:25.159528   25488 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0415 23:55:25.264556   25488 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0415 23:55:25.265189   25488 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0415 23:55:25.267976   25488 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0415 23:55:25.269979   25488 out.go:204]   - Booting up control plane ...
	I0415 23:55:25.270080   25488 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0415 23:55:25.270205   25488 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0415 23:55:25.272050   25488 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0415 23:55:25.288032   25488 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0415 23:55:25.289004   25488 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0415 23:55:25.289052   25488 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0415 23:55:25.417306   25488 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0415 23:55:32.015872   25488 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.601888 seconds
	I0415 23:55:32.028464   25488 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0415 23:55:32.041239   25488 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0415 23:55:32.574913   25488 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0415 23:55:32.575096   25488 kubeadm.go:309] [mark-control-plane] Marking the node ha-694782 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0415 23:55:33.088760   25488 kubeadm.go:309] [bootstrap-token] Using token: yi105q.89mspfuqu9h3wwqy
	I0415 23:55:33.090154   25488 out.go:204]   - Configuring RBAC rules ...
	I0415 23:55:33.090267   25488 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0415 23:55:33.095046   25488 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0415 23:55:33.107365   25488 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0415 23:55:33.112085   25488 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0415 23:55:33.116134   25488 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0415 23:55:33.119289   25488 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0415 23:55:33.133705   25488 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0415 23:55:33.409805   25488 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0415 23:55:33.500389   25488 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0415 23:55:33.506955   25488 kubeadm.go:309] 
	I0415 23:55:33.507021   25488 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0415 23:55:33.507050   25488 kubeadm.go:309] 
	I0415 23:55:33.507137   25488 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0415 23:55:33.507147   25488 kubeadm.go:309] 
	I0415 23:55:33.507168   25488 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0415 23:55:33.507345   25488 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0415 23:55:33.507423   25488 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0415 23:55:33.507436   25488 kubeadm.go:309] 
	I0415 23:55:33.507537   25488 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0415 23:55:33.507575   25488 kubeadm.go:309] 
	I0415 23:55:33.507650   25488 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0415 23:55:33.507661   25488 kubeadm.go:309] 
	I0415 23:55:33.507742   25488 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0415 23:55:33.507861   25488 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0415 23:55:33.507960   25488 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0415 23:55:33.507976   25488 kubeadm.go:309] 
	I0415 23:55:33.508090   25488 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0415 23:55:33.508210   25488 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0415 23:55:33.508226   25488 kubeadm.go:309] 
	I0415 23:55:33.508348   25488 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token yi105q.89mspfuqu9h3wwqy \
	I0415 23:55:33.508520   25488 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b25013486716312e69a135916990864bd549ec06035b8322dd39250241b50fde \
	I0415 23:55:33.508556   25488 kubeadm.go:309] 	--control-plane 
	I0415 23:55:33.508569   25488 kubeadm.go:309] 
	I0415 23:55:33.508681   25488 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0415 23:55:33.508694   25488 kubeadm.go:309] 
	I0415 23:55:33.509634   25488 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token yi105q.89mspfuqu9h3wwqy \
	I0415 23:55:33.509736   25488 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b25013486716312e69a135916990864bd549ec06035b8322dd39250241b50fde 
	I0415 23:55:33.512618   25488 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
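
The --discovery-token-ca-cert-hash printed by kubeadm above is the SHA-256 of the cluster CA's public key; joining nodes use it to pin the CA they fetch from the cluster-info ConfigMap. It can be recomputed on the control plane with the standard openssl pipeline from the kubeadm documentation (sketch; the certificatesDir configured earlier in this log is /var/lib/minikube/certs):

    # Recompute the CA public-key hash used by 'kubeadm join'.
    sudo openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
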
	I0415 23:55:33.512783   25488 cni.go:84] Creating CNI manager for ""
	I0415 23:55:33.512798   25488 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0415 23:55:33.514405   25488 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0415 23:55:33.515657   25488 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0415 23:55:33.525529   25488 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0415 23:55:33.525551   25488 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0415 23:55:33.571052   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0415 23:55:34.013192   25488 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0415 23:55:34.013271   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:55:34.013284   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-694782 minikube.k8s.io/updated_at=2024_04_15T23_55_34_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=e98a202b00bb48daab8bbee2f0c7bcf1556f8388 minikube.k8s.io/name=ha-694782 minikube.k8s.io/primary=true
	I0415 23:55:34.137526   25488 ops.go:34] apiserver oom_adj: -16
	I0415 23:55:34.150039   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:55:34.650474   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:55:35.150065   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:55:35.651024   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:55:36.150487   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:55:36.650186   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:55:37.151071   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:55:37.650142   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:55:38.150779   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:55:38.651084   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:55:39.150734   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:55:39.650932   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:55:40.150234   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:55:40.650954   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:55:41.150314   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:55:41.650413   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:55:42.150322   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:55:42.650903   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:55:43.150869   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:55:43.650234   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:55:44.150706   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:55:44.650232   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:55:45.150496   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:55:45.650166   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:55:45.769665   25488 kubeadm.go:1107] duration metric: took 11.756461432s to wait for elevateKubeSystemPrivileges
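
The repeating `kubectl get sa default` calls above are minikube polling roughly every 500ms until kube-controller-manager has created the default service account in the default namespace, which is its signal that the new control plane is ready for workloads and for the minikube-rbac binding applied a few lines earlier. An equivalent wait loop in shell, using the same binary and kubeconfig paths as the log:

    # Poll until the default service account exists (controller-manager is functioning).
    until sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
    echo "default service account present"
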
	W0415 23:55:45.769708   25488 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0415 23:55:45.769716   25488 kubeadm.go:393] duration metric: took 23.440196109s to StartCluster
	I0415 23:55:45.769735   25488 settings.go:142] acquiring lock: {Name:mk6e42a297b4f7bfb79727f203ae36d752cbb6a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:55:45.769832   25488 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0415 23:55:45.770777   25488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/kubeconfig: {Name:mkbb3b028de7d57df8335e83f6dfa1b0eacb2fb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:55:45.770999   25488 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0415 23:55:45.771009   25488 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0415 23:55:45.771030   25488 start.go:240] waiting for startup goroutines ...
	I0415 23:55:45.771050   25488 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0415 23:55:45.771126   25488 addons.go:69] Setting storage-provisioner=true in profile "ha-694782"
	I0415 23:55:45.771136   25488 addons.go:69] Setting default-storageclass=true in profile "ha-694782"
	I0415 23:55:45.771171   25488 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-694782"
	I0415 23:55:45.771172   25488 addons.go:234] Setting addon storage-provisioner=true in "ha-694782"
	I0415 23:55:45.771282   25488 config.go:182] Loaded profile config "ha-694782": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0415 23:55:45.771319   25488 host.go:66] Checking if "ha-694782" exists ...
	I0415 23:55:45.771665   25488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:55:45.771671   25488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:55:45.771691   25488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:55:45.771708   25488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:55:45.791793   25488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40765
	I0415 23:55:45.791916   25488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35107
	I0415 23:55:45.792241   25488 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:55:45.792278   25488 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:55:45.792783   25488 main.go:141] libmachine: Using API Version  1
	I0415 23:55:45.792802   25488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:55:45.792906   25488 main.go:141] libmachine: Using API Version  1
	I0415 23:55:45.792940   25488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:55:45.793151   25488 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:55:45.793244   25488 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:55:45.793435   25488 main.go:141] libmachine: (ha-694782) Calling .GetState
	I0415 23:55:45.793818   25488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:55:45.793852   25488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:55:45.795550   25488 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0415 23:55:45.795808   25488 kapi.go:59] client config for ha-694782: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/client.crt", KeyFile:"/home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/client.key", CAFile:"/home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5e000), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0415 23:55:45.796247   25488 cert_rotation.go:137] Starting client certificate rotation controller
	I0415 23:55:45.796415   25488 addons.go:234] Setting addon default-storageclass=true in "ha-694782"
	I0415 23:55:45.796466   25488 host.go:66] Checking if "ha-694782" exists ...
	I0415 23:55:45.796748   25488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:55:45.796777   25488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:55:45.808920   25488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43323
	I0415 23:55:45.809423   25488 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:55:45.809893   25488 main.go:141] libmachine: Using API Version  1
	I0415 23:55:45.809916   25488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:55:45.810234   25488 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:55:45.810459   25488 main.go:141] libmachine: (ha-694782) Calling .GetState
	I0415 23:55:45.811133   25488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44435
	I0415 23:55:45.811497   25488 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:55:45.811979   25488 main.go:141] libmachine: Using API Version  1
	I0415 23:55:45.811997   25488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:55:45.812277   25488 main.go:141] libmachine: (ha-694782) Calling .DriverName
	I0415 23:55:45.812342   25488 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:55:45.813905   25488 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0415 23:55:45.812842   25488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:55:45.815625   25488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:55:45.815744   25488 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0415 23:55:45.815765   25488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0415 23:55:45.815788   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0415 23:55:45.819266   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:45.819694   25488 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0415 23:55:45.819724   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:45.819847   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0415 23:55:45.820014   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0415 23:55:45.820165   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0415 23:55:45.820329   25488 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782/id_rsa Username:docker}
	I0415 23:55:45.830753   25488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41909
	I0415 23:55:45.831158   25488 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:55:45.831586   25488 main.go:141] libmachine: Using API Version  1
	I0415 23:55:45.831608   25488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:55:45.831948   25488 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:55:45.832119   25488 main.go:141] libmachine: (ha-694782) Calling .GetState
	I0415 23:55:45.833777   25488 main.go:141] libmachine: (ha-694782) Calling .DriverName
	I0415 23:55:45.834049   25488 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0415 23:55:45.834068   25488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0415 23:55:45.834086   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0415 23:55:45.836607   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:45.837022   25488 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0415 23:55:45.837047   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:45.837227   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0415 23:55:45.837425   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0415 23:55:45.837576   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0415 23:55:45.837708   25488 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782/id_rsa Username:docker}
	I0415 23:55:45.943610   25488 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0415 23:55:46.052142   25488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0415 23:55:46.077445   25488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0415 23:55:46.606986   25488 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0415 23:55:46.915986   25488 main.go:141] libmachine: Making call to close driver server
	I0415 23:55:46.916012   25488 main.go:141] libmachine: (ha-694782) Calling .Close
	I0415 23:55:46.915995   25488 main.go:141] libmachine: Making call to close driver server
	I0415 23:55:46.916065   25488 main.go:141] libmachine: (ha-694782) Calling .Close
	I0415 23:55:46.916297   25488 main.go:141] libmachine: Successfully made call to close driver server
	I0415 23:55:46.916318   25488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 23:55:46.916328   25488 main.go:141] libmachine: Making call to close driver server
	I0415 23:55:46.916336   25488 main.go:141] libmachine: (ha-694782) Calling .Close
	I0415 23:55:46.916303   25488 main.go:141] libmachine: Successfully made call to close driver server
	I0415 23:55:46.916364   25488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 23:55:46.916339   25488 main.go:141] libmachine: (ha-694782) DBG | Closing plugin on server side
	I0415 23:55:46.916376   25488 main.go:141] libmachine: Making call to close driver server
	I0415 23:55:46.916385   25488 main.go:141] libmachine: (ha-694782) Calling .Close
	I0415 23:55:46.916544   25488 main.go:141] libmachine: Successfully made call to close driver server
	I0415 23:55:46.916559   25488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 23:55:46.917639   25488 main.go:141] libmachine: (ha-694782) DBG | Closing plugin on server side
	I0415 23:55:46.917658   25488 main.go:141] libmachine: Successfully made call to close driver server
	I0415 23:55:46.917671   25488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 23:55:46.917772   25488 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0415 23:55:46.917785   25488 round_trippers.go:469] Request Headers:
	I0415 23:55:46.917796   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:55:46.917803   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:55:46.927026   25488 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0415 23:55:46.927833   25488 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0415 23:55:46.927852   25488 round_trippers.go:469] Request Headers:
	I0415 23:55:46.927863   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:55:46.927869   25488 round_trippers.go:473]     Content-Type: application/json
	I0415 23:55:46.927874   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:55:46.935690   25488 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0415 23:55:46.935822   25488 main.go:141] libmachine: Making call to close driver server
	I0415 23:55:46.935856   25488 main.go:141] libmachine: (ha-694782) Calling .Close
	I0415 23:55:46.936139   25488 main.go:141] libmachine: Successfully made call to close driver server
	I0415 23:55:46.936158   25488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 23:55:46.936153   25488 main.go:141] libmachine: (ha-694782) DBG | Closing plugin on server side
	I0415 23:55:46.937979   25488 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0415 23:55:46.939310   25488 addons.go:505] duration metric: took 1.168263927s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0415 23:55:46.939350   25488 start.go:245] waiting for cluster config update ...
	I0415 23:55:46.939369   25488 start.go:254] writing updated cluster config ...
	I0415 23:55:46.941139   25488 out.go:177] 
	I0415 23:55:46.942803   25488 config.go:182] Loaded profile config "ha-694782": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0415 23:55:46.942906   25488 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/config.json ...
	I0415 23:55:46.944817   25488 out.go:177] * Starting "ha-694782-m02" control-plane node in "ha-694782" cluster
	I0415 23:55:46.946354   25488 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0415 23:55:46.946380   25488 cache.go:56] Caching tarball of preloaded images
	I0415 23:55:46.946465   25488 preload.go:173] Found /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0415 23:55:46.946480   25488 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0415 23:55:46.946572   25488 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/config.json ...
	I0415 23:55:46.946766   25488 start.go:360] acquireMachinesLock for ha-694782-m02: {Name:mk92bff49461487f8cebf2747ccf61ccb9c772a2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 23:55:46.946825   25488 start.go:364] duration metric: took 34.719µs to acquireMachinesLock for "ha-694782-m02"
	I0415 23:55:46.946852   25488 start.go:93] Provisioning new machine with config: &{Name:ha-694782 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-694782 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0415 23:55:46.946951   25488 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0415 23:55:46.948709   25488 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0415 23:55:46.948795   25488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:55:46.948830   25488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:55:46.963384   25488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37699
	I0415 23:55:46.963856   25488 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:55:46.964274   25488 main.go:141] libmachine: Using API Version  1
	I0415 23:55:46.964298   25488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:55:46.964655   25488 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:55:46.964834   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetMachineName
	I0415 23:55:46.964984   25488 main.go:141] libmachine: (ha-694782-m02) Calling .DriverName
	I0415 23:55:46.965207   25488 start.go:159] libmachine.API.Create for "ha-694782" (driver="kvm2")
	I0415 23:55:46.965231   25488 client.go:168] LocalClient.Create starting
	I0415 23:55:46.965266   25488 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem
	I0415 23:55:46.965309   25488 main.go:141] libmachine: Decoding PEM data...
	I0415 23:55:46.965328   25488 main.go:141] libmachine: Parsing certificate...
	I0415 23:55:46.965412   25488 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem
	I0415 23:55:46.965438   25488 main.go:141] libmachine: Decoding PEM data...
	I0415 23:55:46.965455   25488 main.go:141] libmachine: Parsing certificate...
	I0415 23:55:46.965481   25488 main.go:141] libmachine: Running pre-create checks...
	I0415 23:55:46.965492   25488 main.go:141] libmachine: (ha-694782-m02) Calling .PreCreateCheck
	I0415 23:55:46.965652   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetConfigRaw
	I0415 23:55:46.966051   25488 main.go:141] libmachine: Creating machine...
	I0415 23:55:46.966067   25488 main.go:141] libmachine: (ha-694782-m02) Calling .Create
	I0415 23:55:46.966197   25488 main.go:141] libmachine: (ha-694782-m02) Creating KVM machine...
	I0415 23:55:46.967580   25488 main.go:141] libmachine: (ha-694782-m02) DBG | found existing default KVM network
	I0415 23:55:46.967731   25488 main.go:141] libmachine: (ha-694782-m02) DBG | found existing private KVM network mk-ha-694782
	I0415 23:55:46.967895   25488 main.go:141] libmachine: (ha-694782-m02) Setting up store path in /home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m02 ...
	I0415 23:55:46.967930   25488 main.go:141] libmachine: (ha-694782-m02) Building disk image from file:///home/jenkins/minikube-integration/18647-7542/.minikube/cache/iso/amd64/minikube-v1.33.0-1713175573-18634-amd64.iso
	I0415 23:55:46.968003   25488 main.go:141] libmachine: (ha-694782-m02) DBG | I0415 23:55:46.967897   25867 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18647-7542/.minikube
	I0415 23:55:46.968119   25488 main.go:141] libmachine: (ha-694782-m02) Downloading /home/jenkins/minikube-integration/18647-7542/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18647-7542/.minikube/cache/iso/amd64/minikube-v1.33.0-1713175573-18634-amd64.iso...
	I0415 23:55:47.182385   25488 main.go:141] libmachine: (ha-694782-m02) DBG | I0415 23:55:47.182278   25867 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m02/id_rsa...
	I0415 23:55:47.311844   25488 main.go:141] libmachine: (ha-694782-m02) DBG | I0415 23:55:47.311702   25867 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m02/ha-694782-m02.rawdisk...
	I0415 23:55:47.311880   25488 main.go:141] libmachine: (ha-694782-m02) DBG | Writing magic tar header
	I0415 23:55:47.311896   25488 main.go:141] libmachine: (ha-694782-m02) DBG | Writing SSH key tar header
	I0415 23:55:47.311909   25488 main.go:141] libmachine: (ha-694782-m02) DBG | I0415 23:55:47.311847   25867 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m02 ...
	I0415 23:55:47.312081   25488 main.go:141] libmachine: (ha-694782-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m02
	I0415 23:55:47.312102   25488 main.go:141] libmachine: (ha-694782-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18647-7542/.minikube/machines
	I0415 23:55:47.312116   25488 main.go:141] libmachine: (ha-694782-m02) Setting executable bit set on /home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m02 (perms=drwx------)
	I0415 23:55:47.312140   25488 main.go:141] libmachine: (ha-694782-m02) Setting executable bit set on /home/jenkins/minikube-integration/18647-7542/.minikube/machines (perms=drwxr-xr-x)
	I0415 23:55:47.312156   25488 main.go:141] libmachine: (ha-694782-m02) Setting executable bit set on /home/jenkins/minikube-integration/18647-7542/.minikube (perms=drwxr-xr-x)
	I0415 23:55:47.312174   25488 main.go:141] libmachine: (ha-694782-m02) Setting executable bit set on /home/jenkins/minikube-integration/18647-7542 (perms=drwxrwxr-x)
	I0415 23:55:47.312193   25488 main.go:141] libmachine: (ha-694782-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18647-7542/.minikube
	I0415 23:55:47.312207   25488 main.go:141] libmachine: (ha-694782-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0415 23:55:47.312223   25488 main.go:141] libmachine: (ha-694782-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0415 23:55:47.312235   25488 main.go:141] libmachine: (ha-694782-m02) Creating domain...
	I0415 23:55:47.312253   25488 main.go:141] libmachine: (ha-694782-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18647-7542
	I0415 23:55:47.312272   25488 main.go:141] libmachine: (ha-694782-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0415 23:55:47.312298   25488 main.go:141] libmachine: (ha-694782-m02) DBG | Checking permissions on dir: /home/jenkins
	I0415 23:55:47.312335   25488 main.go:141] libmachine: (ha-694782-m02) DBG | Checking permissions on dir: /home
	I0415 23:55:47.312351   25488 main.go:141] libmachine: (ha-694782-m02) DBG | Skipping /home - not owner
	I0415 23:55:47.313126   25488 main.go:141] libmachine: (ha-694782-m02) define libvirt domain using xml: 
	I0415 23:55:47.313153   25488 main.go:141] libmachine: (ha-694782-m02) <domain type='kvm'>
	I0415 23:55:47.313180   25488 main.go:141] libmachine: (ha-694782-m02)   <name>ha-694782-m02</name>
	I0415 23:55:47.313192   25488 main.go:141] libmachine: (ha-694782-m02)   <memory unit='MiB'>2200</memory>
	I0415 23:55:47.313204   25488 main.go:141] libmachine: (ha-694782-m02)   <vcpu>2</vcpu>
	I0415 23:55:47.313210   25488 main.go:141] libmachine: (ha-694782-m02)   <features>
	I0415 23:55:47.313221   25488 main.go:141] libmachine: (ha-694782-m02)     <acpi/>
	I0415 23:55:47.313231   25488 main.go:141] libmachine: (ha-694782-m02)     <apic/>
	I0415 23:55:47.313243   25488 main.go:141] libmachine: (ha-694782-m02)     <pae/>
	I0415 23:55:47.313252   25488 main.go:141] libmachine: (ha-694782-m02)     
	I0415 23:55:47.313265   25488 main.go:141] libmachine: (ha-694782-m02)   </features>
	I0415 23:55:47.313280   25488 main.go:141] libmachine: (ha-694782-m02)   <cpu mode='host-passthrough'>
	I0415 23:55:47.313305   25488 main.go:141] libmachine: (ha-694782-m02)   
	I0415 23:55:47.313317   25488 main.go:141] libmachine: (ha-694782-m02)   </cpu>
	I0415 23:55:47.313325   25488 main.go:141] libmachine: (ha-694782-m02)   <os>
	I0415 23:55:47.313336   25488 main.go:141] libmachine: (ha-694782-m02)     <type>hvm</type>
	I0415 23:55:47.313347   25488 main.go:141] libmachine: (ha-694782-m02)     <boot dev='cdrom'/>
	I0415 23:55:47.313360   25488 main.go:141] libmachine: (ha-694782-m02)     <boot dev='hd'/>
	I0415 23:55:47.313369   25488 main.go:141] libmachine: (ha-694782-m02)     <bootmenu enable='no'/>
	I0415 23:55:47.313378   25488 main.go:141] libmachine: (ha-694782-m02)   </os>
	I0415 23:55:47.313386   25488 main.go:141] libmachine: (ha-694782-m02)   <devices>
	I0415 23:55:47.313397   25488 main.go:141] libmachine: (ha-694782-m02)     <disk type='file' device='cdrom'>
	I0415 23:55:47.313414   25488 main.go:141] libmachine: (ha-694782-m02)       <source file='/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m02/boot2docker.iso'/>
	I0415 23:55:47.313425   25488 main.go:141] libmachine: (ha-694782-m02)       <target dev='hdc' bus='scsi'/>
	I0415 23:55:47.313449   25488 main.go:141] libmachine: (ha-694782-m02)       <readonly/>
	I0415 23:55:47.313467   25488 main.go:141] libmachine: (ha-694782-m02)     </disk>
	I0415 23:55:47.313496   25488 main.go:141] libmachine: (ha-694782-m02)     <disk type='file' device='disk'>
	I0415 23:55:47.313521   25488 main.go:141] libmachine: (ha-694782-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0415 23:55:47.313540   25488 main.go:141] libmachine: (ha-694782-m02)       <source file='/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m02/ha-694782-m02.rawdisk'/>
	I0415 23:55:47.313552   25488 main.go:141] libmachine: (ha-694782-m02)       <target dev='hda' bus='virtio'/>
	I0415 23:55:47.313565   25488 main.go:141] libmachine: (ha-694782-m02)     </disk>
	I0415 23:55:47.313577   25488 main.go:141] libmachine: (ha-694782-m02)     <interface type='network'>
	I0415 23:55:47.313591   25488 main.go:141] libmachine: (ha-694782-m02)       <source network='mk-ha-694782'/>
	I0415 23:55:47.313606   25488 main.go:141] libmachine: (ha-694782-m02)       <model type='virtio'/>
	I0415 23:55:47.313619   25488 main.go:141] libmachine: (ha-694782-m02)     </interface>
	I0415 23:55:47.313630   25488 main.go:141] libmachine: (ha-694782-m02)     <interface type='network'>
	I0415 23:55:47.313644   25488 main.go:141] libmachine: (ha-694782-m02)       <source network='default'/>
	I0415 23:55:47.313655   25488 main.go:141] libmachine: (ha-694782-m02)       <model type='virtio'/>
	I0415 23:55:47.313669   25488 main.go:141] libmachine: (ha-694782-m02)     </interface>
	I0415 23:55:47.313684   25488 main.go:141] libmachine: (ha-694782-m02)     <serial type='pty'>
	I0415 23:55:47.313698   25488 main.go:141] libmachine: (ha-694782-m02)       <target port='0'/>
	I0415 23:55:47.313708   25488 main.go:141] libmachine: (ha-694782-m02)     </serial>
	I0415 23:55:47.313725   25488 main.go:141] libmachine: (ha-694782-m02)     <console type='pty'>
	I0415 23:55:47.313737   25488 main.go:141] libmachine: (ha-694782-m02)       <target type='serial' port='0'/>
	I0415 23:55:47.313749   25488 main.go:141] libmachine: (ha-694782-m02)     </console>
	I0415 23:55:47.313764   25488 main.go:141] libmachine: (ha-694782-m02)     <rng model='virtio'>
	I0415 23:55:47.313778   25488 main.go:141] libmachine: (ha-694782-m02)       <backend model='random'>/dev/random</backend>
	I0415 23:55:47.313787   25488 main.go:141] libmachine: (ha-694782-m02)     </rng>
	I0415 23:55:47.313797   25488 main.go:141] libmachine: (ha-694782-m02)     
	I0415 23:55:47.313807   25488 main.go:141] libmachine: (ha-694782-m02)     
	I0415 23:55:47.313816   25488 main.go:141] libmachine: (ha-694782-m02)   </devices>
	I0415 23:55:47.313827   25488 main.go:141] libmachine: (ha-694782-m02) </domain>
	I0415 23:55:47.313837   25488 main.go:141] libmachine: (ha-694782-m02) 
	I0415 23:55:47.320532   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:46:5c:22 in network default
	I0415 23:55:47.321104   25488 main.go:141] libmachine: (ha-694782-m02) Ensuring networks are active...
	I0415 23:55:47.321126   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:55:47.321879   25488 main.go:141] libmachine: (ha-694782-m02) Ensuring network default is active
	I0415 23:55:47.322191   25488 main.go:141] libmachine: (ha-694782-m02) Ensuring network mk-ha-694782 is active
	I0415 23:55:47.322531   25488 main.go:141] libmachine: (ha-694782-m02) Getting domain xml...
	I0415 23:55:47.323224   25488 main.go:141] libmachine: (ha-694782-m02) Creating domain...
	I0415 23:55:48.527079   25488 main.go:141] libmachine: (ha-694782-m02) Waiting to get IP...
	I0415 23:55:48.527975   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:55:48.528406   25488 main.go:141] libmachine: (ha-694782-m02) DBG | unable to find current IP address of domain ha-694782-m02 in network mk-ha-694782
	I0415 23:55:48.528454   25488 main.go:141] libmachine: (ha-694782-m02) DBG | I0415 23:55:48.528385   25867 retry.go:31] will retry after 193.593289ms: waiting for machine to come up
	I0415 23:55:48.723860   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:55:48.724293   25488 main.go:141] libmachine: (ha-694782-m02) DBG | unable to find current IP address of domain ha-694782-m02 in network mk-ha-694782
	I0415 23:55:48.724322   25488 main.go:141] libmachine: (ha-694782-m02) DBG | I0415 23:55:48.724246   25867 retry.go:31] will retry after 318.142991ms: waiting for machine to come up
	I0415 23:55:49.043718   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:55:49.044212   25488 main.go:141] libmachine: (ha-694782-m02) DBG | unable to find current IP address of domain ha-694782-m02 in network mk-ha-694782
	I0415 23:55:49.044246   25488 main.go:141] libmachine: (ha-694782-m02) DBG | I0415 23:55:49.044160   25867 retry.go:31] will retry after 317.519425ms: waiting for machine to come up
	I0415 23:55:49.363740   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:55:49.364162   25488 main.go:141] libmachine: (ha-694782-m02) DBG | unable to find current IP address of domain ha-694782-m02 in network mk-ha-694782
	I0415 23:55:49.364190   25488 main.go:141] libmachine: (ha-694782-m02) DBG | I0415 23:55:49.364128   25867 retry.go:31] will retry after 499.917098ms: waiting for machine to come up
	I0415 23:55:49.865951   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:55:49.866421   25488 main.go:141] libmachine: (ha-694782-m02) DBG | unable to find current IP address of domain ha-694782-m02 in network mk-ha-694782
	I0415 23:55:49.866457   25488 main.go:141] libmachine: (ha-694782-m02) DBG | I0415 23:55:49.866376   25867 retry.go:31] will retry after 528.145662ms: waiting for machine to come up
	I0415 23:55:50.397290   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:55:50.397725   25488 main.go:141] libmachine: (ha-694782-m02) DBG | unable to find current IP address of domain ha-694782-m02 in network mk-ha-694782
	I0415 23:55:50.397748   25488 main.go:141] libmachine: (ha-694782-m02) DBG | I0415 23:55:50.397678   25867 retry.go:31] will retry after 814.440825ms: waiting for machine to come up
	I0415 23:55:51.213197   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:55:51.213666   25488 main.go:141] libmachine: (ha-694782-m02) DBG | unable to find current IP address of domain ha-694782-m02 in network mk-ha-694782
	I0415 23:55:51.213699   25488 main.go:141] libmachine: (ha-694782-m02) DBG | I0415 23:55:51.213609   25867 retry.go:31] will retry after 1.179244943s: waiting for machine to come up
	I0415 23:55:52.394177   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:55:52.394631   25488 main.go:141] libmachine: (ha-694782-m02) DBG | unable to find current IP address of domain ha-694782-m02 in network mk-ha-694782
	I0415 23:55:52.394659   25488 main.go:141] libmachine: (ha-694782-m02) DBG | I0415 23:55:52.394599   25867 retry.go:31] will retry after 898.22342ms: waiting for machine to come up
	I0415 23:55:53.294395   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:55:53.294869   25488 main.go:141] libmachine: (ha-694782-m02) DBG | unable to find current IP address of domain ha-694782-m02 in network mk-ha-694782
	I0415 23:55:53.294886   25488 main.go:141] libmachine: (ha-694782-m02) DBG | I0415 23:55:53.294828   25867 retry.go:31] will retry after 1.437791451s: waiting for machine to come up
	I0415 23:55:54.734352   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:55:54.734808   25488 main.go:141] libmachine: (ha-694782-m02) DBG | unable to find current IP address of domain ha-694782-m02 in network mk-ha-694782
	I0415 23:55:54.734836   25488 main.go:141] libmachine: (ha-694782-m02) DBG | I0415 23:55:54.734768   25867 retry.go:31] will retry after 1.739624525s: waiting for machine to come up
	I0415 23:55:56.475588   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:55:56.475989   25488 main.go:141] libmachine: (ha-694782-m02) DBG | unable to find current IP address of domain ha-694782-m02 in network mk-ha-694782
	I0415 23:55:56.476012   25488 main.go:141] libmachine: (ha-694782-m02) DBG | I0415 23:55:56.475949   25867 retry.go:31] will retry after 2.659330494s: waiting for machine to come up
	I0415 23:55:59.137388   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:55:59.137822   25488 main.go:141] libmachine: (ha-694782-m02) DBG | unable to find current IP address of domain ha-694782-m02 in network mk-ha-694782
	I0415 23:55:59.137850   25488 main.go:141] libmachine: (ha-694782-m02) DBG | I0415 23:55:59.137783   25867 retry.go:31] will retry after 3.160909712s: waiting for machine to come up
	I0415 23:56:02.299883   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:02.300261   25488 main.go:141] libmachine: (ha-694782-m02) DBG | unable to find current IP address of domain ha-694782-m02 in network mk-ha-694782
	I0415 23:56:02.300290   25488 main.go:141] libmachine: (ha-694782-m02) DBG | I0415 23:56:02.300217   25867 retry.go:31] will retry after 4.421664688s: waiting for machine to come up
	I0415 23:56:06.726660   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:06.727082   25488 main.go:141] libmachine: (ha-694782-m02) DBG | unable to find current IP address of domain ha-694782-m02 in network mk-ha-694782
	I0415 23:56:06.727103   25488 main.go:141] libmachine: (ha-694782-m02) DBG | I0415 23:56:06.727039   25867 retry.go:31] will retry after 3.674569121s: waiting for machine to come up
	I0415 23:56:10.405303   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:10.405819   25488 main.go:141] libmachine: (ha-694782-m02) Found IP for machine: 192.168.39.42
	I0415 23:56:10.405840   25488 main.go:141] libmachine: (ha-694782-m02) Reserving static IP address...
	I0415 23:56:10.405852   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has current primary IP address 192.168.39.42 and MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:10.406216   25488 main.go:141] libmachine: (ha-694782-m02) DBG | unable to find host DHCP lease matching {name: "ha-694782-m02", mac: "52:54:00:70:e2:c3", ip: "192.168.39.42"} in network mk-ha-694782
	I0415 23:56:10.475372   25488 main.go:141] libmachine: (ha-694782-m02) DBG | Getting to WaitForSSH function...
	I0415 23:56:10.475398   25488 main.go:141] libmachine: (ha-694782-m02) Reserved static IP address: 192.168.39.42
	I0415 23:56:10.475417   25488 main.go:141] libmachine: (ha-694782-m02) Waiting for SSH to be available...
	I0415 23:56:10.477891   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:10.478298   25488 main.go:141] libmachine: (ha-694782-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:e2:c3", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:56:01 +0000 UTC Type:0 Mac:52:54:00:70:e2:c3 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:minikube Clientid:01:52:54:00:70:e2:c3}
	I0415 23:56:10.478336   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined IP address 192.168.39.42 and MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:10.478415   25488 main.go:141] libmachine: (ha-694782-m02) DBG | Using SSH client type: external
	I0415 23:56:10.478446   25488 main.go:141] libmachine: (ha-694782-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m02/id_rsa (-rw-------)
	I0415 23:56:10.478488   25488 main.go:141] libmachine: (ha-694782-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.42 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0415 23:56:10.478499   25488 main.go:141] libmachine: (ha-694782-m02) DBG | About to run SSH command:
	I0415 23:56:10.478509   25488 main.go:141] libmachine: (ha-694782-m02) DBG | exit 0
	I0415 23:56:10.609257   25488 main.go:141] libmachine: (ha-694782-m02) DBG | SSH cmd err, output: <nil>: 
	I0415 23:56:10.609562   25488 main.go:141] libmachine: (ha-694782-m02) KVM machine creation complete!
	I0415 23:56:10.609872   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetConfigRaw
	I0415 23:56:10.610356   25488 main.go:141] libmachine: (ha-694782-m02) Calling .DriverName
	I0415 23:56:10.610558   25488 main.go:141] libmachine: (ha-694782-m02) Calling .DriverName
	I0415 23:56:10.610818   25488 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0415 23:56:10.610838   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetState
	I0415 23:56:10.612022   25488 main.go:141] libmachine: Detecting operating system of created instance...
	I0415 23:56:10.612036   25488 main.go:141] libmachine: Waiting for SSH to be available...
	I0415 23:56:10.612041   25488 main.go:141] libmachine: Getting to WaitForSSH function...
	I0415 23:56:10.612046   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHHostname
	I0415 23:56:10.614589   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:10.614941   25488 main.go:141] libmachine: (ha-694782-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:e2:c3", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:56:01 +0000 UTC Type:0 Mac:52:54:00:70:e2:c3 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ha-694782-m02 Clientid:01:52:54:00:70:e2:c3}
	I0415 23:56:10.614962   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined IP address 192.168.39.42 and MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:10.615077   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHPort
	I0415 23:56:10.615263   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHKeyPath
	I0415 23:56:10.615431   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHKeyPath
	I0415 23:56:10.615643   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHUsername
	I0415 23:56:10.615800   25488 main.go:141] libmachine: Using SSH client type: native
	I0415 23:56:10.616034   25488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.42 22 <nil> <nil>}
	I0415 23:56:10.616046   25488 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0415 23:56:10.728532   25488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0415 23:56:10.728560   25488 main.go:141] libmachine: Detecting the provisioner...
	I0415 23:56:10.728571   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHHostname
	I0415 23:56:10.731209   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:10.731545   25488 main.go:141] libmachine: (ha-694782-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:e2:c3", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:56:01 +0000 UTC Type:0 Mac:52:54:00:70:e2:c3 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ha-694782-m02 Clientid:01:52:54:00:70:e2:c3}
	I0415 23:56:10.731572   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined IP address 192.168.39.42 and MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:10.731749   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHPort
	I0415 23:56:10.731917   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHKeyPath
	I0415 23:56:10.732090   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHKeyPath
	I0415 23:56:10.732218   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHUsername
	I0415 23:56:10.732394   25488 main.go:141] libmachine: Using SSH client type: native
	I0415 23:56:10.732556   25488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.42 22 <nil> <nil>}
	I0415 23:56:10.732567   25488 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0415 23:56:10.845527   25488 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0415 23:56:10.845619   25488 main.go:141] libmachine: found compatible host: buildroot
	I0415 23:56:10.845630   25488 main.go:141] libmachine: Provisioning with buildroot...
	I0415 23:56:10.845639   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetMachineName
	I0415 23:56:10.845864   25488 buildroot.go:166] provisioning hostname "ha-694782-m02"
	I0415 23:56:10.845889   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetMachineName
	I0415 23:56:10.846065   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHHostname
	I0415 23:56:10.848602   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:10.848973   25488 main.go:141] libmachine: (ha-694782-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:e2:c3", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:56:01 +0000 UTC Type:0 Mac:52:54:00:70:e2:c3 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ha-694782-m02 Clientid:01:52:54:00:70:e2:c3}
	I0415 23:56:10.848997   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined IP address 192.168.39.42 and MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:10.849171   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHPort
	I0415 23:56:10.849337   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHKeyPath
	I0415 23:56:10.849524   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHKeyPath
	I0415 23:56:10.849661   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHUsername
	I0415 23:56:10.849812   25488 main.go:141] libmachine: Using SSH client type: native
	I0415 23:56:10.849998   25488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.42 22 <nil> <nil>}
	I0415 23:56:10.850014   25488 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-694782-m02 && echo "ha-694782-m02" | sudo tee /etc/hostname
	I0415 23:56:10.975678   25488 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-694782-m02
	
	I0415 23:56:10.975708   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHHostname
	I0415 23:56:10.978348   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:10.978637   25488 main.go:141] libmachine: (ha-694782-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:e2:c3", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:56:01 +0000 UTC Type:0 Mac:52:54:00:70:e2:c3 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ha-694782-m02 Clientid:01:52:54:00:70:e2:c3}
	I0415 23:56:10.978659   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined IP address 192.168.39.42 and MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:10.978867   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHPort
	I0415 23:56:10.979058   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHKeyPath
	I0415 23:56:10.979231   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHKeyPath
	I0415 23:56:10.979356   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHUsername
	I0415 23:56:10.979495   25488 main.go:141] libmachine: Using SSH client type: native
	I0415 23:56:10.979652   25488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.42 22 <nil> <nil>}
	I0415 23:56:10.979668   25488 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-694782-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-694782-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-694782-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0415 23:56:11.102055   25488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0415 23:56:11.102095   25488 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18647-7542/.minikube CaCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18647-7542/.minikube}
	I0415 23:56:11.102122   25488 buildroot.go:174] setting up certificates
	I0415 23:56:11.102134   25488 provision.go:84] configureAuth start
	I0415 23:56:11.102154   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetMachineName
	I0415 23:56:11.102408   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetIP
	I0415 23:56:11.104527   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:11.104897   25488 main.go:141] libmachine: (ha-694782-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:e2:c3", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:56:01 +0000 UTC Type:0 Mac:52:54:00:70:e2:c3 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ha-694782-m02 Clientid:01:52:54:00:70:e2:c3}
	I0415 23:56:11.104926   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined IP address 192.168.39.42 and MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:11.105051   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHHostname
	I0415 23:56:11.107090   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:11.107380   25488 main.go:141] libmachine: (ha-694782-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:e2:c3", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:56:01 +0000 UTC Type:0 Mac:52:54:00:70:e2:c3 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ha-694782-m02 Clientid:01:52:54:00:70:e2:c3}
	I0415 23:56:11.107410   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined IP address 192.168.39.42 and MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:11.107559   25488 provision.go:143] copyHostCerts
	I0415 23:56:11.107583   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem
	I0415 23:56:11.107620   25488 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem, removing ...
	I0415 23:56:11.107632   25488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem
	I0415 23:56:11.107720   25488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem (1082 bytes)
	I0415 23:56:11.107800   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem
	I0415 23:56:11.107828   25488 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem, removing ...
	I0415 23:56:11.107842   25488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem
	I0415 23:56:11.107871   25488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem (1123 bytes)
	I0415 23:56:11.107916   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem
	I0415 23:56:11.107932   25488 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem, removing ...
	I0415 23:56:11.107938   25488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem
	I0415 23:56:11.107958   25488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem (1675 bytes)
	I0415 23:56:11.108003   25488 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem org=jenkins.ha-694782-m02 san=[127.0.0.1 192.168.39.42 ha-694782-m02 localhost minikube]
	I0415 23:56:11.232790   25488 provision.go:177] copyRemoteCerts
	I0415 23:56:11.232852   25488 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0415 23:56:11.232878   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHHostname
	I0415 23:56:11.235484   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:11.235814   25488 main.go:141] libmachine: (ha-694782-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:e2:c3", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:56:01 +0000 UTC Type:0 Mac:52:54:00:70:e2:c3 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ha-694782-m02 Clientid:01:52:54:00:70:e2:c3}
	I0415 23:56:11.235845   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined IP address 192.168.39.42 and MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:11.236089   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHPort
	I0415 23:56:11.236280   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHKeyPath
	I0415 23:56:11.236442   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHUsername
	I0415 23:56:11.236566   25488 sshutil.go:53] new ssh client: &{IP:192.168.39.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m02/id_rsa Username:docker}
	I0415 23:56:11.323731   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0415 23:56:11.323786   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0415 23:56:11.352534   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0415 23:56:11.352600   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0415 23:56:11.378051   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0415 23:56:11.378103   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0415 23:56:11.402829   25488 provision.go:87] duration metric: took 300.678289ms to configureAuth
	I0415 23:56:11.402859   25488 buildroot.go:189] setting minikube options for container-runtime
	I0415 23:56:11.403049   25488 config.go:182] Loaded profile config "ha-694782": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0415 23:56:11.403116   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHHostname
	I0415 23:56:11.405743   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:11.406136   25488 main.go:141] libmachine: (ha-694782-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:e2:c3", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:56:01 +0000 UTC Type:0 Mac:52:54:00:70:e2:c3 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ha-694782-m02 Clientid:01:52:54:00:70:e2:c3}
	I0415 23:56:11.406155   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined IP address 192.168.39.42 and MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:11.406414   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHPort
	I0415 23:56:11.406588   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHKeyPath
	I0415 23:56:11.406756   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHKeyPath
	I0415 23:56:11.406891   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHUsername
	I0415 23:56:11.407043   25488 main.go:141] libmachine: Using SSH client type: native
	I0415 23:56:11.407236   25488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.42 22 <nil> <nil>}
	I0415 23:56:11.407257   25488 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0415 23:56:11.677645   25488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
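The literal "%!s(MISSING)" in the logged command above is Go's fmt package marking a %s verb that had no matching argument when the command template was echoed through the logger; the output on these lines shows the option file was written correctly on the host, so only the log rendering is affected. A one-line demonstration of that fmt behavior:

package main

import "fmt"

func main() {
	// Prints: value=%!s(MISSING) — fmt's marker for a verb with no argument.
	fmt.Println(fmt.Sprintf("value=%s"))
}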
	
	I0415 23:56:11.677674   25488 main.go:141] libmachine: Checking connection to Docker...
	I0415 23:56:11.677684   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetURL
	I0415 23:56:11.678899   25488 main.go:141] libmachine: (ha-694782-m02) DBG | Using libvirt version 6000000
	I0415 23:56:11.681174   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:11.681528   25488 main.go:141] libmachine: (ha-694782-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:e2:c3", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:56:01 +0000 UTC Type:0 Mac:52:54:00:70:e2:c3 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ha-694782-m02 Clientid:01:52:54:00:70:e2:c3}
	I0415 23:56:11.681561   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined IP address 192.168.39.42 and MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:11.681718   25488 main.go:141] libmachine: Docker is up and running!
	I0415 23:56:11.681731   25488 main.go:141] libmachine: Reticulating splines...
	I0415 23:56:11.681738   25488 client.go:171] duration metric: took 24.716500263s to LocalClient.Create
	I0415 23:56:11.681758   25488 start.go:167] duration metric: took 24.716551938s to libmachine.API.Create "ha-694782"
	I0415 23:56:11.681770   25488 start.go:293] postStartSetup for "ha-694782-m02" (driver="kvm2")
	I0415 23:56:11.681783   25488 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0415 23:56:11.681817   25488 main.go:141] libmachine: (ha-694782-m02) Calling .DriverName
	I0415 23:56:11.682041   25488 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0415 23:56:11.682063   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHHostname
	I0415 23:56:11.684101   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:11.684399   25488 main.go:141] libmachine: (ha-694782-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:e2:c3", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:56:01 +0000 UTC Type:0 Mac:52:54:00:70:e2:c3 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ha-694782-m02 Clientid:01:52:54:00:70:e2:c3}
	I0415 23:56:11.684429   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined IP address 192.168.39.42 and MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:11.684525   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHPort
	I0415 23:56:11.684707   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHKeyPath
	I0415 23:56:11.684885   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHUsername
	I0415 23:56:11.685039   25488 sshutil.go:53] new ssh client: &{IP:192.168.39.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m02/id_rsa Username:docker}
	I0415 23:56:11.771783   25488 ssh_runner.go:195] Run: cat /etc/os-release
	I0415 23:56:11.776091   25488 info.go:137] Remote host: Buildroot 2023.02.9
	I0415 23:56:11.776115   25488 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/addons for local assets ...
	I0415 23:56:11.776185   25488 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/files for local assets ...
	I0415 23:56:11.776252   25488 filesync.go:149] local asset: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem -> 148972.pem in /etc/ssl/certs
	I0415 23:56:11.776262   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem -> /etc/ssl/certs/148972.pem
	I0415 23:56:11.776340   25488 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0415 23:56:11.785585   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /etc/ssl/certs/148972.pem (1708 bytes)
	I0415 23:56:11.809617   25488 start.go:296] duration metric: took 127.83471ms for postStartSetup
	I0415 23:56:11.809670   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetConfigRaw
	I0415 23:56:11.810165   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetIP
	I0415 23:56:11.812618   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:11.813005   25488 main.go:141] libmachine: (ha-694782-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:e2:c3", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:56:01 +0000 UTC Type:0 Mac:52:54:00:70:e2:c3 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ha-694782-m02 Clientid:01:52:54:00:70:e2:c3}
	I0415 23:56:11.813033   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined IP address 192.168.39.42 and MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:11.813279   25488 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/config.json ...
	I0415 23:56:11.813453   25488 start.go:128] duration metric: took 24.866488081s to createHost
	I0415 23:56:11.813475   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHHostname
	I0415 23:56:11.815844   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:11.816169   25488 main.go:141] libmachine: (ha-694782-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:e2:c3", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:56:01 +0000 UTC Type:0 Mac:52:54:00:70:e2:c3 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ha-694782-m02 Clientid:01:52:54:00:70:e2:c3}
	I0415 23:56:11.816189   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined IP address 192.168.39.42 and MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:11.816311   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHPort
	I0415 23:56:11.816472   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHKeyPath
	I0415 23:56:11.816606   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHKeyPath
	I0415 23:56:11.816743   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHUsername
	I0415 23:56:11.816901   25488 main.go:141] libmachine: Using SSH client type: native
	I0415 23:56:11.817051   25488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.42 22 <nil> <nil>}
	I0415 23:56:11.817061   25488 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0415 23:56:11.929852   25488 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713225371.905156578
	
	I0415 23:56:11.929877   25488 fix.go:216] guest clock: 1713225371.905156578
	I0415 23:56:11.929884   25488 fix.go:229] Guest: 2024-04-15 23:56:11.905156578 +0000 UTC Remote: 2024-04-15 23:56:11.813463577 +0000 UTC m=+81.253097902 (delta=91.693001ms)
	I0415 23:56:11.929898   25488 fix.go:200] guest clock delta is within tolerance: 91.693001ms
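The fix.go lines above compare the guest clock probe (evidently `date +%s.%N`, mangled in the log by the same missing-argument formatting) against the host clock and accept the machine when the delta is inside tolerance. A minimal sketch of that comparison, assuming the guest returns Unix seconds with a nanosecond fraction as shown:

package clockcheck

import (
	"fmt"
	"strconv"
	"time"
)

// withinTolerance parses a "seconds.nanoseconds" timestamp from the guest
// (e.g. "1713225371.905156578") and checks its offset from the local clock.
// Float parsing loses sub-microsecond precision, which is fine for this check.
func withinTolerance(guest string, tolerance time.Duration) (time.Duration, bool, error) {
	secs, err := strconv.ParseFloat(guest, 64)
	if err != nil {
		return 0, false, fmt.Errorf("parse guest clock: %w", err)
	}
	guestTime := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guestTime)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance, nil
}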
	I0415 23:56:11.929904   25488 start.go:83] releasing machines lock for "ha-694782-m02", held for 24.983068056s
	I0415 23:56:11.929922   25488 main.go:141] libmachine: (ha-694782-m02) Calling .DriverName
	I0415 23:56:11.930199   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetIP
	I0415 23:56:11.932528   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:11.932893   25488 main.go:141] libmachine: (ha-694782-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:e2:c3", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:56:01 +0000 UTC Type:0 Mac:52:54:00:70:e2:c3 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ha-694782-m02 Clientid:01:52:54:00:70:e2:c3}
	I0415 23:56:11.932923   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined IP address 192.168.39.42 and MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:11.935194   25488 out.go:177] * Found network options:
	I0415 23:56:11.936606   25488 out.go:177]   - NO_PROXY=192.168.39.41
	W0415 23:56:11.938073   25488 proxy.go:119] fail to check proxy env: Error ip not in block
	I0415 23:56:11.938113   25488 main.go:141] libmachine: (ha-694782-m02) Calling .DriverName
	I0415 23:56:11.938600   25488 main.go:141] libmachine: (ha-694782-m02) Calling .DriverName
	I0415 23:56:11.938786   25488 main.go:141] libmachine: (ha-694782-m02) Calling .DriverName
	I0415 23:56:11.938874   25488 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0415 23:56:11.938913   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHHostname
	W0415 23:56:11.938979   25488 proxy.go:119] fail to check proxy env: Error ip not in block
	I0415 23:56:11.939050   25488 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0415 23:56:11.939070   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHHostname
	I0415 23:56:11.941656   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:11.941894   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:11.942069   25488 main.go:141] libmachine: (ha-694782-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:e2:c3", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:56:01 +0000 UTC Type:0 Mac:52:54:00:70:e2:c3 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ha-694782-m02 Clientid:01:52:54:00:70:e2:c3}
	I0415 23:56:11.942095   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined IP address 192.168.39.42 and MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:11.942208   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHPort
	I0415 23:56:11.942270   25488 main.go:141] libmachine: (ha-694782-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:e2:c3", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:56:01 +0000 UTC Type:0 Mac:52:54:00:70:e2:c3 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ha-694782-m02 Clientid:01:52:54:00:70:e2:c3}
	I0415 23:56:11.942308   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined IP address 192.168.39.42 and MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:11.942338   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHKeyPath
	I0415 23:56:11.942401   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHPort
	I0415 23:56:11.942501   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHUsername
	I0415 23:56:11.942527   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHKeyPath
	I0415 23:56:11.942636   25488 sshutil.go:53] new ssh client: &{IP:192.168.39.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m02/id_rsa Username:docker}
	I0415 23:56:11.942653   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHUsername
	I0415 23:56:11.942776   25488 sshutil.go:53] new ssh client: &{IP:192.168.39.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m02/id_rsa Username:docker}
	I0415 23:56:12.180601   25488 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0415 23:56:12.186658   25488 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0415 23:56:12.186723   25488 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0415 23:56:12.202688   25488 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
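The find/-exec mv step above sidelines any bridge or podman CNI configs by renaming them with a ".mk_disabled" suffix so the container runtime will not load them. A rough equivalent in Go, assuming the same /etc/cni/net.d layout; the glob terms and suffix come from the log, the helper itself is illustrative:

package cnidisable

import (
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeConfigs renames bridge/podman CNI configs so the runtime ignores them.
func disableBridgeConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}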
	I0415 23:56:12.202712   25488 start.go:494] detecting cgroup driver to use...
	I0415 23:56:12.202777   25488 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0415 23:56:12.218887   25488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 23:56:12.231989   25488 docker.go:217] disabling cri-docker service (if available) ...
	I0415 23:56:12.232046   25488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0415 23:56:12.244782   25488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0415 23:56:12.257890   25488 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0415 23:56:12.369621   25488 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0415 23:56:12.507488   25488 docker.go:233] disabling docker service ...
	I0415 23:56:12.507550   25488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0415 23:56:12.522595   25488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0415 23:56:12.535067   25488 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0415 23:56:12.676201   25488 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0415 23:56:12.791814   25488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0415 23:56:12.805759   25488 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 23:56:12.823846   25488 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0415 23:56:12.823906   25488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0415 23:56:12.833736   25488 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0415 23:56:12.833789   25488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0415 23:56:12.843597   25488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0415 23:56:12.853281   25488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0415 23:56:12.863034   25488 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0415 23:56:12.873083   25488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0415 23:56:12.883237   25488 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0415 23:56:12.902220   25488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
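The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup_manager, conmon_cgroup, and a default_sysctls entry that opens unprivileged ports. A hedged sketch of the same line-oriented rewrite in Go for the pause-image case; the pattern and file path are taken from the log, the function is illustrative only:

package crioconf

import (
	"os"
	"regexp"
)

// setPauseImage rewrites the pause_image line of a CRI-O drop-in config,
// mirroring the sed invocation in the log above.
func setPauseImage(path, image string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAll(data, []byte(`pause_image = "`+image+`"`))
	return os.WriteFile(path, out, 0o644)
}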
	I0415 23:56:12.912388   25488 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0415 23:56:12.921104   25488 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0415 23:56:12.921140   25488 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0415 23:56:12.933837   25488 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0415 23:56:12.942715   25488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 23:56:13.056576   25488 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0415 23:56:13.200204   25488 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0415 23:56:13.200283   25488 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0415 23:56:13.205172   25488 start.go:562] Will wait 60s for crictl version
	I0415 23:56:13.205245   25488 ssh_runner.go:195] Run: which crictl
	I0415 23:56:13.208916   25488 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0415 23:56:13.244868   25488 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0415 23:56:13.244951   25488 ssh_runner.go:195] Run: crio --version
	I0415 23:56:13.273244   25488 ssh_runner.go:195] Run: crio --version
	I0415 23:56:13.303556   25488 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0415 23:56:13.304864   25488 out.go:177]   - env NO_PROXY=192.168.39.41
	I0415 23:56:13.305992   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetIP
	I0415 23:56:13.308329   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:13.308655   25488 main.go:141] libmachine: (ha-694782-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:e2:c3", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:56:01 +0000 UTC Type:0 Mac:52:54:00:70:e2:c3 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ha-694782-m02 Clientid:01:52:54:00:70:e2:c3}
	I0415 23:56:13.308683   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined IP address 192.168.39.42 and MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:13.308917   25488 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0415 23:56:13.312854   25488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
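The bash one-liner above makes the host.minikube.internal mapping idempotent: strip any existing line for the name, append the fresh entry, and copy the temp file back over /etc/hosts. The same idea in Go; the paths come from the log, the helper name is illustrative:

package hostsfile

import (
	"os"
	"strings"
)

// ensureHostsEntry removes any line ending in "\t<name>" and appends
// "<ip>\t<name>", mirroring the grep -v / echo / cp pipeline in the log.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}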
	I0415 23:56:13.325299   25488 mustload.go:65] Loading cluster: ha-694782
	I0415 23:56:13.325510   25488 config.go:182] Loaded profile config "ha-694782": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0415 23:56:13.325778   25488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:56:13.325811   25488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:56:13.339936   25488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32957
	I0415 23:56:13.340293   25488 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:56:13.340727   25488 main.go:141] libmachine: Using API Version  1
	I0415 23:56:13.340750   25488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:56:13.341110   25488 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:56:13.341336   25488 main.go:141] libmachine: (ha-694782) Calling .GetState
	I0415 23:56:13.342734   25488 host.go:66] Checking if "ha-694782" exists ...
	I0415 23:56:13.342992   25488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:56:13.343012   25488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:56:13.357709   25488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35187
	I0415 23:56:13.358056   25488 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:56:13.358465   25488 main.go:141] libmachine: Using API Version  1
	I0415 23:56:13.358489   25488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:56:13.358747   25488 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:56:13.358941   25488 main.go:141] libmachine: (ha-694782) Calling .DriverName
	I0415 23:56:13.359083   25488 certs.go:68] Setting up /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782 for IP: 192.168.39.42
	I0415 23:56:13.359096   25488 certs.go:194] generating shared ca certs ...
	I0415 23:56:13.359113   25488 certs.go:226] acquiring lock for ca certs: {Name:mkcfa1570e683d94647c63485e1bbb8cf0788316 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:56:13.359349   25488 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key
	I0415 23:56:13.359407   25488 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key
	I0415 23:56:13.359424   25488 certs.go:256] generating profile certs ...
	I0415 23:56:13.359515   25488 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/client.key
	I0415 23:56:13.359547   25488 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.key.eac2dea4
	I0415 23:56:13.359567   25488 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.crt.eac2dea4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.41 192.168.39.42 192.168.39.254]
	I0415 23:56:13.671903   25488 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.crt.eac2dea4 ...
	I0415 23:56:13.671935   25488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.crt.eac2dea4: {Name:mkb8f3772d37649eb83259789cddf0c58e9658b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:56:13.672147   25488 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.key.eac2dea4 ...
	I0415 23:56:13.672165   25488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.key.eac2dea4: {Name:mk7c945bc98ba7b6cb8f65afcf41b8988e1e2ddd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:56:13.672269   25488 certs.go:381] copying /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.crt.eac2dea4 -> /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.crt
	I0415 23:56:13.672433   25488 certs.go:385] copying /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.key.eac2dea4 -> /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.key
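The apiserver certificate generated above is signed for every address a client might dial: the kubernetes service ClusterIP, loopback, both control-plane node IPs and the HA VIP 192.168.39.254 (the full SAN list is in the crypto.go line above). A compact sketch of issuing such a SAN-bearing certificate with crypto/x509; CA loading, serial allocation and PEM encoding are omitted, and the names and key size are illustrative, not minikube's crypto.go:

package certs

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// newAPIServerCert issues a certificate whose IP SANs cover the service IP,
// loopback, node IPs and the control-plane VIP, signed by the given CA.
func newAPIServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, ips []string) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1), // fixed serial for the sketch only
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * 365 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	for _, ip := range ips {
		tmpl.IPAddresses = append(tmpl.IPAddresses, net.ParseIP(ip))
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	return der, key, err
}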
	I0415 23:56:13.672601   25488 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/proxy-client.key
	I0415 23:56:13.672621   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0415 23:56:13.672638   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0415 23:56:13.672657   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0415 23:56:13.672675   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0415 23:56:13.672692   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0415 23:56:13.672706   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0415 23:56:13.672722   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0415 23:56:13.672741   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0415 23:56:13.672802   25488 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem (1338 bytes)
	W0415 23:56:13.672838   25488 certs.go:480] ignoring /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897_empty.pem, impossibly tiny 0 bytes
	I0415 23:56:13.672851   25488 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem (1679 bytes)
	I0415 23:56:13.672882   25488 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem (1082 bytes)
	I0415 23:56:13.672911   25488 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem (1123 bytes)
	I0415 23:56:13.672947   25488 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem (1675 bytes)
	I0415 23:56:13.673000   25488 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem (1708 bytes)
	I0415 23:56:13.673042   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0415 23:56:13.673065   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem -> /usr/share/ca-certificates/14897.pem
	I0415 23:56:13.673083   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem -> /usr/share/ca-certificates/148972.pem
	I0415 23:56:13.673119   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0415 23:56:13.675829   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:56:13.676188   25488 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0415 23:56:13.676204   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:56:13.676351   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0415 23:56:13.676547   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0415 23:56:13.676726   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0415 23:56:13.676855   25488 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782/id_rsa Username:docker}
	I0415 23:56:13.753542   25488 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0415 23:56:13.758063   25488 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0415 23:56:13.770208   25488 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0415 23:56:13.775034   25488 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0415 23:56:13.786570   25488 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0415 23:56:13.790842   25488 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0415 23:56:13.803690   25488 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0415 23:56:13.813487   25488 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0415 23:56:13.826958   25488 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0415 23:56:13.831593   25488 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0415 23:56:13.844171   25488 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0415 23:56:13.848526   25488 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0415 23:56:13.859260   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0415 23:56:13.885184   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0415 23:56:13.910149   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0415 23:56:13.935698   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0415 23:56:13.960891   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0415 23:56:13.986032   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0415 23:56:14.010622   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0415 23:56:14.035423   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0415 23:56:14.059988   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0415 23:56:14.084544   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem --> /usr/share/ca-certificates/14897.pem (1338 bytes)
	I0415 23:56:14.109094   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /usr/share/ca-certificates/148972.pem (1708 bytes)
	I0415 23:56:14.134203   25488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0415 23:56:14.150413   25488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0415 23:56:14.166922   25488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0415 23:56:14.183022   25488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0415 23:56:14.199437   25488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0415 23:56:14.215963   25488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0415 23:56:14.232699   25488 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (758 bytes)
	I0415 23:56:14.249924   25488 ssh_runner.go:195] Run: openssl version
	I0415 23:56:14.255645   25488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0415 23:56:14.266461   25488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0415 23:56:14.270932   25488 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0415 23:56:14.270975   25488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0415 23:56:14.276546   25488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0415 23:56:14.287148   25488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14897.pem && ln -fs /usr/share/ca-certificates/14897.pem /etc/ssl/certs/14897.pem"
	I0415 23:56:14.297863   25488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14897.pem
	I0415 23:56:14.302551   25488 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 23:49 /usr/share/ca-certificates/14897.pem
	I0415 23:56:14.302605   25488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14897.pem
	I0415 23:56:14.308677   25488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14897.pem /etc/ssl/certs/51391683.0"
	I0415 23:56:14.319390   25488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148972.pem && ln -fs /usr/share/ca-certificates/148972.pem /etc/ssl/certs/148972.pem"
	I0415 23:56:14.330840   25488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148972.pem
	I0415 23:56:14.335266   25488 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 23:49 /usr/share/ca-certificates/148972.pem
	I0415 23:56:14.335315   25488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148972.pem
	I0415 23:56:14.340870   25488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148972.pem /etc/ssl/certs/3ec20f2e.0"
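Each CA certificate installed above gets a symlink named after its OpenSSL subject hash (computed by the preceding `openssl x509 -hash -noout` run), created only if missing via "test -L ... || ln -fs"; this is how OpenSSL-style lookup finds trusted CAs in /etc/ssl/certs. A small sketch of that idempotent link step, assuming the hash has already been computed:

package catrust

import (
	"errors"
	"os"
	"path/filepath"
)

// linkCAHash creates <certsDir>/<hash>.0 pointing at the PEM, but only if
// nothing exists there yet, mirroring "test -L ... || ln -fs" in the log.
func linkCAHash(certPath, certsDir, subjectHash string) error {
	link := filepath.Join(certsDir, subjectHash+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // already present
	} else if !errors.Is(err, os.ErrNotExist) {
		return err
	}
	return os.Symlink(certPath, link)
}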
	I0415 23:56:14.351201   25488 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0415 23:56:14.355232   25488 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0415 23:56:14.355283   25488 kubeadm.go:928] updating node {m02 192.168.39.42 8443 v1.29.3 crio true true} ...
	I0415 23:56:14.355373   25488 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-694782-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.42
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-694782 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0415 23:56:14.355397   25488 kube-vip.go:111] generating kube-vip config ...
	I0415 23:56:14.355424   25488 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0415 23:56:14.371160   25488 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0415 23:56:14.371222   25488 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
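The manifest above is written as a static pod (see the scp to /etc/kubernetes/manifests/kube-vip.yaml a little further down), so the kubelet runs kube-vip on each control-plane node and the instances elect a leader that holds the 192.168.39.254 VIP for port 8443. A hedged sanity check that parses such a manifest and confirms a given env toggle (e.g. cp_enable) is set; it assumes gopkg.in/yaml.v3 and is not part of minikube's kube-vip.go:

package kubevip

import "gopkg.in/yaml.v3"

type envVar struct {
	Name  string `yaml:"name"`
	Value string `yaml:"value"`
}

type manifest struct {
	Spec struct {
		Containers []struct {
			Env []envVar `yaml:"env"`
		} `yaml:"containers"`
	} `yaml:"spec"`
}

// hasEnv reports whether any container in the manifest sets name=value.
func hasEnv(data []byte, name, value string) (bool, error) {
	var m manifest
	if err := yaml.Unmarshal(data, &m); err != nil {
		return false, err
	}
	for _, c := range m.Spec.Containers {
		for _, e := range c.Env {
			if e.Name == name && e.Value == value {
				return true, nil
			}
		}
	}
	return false, nil
}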
	I0415 23:56:14.371312   25488 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0415 23:56:14.381129   25488 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.29.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.29.3': No such file or directory
	
	Initiating transfer...
	I0415 23:56:14.381198   25488 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.29.3
	I0415 23:56:14.390899   25488 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl.sha256
	I0415 23:56:14.390923   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/linux/amd64/v1.29.3/kubectl -> /var/lib/minikube/binaries/v1.29.3/kubectl
	I0415 23:56:14.390977   25488 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18647-7542/.minikube/cache/linux/amd64/v1.29.3/kubelet
	I0415 23:56:14.390997   25488 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl
	I0415 23:56:14.390977   25488 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18647-7542/.minikube/cache/linux/amd64/v1.29.3/kubeadm
	I0415 23:56:14.395687   25488 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubectl': No such file or directory
	I0415 23:56:14.395712   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/cache/linux/amd64/v1.29.3/kubectl --> /var/lib/minikube/binaries/v1.29.3/kubectl (49799168 bytes)
	I0415 23:56:15.646651   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/linux/amd64/v1.29.3/kubeadm -> /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0415 23:56:15.646748   25488 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0415 23:56:15.651802   25488 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubeadm': No such file or directory
	I0415 23:56:15.651829   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/cache/linux/amd64/v1.29.3/kubeadm --> /var/lib/minikube/binaries/v1.29.3/kubeadm (48340992 bytes)
	I0415 23:56:24.666254   25488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 23:56:24.680857   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/linux/amd64/v1.29.3/kubelet -> /var/lib/minikube/binaries/v1.29.3/kubelet
	I0415 23:56:24.680952   25488 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet
	I0415 23:56:24.685370   25488 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubelet': No such file or directory
	I0415 23:56:24.685394   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/cache/linux/amd64/v1.29.3/kubelet --> /var/lib/minikube/binaries/v1.29.3/kubelet (111919104 bytes)
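The kubectl/kubeadm/kubelet binaries above are fetched from dl.k8s.io with a ".sha256" checksum companion (per the download.go lines) and then pushed into /var/lib/minikube/binaries/v1.29.3 on the node. A minimal sketch of the download-and-verify half; the URL pattern follows the log, the helper is illustrative and not minikube's download package:

package kubebins

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetchVerified downloads url to dest and checks it against url+".sha256".
// Status-code handling and retries are omitted for brevity.
func fetchVerified(url, dest string) error {
	sumResp, err := http.Get(url + ".sha256")
	if err != nil {
		return err
	}
	defer sumResp.Body.Close()
	sumBytes, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return err
	}
	want := strings.Fields(string(sumBytes))[0]

	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch for %s: got %s want %s", url, got, want)
	}
	return nil
}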
	I0415 23:56:25.131433   25488 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0415 23:56:25.142455   25488 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0415 23:56:25.160036   25488 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0415 23:56:25.176926   25488 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0415 23:56:25.193554   25488 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0415 23:56:25.197376   25488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0415 23:56:25.209283   25488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 23:56:25.323085   25488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0415 23:56:25.340836   25488 host.go:66] Checking if "ha-694782" exists ...
	I0415 23:56:25.341298   25488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:56:25.341336   25488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:56:25.355603   25488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42315
	I0415 23:56:25.356125   25488 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:56:25.356572   25488 main.go:141] libmachine: Using API Version  1
	I0415 23:56:25.356596   25488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:56:25.356889   25488 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:56:25.357063   25488 main.go:141] libmachine: (ha-694782) Calling .DriverName
	I0415 23:56:25.357198   25488 start.go:316] joinCluster: &{Name:ha-694782 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-694782 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.42 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 23:56:25.357329   25488 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0415 23:56:25.357348   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0415 23:56:25.360205   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:56:25.360569   25488 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0415 23:56:25.360595   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:56:25.360771   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0415 23:56:25.360943   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0415 23:56:25.361112   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0415 23:56:25.361286   25488 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782/id_rsa Username:docker}
	I0415 23:56:25.510038   25488 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.42 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0415 23:56:25.510089   25488 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token gw4duo.gci5l7kerx1vz1u3 --discovery-token-ca-cert-hash sha256:b25013486716312e69a135916990864bd549ec06035b8322dd39250241b50fde --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-694782-m02 --control-plane --apiserver-advertise-address=192.168.39.42 --apiserver-bind-port=8443"
	I0415 23:56:49.334916   25488 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token gw4duo.gci5l7kerx1vz1u3 --discovery-token-ca-cert-hash sha256:b25013486716312e69a135916990864bd549ec06035b8322dd39250241b50fde --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-694782-m02 --control-plane --apiserver-advertise-address=192.168.39.42 --apiserver-bind-port=8443": (23.824798819s)
	I0415 23:56:49.334953   25488 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0415 23:56:49.773817   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-694782-m02 minikube.k8s.io/updated_at=2024_04_15T23_56_49_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=e98a202b00bb48daab8bbee2f0c7bcf1556f8388 minikube.k8s.io/name=ha-694782 minikube.k8s.io/primary=false
	I0415 23:56:49.900348   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-694782-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0415 23:56:50.025054   25488 start.go:318] duration metric: took 24.667851652s to joinCluster
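The "duration metric: took ..." lines throughout this log come from timing each phase: record a start time, run the step, then log the elapsed time. A pattern like the following produces that kind of line (illustrative, not minikube's actual helper):

package metrics

import (
	"log"
	"time"
)

// timed runs fn and logs how long it took, in the style of the
// "duration metric: took Xs to <phase>" lines above.
func timed(phase string, fn func() error) error {
	start := time.Now()
	err := fn()
	log.Printf("duration metric: took %s to %s", time.Since(start), phase)
	return err
}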
	I0415 23:56:50.025180   25488 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.42 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0415 23:56:50.027041   25488 out.go:177] * Verifying Kubernetes components...
	I0415 23:56:50.025434   25488 config.go:182] Loaded profile config "ha-694782": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0415 23:56:50.028608   25488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 23:56:50.240439   25488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0415 23:56:50.265586   25488 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0415 23:56:50.265924   25488 kapi.go:59] client config for ha-694782: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/client.crt", KeyFile:"/home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/client.key", CAFile:"/home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5e000), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0415 23:56:50.266006   25488 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.41:8443
	I0415 23:56:50.266237   25488 node_ready.go:35] waiting up to 6m0s for node "ha-694782-m02" to be "Ready" ...
	I0415 23:56:50.266313   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:56:50.266321   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:50.266336   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:50.266342   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:50.275188   25488 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0415 23:56:50.766607   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:56:50.766627   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:50.766638   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:50.766646   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:50.769889   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:56:51.266758   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:56:51.266785   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:51.266798   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:51.266804   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:51.270007   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:56:51.767172   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:56:51.767193   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:51.767199   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:51.767203   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:51.770728   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:56:52.267388   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:56:52.267407   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:52.267414   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:52.267419   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:52.404598   25488 round_trippers.go:574] Response Status: 200 OK in 137 milliseconds
	I0415 23:56:52.405480   25488 node_ready.go:53] node "ha-694782-m02" has status "Ready":"False"
	I0415 23:56:52.766528   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:56:52.766547   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:52.766556   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:52.766561   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:52.770145   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:56:53.266434   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:56:53.266455   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:53.266462   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:53.266466   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:53.269830   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:56:53.766617   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:56:53.766643   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:53.766653   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:53.766660   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:53.770833   25488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 23:56:54.267159   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:56:54.267184   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:54.267196   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:54.267202   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:54.270814   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:56:54.766701   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:56:54.766723   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:54.766731   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:54.766734   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:54.770173   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:56:54.771010   25488 node_ready.go:53] node "ha-694782-m02" has status "Ready":"False"
	I0415 23:56:55.267469   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:56:55.267494   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:55.267503   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:55.267508   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:55.274024   25488 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0415 23:56:55.767460   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:56:55.767481   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:55.767489   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:55.767494   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:55.771175   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:56:55.772261   25488 node_ready.go:49] node "ha-694782-m02" has status "Ready":"True"
	I0415 23:56:55.772284   25488 node_ready.go:38] duration metric: took 5.506020993s for node "ha-694782-m02" to be "Ready" ...
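The loop above is minikube polling the node object for ha-694782-m02 until its Ready condition reports True. As a rough illustration only (not minikube's actual node_ready helper), the same check can be written directly against client-go; the kubeconfig path and the polling interval below are assumptions, and the node name is taken from the log.

// Illustrative sketch: poll a node's Ready condition with client-go.
// The kubeconfig path is a placeholder, not taken from the log.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-694782-m02", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
				fmt.Println("node is Ready")
				return
			}
		}
		time.Sleep(500 * time.Millisecond) // the log above shows roughly a 500ms poll cadence
	}
}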
	I0415 23:56:55.772295   25488 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0415 23:56:55.772371   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods
	I0415 23:56:55.772379   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:55.772392   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:55.772398   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:55.776979   25488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 23:56:55.784234   25488 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-4sgv4" in "kube-system" namespace to be "Ready" ...
	I0415 23:56:55.784296   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-4sgv4
	I0415 23:56:55.784304   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:55.784311   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:55.784315   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:55.787768   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:56:55.788385   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782
	I0415 23:56:55.788404   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:55.788411   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:55.788414   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:55.791258   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:56:55.792423   25488 pod_ready.go:92] pod "coredns-76f75df574-4sgv4" in "kube-system" namespace has status "Ready":"True"
	I0415 23:56:55.792438   25488 pod_ready.go:81] duration metric: took 8.183667ms for pod "coredns-76f75df574-4sgv4" in "kube-system" namespace to be "Ready" ...
	I0415 23:56:55.792445   25488 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-zdc8q" in "kube-system" namespace to be "Ready" ...
	I0415 23:56:55.792482   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-zdc8q
	I0415 23:56:55.792490   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:55.792496   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:55.792501   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:55.796007   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:56:55.797171   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782
	I0415 23:56:55.797188   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:55.797198   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:55.797203   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:55.799496   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:56:55.800318   25488 pod_ready.go:92] pod "coredns-76f75df574-zdc8q" in "kube-system" namespace has status "Ready":"True"
	I0415 23:56:55.800332   25488 pod_ready.go:81] duration metric: took 7.88168ms for pod "coredns-76f75df574-zdc8q" in "kube-system" namespace to be "Ready" ...
	I0415 23:56:55.800339   25488 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-694782" in "kube-system" namespace to be "Ready" ...
	I0415 23:56:55.800377   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782
	I0415 23:56:55.800385   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:55.800396   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:55.800402   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:55.805420   25488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 23:56:55.806408   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782
	I0415 23:56:55.806425   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:55.806433   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:55.806440   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:55.808909   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:56:55.810095   25488 pod_ready.go:92] pod "etcd-ha-694782" in "kube-system" namespace has status "Ready":"True"
	I0415 23:56:55.810112   25488 pod_ready.go:81] duration metric: took 9.767749ms for pod "etcd-ha-694782" in "kube-system" namespace to be "Ready" ...
	I0415 23:56:55.810120   25488 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-694782-m02" in "kube-system" namespace to be "Ready" ...
	I0415 23:56:55.810156   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m02
	I0415 23:56:55.810164   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:55.810170   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:55.810173   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:55.813832   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:56:55.815146   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:56:55.815160   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:55.815167   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:55.815170   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:55.817272   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:56:56.310486   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m02
	I0415 23:56:56.310514   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:56.310524   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:56.310527   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:56.315237   25488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 23:56:56.316129   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:56:56.316145   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:56.316154   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:56.316158   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:56.319007   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:56:56.810976   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m02
	I0415 23:56:56.810999   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:56.811007   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:56.811010   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:56.814948   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:56:56.815535   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:56:56.815550   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:56.815558   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:56.815562   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:56.818582   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:56:57.311301   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m02
	I0415 23:56:57.311320   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:57.311327   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:57.311331   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:57.315014   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:56:57.315753   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:56:57.315771   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:57.315781   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:57.315787   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:57.318516   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:56:57.810461   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m02
	I0415 23:56:57.810483   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:57.810492   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:57.810496   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:57.814346   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:56:57.815290   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:56:57.815304   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:57.815311   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:57.815315   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:57.818258   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:56:57.818866   25488 pod_ready.go:102] pod "etcd-ha-694782-m02" in "kube-system" namespace has status "Ready":"False"
	I0415 23:56:58.310239   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m02
	I0415 23:56:58.310280   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:58.310294   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:58.310299   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:58.313871   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:56:58.314647   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:56:58.314661   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:58.314669   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:58.314672   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:58.317213   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:56:58.810971   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m02
	I0415 23:56:58.810993   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:58.811001   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:58.811006   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:58.815019   25488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 23:56:58.815983   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:56:58.815997   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:58.816004   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:58.816007   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:58.819401   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:56:59.310354   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m02
	I0415 23:56:59.310382   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:59.310390   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:59.310394   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:59.313960   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:56:59.314991   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:56:59.315005   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:59.315012   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:59.315016   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:59.317744   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:56:59.810723   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m02
	I0415 23:56:59.810745   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:59.810752   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:59.810757   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:59.814166   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:56:59.815227   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:56:59.815243   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:59.815251   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:59.815255   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:59.817956   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:57:00.311274   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m02
	I0415 23:57:00.311300   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:00.311306   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:00.311315   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:00.315073   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:57:00.316117   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:57:00.316133   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:00.316140   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:00.316145   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:00.319647   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:57:00.320258   25488 pod_ready.go:92] pod "etcd-ha-694782-m02" in "kube-system" namespace has status "Ready":"True"
	I0415 23:57:00.320281   25488 pod_ready.go:81] duration metric: took 4.510154612s for pod "etcd-ha-694782-m02" in "kube-system" namespace to be "Ready" ...
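After the node goes Ready, the same polling pattern repeats per control-plane pod (coredns, etcd, kube-apiserver, and so on), this time watching the pod's Ready condition. A minimal sketch of that per-pod check is below; isPodReady is a hypothetical helper, not a minikube function, and it assumes the clientset and imports from the first sketch.

// Hypothetical helper: report whether a pod's PodReady condition is True.
// Assumes cs is a *kubernetes.Clientset built as in the earlier sketch,
// with the same imports (context, corev1, metav1, kubernetes).
func isPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}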
	I0415 23:57:00.320296   25488 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-694782" in "kube-system" namespace to be "Ready" ...
	I0415 23:57:00.320340   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-694782
	I0415 23:57:00.320349   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:00.320355   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:00.320358   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:00.324311   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:57:00.325123   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782
	I0415 23:57:00.325137   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:00.325144   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:00.325148   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:00.327966   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:57:00.329201   25488 pod_ready.go:92] pod "kube-apiserver-ha-694782" in "kube-system" namespace has status "Ready":"True"
	I0415 23:57:00.329220   25488 pod_ready.go:81] duration metric: took 8.917684ms for pod "kube-apiserver-ha-694782" in "kube-system" namespace to be "Ready" ...
	I0415 23:57:00.329228   25488 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-694782-m02" in "kube-system" namespace to be "Ready" ...
	I0415 23:57:00.329287   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-694782-m02
	I0415 23:57:00.329294   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:00.329301   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:00.329307   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:00.332295   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:57:00.333015   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:57:00.333033   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:00.333043   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:00.333047   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:00.335204   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:57:00.829381   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-694782-m02
	I0415 23:57:00.829404   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:00.829414   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:00.829421   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:00.832994   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:57:00.833835   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:57:00.833849   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:00.833856   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:00.833860   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:00.836706   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:57:01.329499   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-694782-m02
	I0415 23:57:01.329527   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:01.329537   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:01.329545   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:01.334044   25488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 23:57:01.334877   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:57:01.334890   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:01.334896   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:01.334900   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:01.340333   25488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 23:57:01.830312   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-694782-m02
	I0415 23:57:01.830338   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:01.830351   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:01.830356   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:01.833865   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:57:01.834669   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:57:01.834685   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:01.834696   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:01.834701   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:01.837410   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:57:02.330058   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-694782-m02
	I0415 23:57:02.330077   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:02.330085   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:02.330090   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:02.334946   25488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 23:57:02.335738   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:57:02.335754   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:02.335761   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:02.335765   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:02.338366   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:57:02.338908   25488 pod_ready.go:102] pod "kube-apiserver-ha-694782-m02" in "kube-system" namespace has status "Ready":"False"
	I0415 23:57:02.830355   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-694782-m02
	I0415 23:57:02.830374   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:02.830383   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:02.830392   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:02.834192   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:57:02.835310   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:57:02.835335   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:02.835347   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:02.835352   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:02.838434   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:57:03.330373   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-694782-m02
	I0415 23:57:03.330395   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:03.330405   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:03.330410   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:03.333645   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:57:03.334629   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:57:03.334646   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:03.334652   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:03.334657   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:03.337037   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:57:03.830060   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-694782-m02
	I0415 23:57:03.830083   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:03.830091   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:03.830097   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:03.833490   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:57:03.834457   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:57:03.834470   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:03.834478   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:03.834481   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:03.836956   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:57:04.330399   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-694782-m02
	I0415 23:57:04.330419   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:04.330426   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:04.330429   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:04.333977   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:57:04.334648   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:57:04.334661   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:04.334669   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:04.334675   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:04.337276   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:57:04.338013   25488 pod_ready.go:92] pod "kube-apiserver-ha-694782-m02" in "kube-system" namespace has status "Ready":"True"
	I0415 23:57:04.338033   25488 pod_ready.go:81] duration metric: took 4.008796828s for pod "kube-apiserver-ha-694782-m02" in "kube-system" namespace to be "Ready" ...
	I0415 23:57:04.338042   25488 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-694782" in "kube-system" namespace to be "Ready" ...
	I0415 23:57:04.338106   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-694782
	I0415 23:57:04.338119   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:04.338130   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:04.338135   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:04.340481   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:57:04.341175   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782
	I0415 23:57:04.341189   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:04.341198   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:04.341202   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:04.343938   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:57:04.344574   25488 pod_ready.go:92] pod "kube-controller-manager-ha-694782" in "kube-system" namespace has status "Ready":"True"
	I0415 23:57:04.344592   25488 pod_ready.go:81] duration metric: took 6.545072ms for pod "kube-controller-manager-ha-694782" in "kube-system" namespace to be "Ready" ...
	I0415 23:57:04.344603   25488 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-694782-m02" in "kube-system" namespace to be "Ready" ...
	I0415 23:57:04.344660   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-694782-m02
	I0415 23:57:04.344671   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:04.344678   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:04.344682   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:04.347285   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:57:04.348047   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:57:04.348061   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:04.348067   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:04.348072   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:04.350324   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:57:04.350742   25488 pod_ready.go:92] pod "kube-controller-manager-ha-694782-m02" in "kube-system" namespace has status "Ready":"True"
	I0415 23:57:04.350770   25488 pod_ready.go:81] duration metric: took 6.15785ms for pod "kube-controller-manager-ha-694782-m02" in "kube-system" namespace to be "Ready" ...
	I0415 23:57:04.350782   25488 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-d46v5" in "kube-system" namespace to be "Ready" ...
	I0415 23:57:04.368041   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d46v5
	I0415 23:57:04.368053   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:04.368065   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:04.368086   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:04.370682   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:57:04.567535   25488 request.go:629] Waited for 196.288946ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/nodes/ha-694782
	I0415 23:57:04.567588   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782
	I0415 23:57:04.567595   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:04.567610   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:04.567616   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:04.571876   25488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 23:57:04.572308   25488 pod_ready.go:92] pod "kube-proxy-d46v5" in "kube-system" namespace has status "Ready":"True"
	I0415 23:57:04.572327   25488 pod_ready.go:81] duration metric: took 221.53309ms for pod "kube-proxy-d46v5" in "kube-system" namespace to be "Ready" ...
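The "Waited for 196.288946ms due to client-side throttling, not priority and fairness" line just above comes from client-go's client-side rate limiter, not from the API server. A sketch of where that limit is configured follows; the QPS and Burst numbers are arbitrary illustrations, not the values minikube uses, and newThrottledClient is a hypothetical helper.

// Hypothetical helper showing where client-go's client-side rate limit lives.
// Assumes the imports from the first sketch (kubernetes, clientcmd).
func newThrottledClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 5    // sustained client-side requests per second (arbitrary example value)
	cfg.Burst = 10 // short bursts allowed above QPS (arbitrary example value)
	// When the limiter delays a request long enough, client-go logs the
	// "Waited ... due to client-side throttling, not priority and fairness"
	// message seen in the log above.
	return kubernetes.NewForConfig(cfg)
}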
	I0415 23:57:04.572339   25488 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vbfhn" in "kube-system" namespace to be "Ready" ...
	I0415 23:57:04.767697   25488 request.go:629] Waited for 195.299186ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vbfhn
	I0415 23:57:04.767760   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vbfhn
	I0415 23:57:04.767775   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:04.767785   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:04.767790   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:04.771226   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:57:04.967957   25488 request.go:629] Waited for 196.1342ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:57:04.968006   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:57:04.968019   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:04.968037   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:04.968044   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:04.970570   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:57:04.971464   25488 pod_ready.go:92] pod "kube-proxy-vbfhn" in "kube-system" namespace has status "Ready":"True"
	I0415 23:57:04.971485   25488 pod_ready.go:81] duration metric: took 399.134854ms for pod "kube-proxy-vbfhn" in "kube-system" namespace to be "Ready" ...
	I0415 23:57:04.971499   25488 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-694782" in "kube-system" namespace to be "Ready" ...
	I0415 23:57:05.167564   25488 request.go:629] Waited for 195.977611ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-694782
	I0415 23:57:05.167612   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-694782
	I0415 23:57:05.167617   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:05.167624   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:05.167627   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:05.171033   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:57:05.368180   25488 request.go:629] Waited for 196.342051ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/nodes/ha-694782
	I0415 23:57:05.368258   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782
	I0415 23:57:05.368269   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:05.368279   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:05.368288   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:05.372149   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:57:05.372759   25488 pod_ready.go:92] pod "kube-scheduler-ha-694782" in "kube-system" namespace has status "Ready":"True"
	I0415 23:57:05.372778   25488 pod_ready.go:81] duration metric: took 401.26753ms for pod "kube-scheduler-ha-694782" in "kube-system" namespace to be "Ready" ...
	I0415 23:57:05.372790   25488 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-694782-m02" in "kube-system" namespace to be "Ready" ...
	I0415 23:57:05.567828   25488 request.go:629] Waited for 194.975559ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-694782-m02
	I0415 23:57:05.567877   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-694782-m02
	I0415 23:57:05.567881   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:05.567893   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:05.567897   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:05.571055   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:57:05.768261   25488 request.go:629] Waited for 196.578908ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:57:05.768307   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:57:05.768312   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:05.768319   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:05.768324   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:05.771650   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:57:05.772382   25488 pod_ready.go:92] pod "kube-scheduler-ha-694782-m02" in "kube-system" namespace has status "Ready":"True"
	I0415 23:57:05.772401   25488 pod_ready.go:81] duration metric: took 399.603988ms for pod "kube-scheduler-ha-694782-m02" in "kube-system" namespace to be "Ready" ...
	I0415 23:57:05.772412   25488 pod_ready.go:38] duration metric: took 10.000087746s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0415 23:57:05.772431   25488 api_server.go:52] waiting for apiserver process to appear ...
	I0415 23:57:05.772492   25488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0415 23:57:05.787625   25488 api_server.go:72] duration metric: took 15.762408082s to wait for apiserver process to appear ...
	I0415 23:57:05.787650   25488 api_server.go:88] waiting for apiserver healthz status ...
	I0415 23:57:05.787669   25488 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I0415 23:57:05.793609   25488 api_server.go:279] https://192.168.39.41:8443/healthz returned 200:
	ok
	I0415 23:57:05.793713   25488 round_trippers.go:463] GET https://192.168.39.41:8443/version
	I0415 23:57:05.793724   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:05.793731   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:05.793736   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:05.794617   25488 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0415 23:57:05.794788   25488 api_server.go:141] control plane version: v1.29.3
	I0415 23:57:05.794807   25488 api_server.go:131] duration metric: took 7.151331ms to wait for apiserver health ...
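Once the system pods are Ready, the log shows minikube probing the API server's /healthz endpoint (which returned 200 "ok") and then reading the control-plane version. A rough client-go equivalent, assuming the clientset from the first sketch plus the "fmt" import; checkAPIServer is a hypothetical name, not minikube's function.

// Illustrative only; not minikube's code.
func checkAPIServer(ctx context.Context, cs *kubernetes.Clientset) error {
	// Probe /healthz directly, mirroring the "returned 200: ok" lines above.
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
	if err != nil {
		return err
	}
	fmt.Printf("healthz: %s\n", body) // expect "ok"

	// Read the server version, mirroring the GET /version request above.
	ver, err := cs.Discovery().ServerVersion()
	if err != nil {
		return err
	}
	fmt.Println("control plane version:", ver.GitVersion) // e.g. v1.29.3
	return nil
}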
	I0415 23:57:05.794814   25488 system_pods.go:43] waiting for kube-system pods to appear ...
	I0415 23:57:05.968207   25488 request.go:629] Waited for 173.329742ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods
	I0415 23:57:05.968283   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods
	I0415 23:57:05.968301   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:05.968331   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:05.968338   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:05.973417   25488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 23:57:05.978385   25488 system_pods.go:59] 17 kube-system pods found
	I0415 23:57:05.978406   25488 system_pods.go:61] "coredns-76f75df574-4sgv4" [3c1f65c0-37b2-4c88-879b-68297e989d44] Running
	I0415 23:57:05.978412   25488 system_pods.go:61] "coredns-76f75df574-zdc8q" [6a7e1a29-8c75-4d1f-978b-471ac0adb888] Running
	I0415 23:57:05.978416   25488 system_pods.go:61] "etcd-ha-694782" [ca5444f7-8fe5-4165-a01b-9c9adba4ede0] Running
	I0415 23:57:05.978419   25488 system_pods.go:61] "etcd-ha-694782-m02" [821ace46-8aac-46ae-9e3f-7bc144bb46a9] Running
	I0415 23:57:05.978422   25488 system_pods.go:61] "kindnet-99cs7" [5b3bc7e7-fd85-4dc7-ba53-c74fe0d213e3] Running
	I0415 23:57:05.978426   25488 system_pods.go:61] "kindnet-qvp8b" [04002e18-2673-4067-a10e-64f40e3c60c8] Running
	I0415 23:57:05.978429   25488 system_pods.go:61] "kube-apiserver-ha-694782" [42680d27-9926-4b99-ae33-61a37afe0207] Running
	I0415 23:57:05.978432   25488 system_pods.go:61] "kube-apiserver-ha-694782-m02" [5db36efa-244b-47e0-ba6f-93826468c168] Running
	I0415 23:57:05.978435   25488 system_pods.go:61] "kube-controller-manager-ha-694782" [1832df1f-ac45-427c-93fc-04630558d7d1] Running
	I0415 23:57:05.978439   25488 system_pods.go:61] "kube-controller-manager-ha-694782-m02" [923c744c-e27c-468d-a14f-2a1de579df73] Running
	I0415 23:57:05.978443   25488 system_pods.go:61] "kube-proxy-d46v5" [c92235e6-1639-45c0-a92b-bf0cc32bea22] Running
	I0415 23:57:05.978446   25488 system_pods.go:61] "kube-proxy-vbfhn" [131197dd-aa5b-48c7-a0e8-d1772432b28c] Running
	I0415 23:57:05.978449   25488 system_pods.go:61] "kube-scheduler-ha-694782" [8e2ff44e-34ef-4cb6-9734-62004de985b8] Running
	I0415 23:57:05.978451   25488 system_pods.go:61] "kube-scheduler-ha-694782-m02" [e2452893-9792-41e9-9d9e-e2f66bc07303] Running
	I0415 23:57:05.978454   25488 system_pods.go:61] "kube-vip-ha-694782" [a8ffb1b9-f55e-4efe-b9a1-7e58a341a2f0] Running
	I0415 23:57:05.978457   25488 system_pods.go:61] "kube-vip-ha-694782-m02" [036ef70f-0af1-42a5-b0bb-5622785ff031] Running
	I0415 23:57:05.978459   25488 system_pods.go:61] "storage-provisioner" [bea9c166-5f83-473f-8f01-335ea1436dad] Running
	I0415 23:57:05.978464   25488 system_pods.go:74] duration metric: took 183.645542ms to wait for pod list to return data ...
	I0415 23:57:05.978474   25488 default_sa.go:34] waiting for default service account to be created ...
	I0415 23:57:06.167877   25488 request.go:629] Waited for 189.327065ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/namespaces/default/serviceaccounts
	I0415 23:57:06.167926   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/default/serviceaccounts
	I0415 23:57:06.167931   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:06.167939   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:06.167943   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:06.170989   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:57:06.171177   25488 default_sa.go:45] found service account: "default"
	I0415 23:57:06.171191   25488 default_sa.go:55] duration metric: took 192.709876ms for default service account to be created ...
	I0415 23:57:06.171198   25488 system_pods.go:116] waiting for k8s-apps to be running ...
	I0415 23:57:06.367530   25488 request.go:629] Waited for 196.280884ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods
	I0415 23:57:06.367580   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods
	I0415 23:57:06.367585   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:06.367599   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:06.367616   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:06.372651   25488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 23:57:06.378400   25488 system_pods.go:86] 17 kube-system pods found
	I0415 23:57:06.378425   25488 system_pods.go:89] "coredns-76f75df574-4sgv4" [3c1f65c0-37b2-4c88-879b-68297e989d44] Running
	I0415 23:57:06.378432   25488 system_pods.go:89] "coredns-76f75df574-zdc8q" [6a7e1a29-8c75-4d1f-978b-471ac0adb888] Running
	I0415 23:57:06.378439   25488 system_pods.go:89] "etcd-ha-694782" [ca5444f7-8fe5-4165-a01b-9c9adba4ede0] Running
	I0415 23:57:06.378444   25488 system_pods.go:89] "etcd-ha-694782-m02" [821ace46-8aac-46ae-9e3f-7bc144bb46a9] Running
	I0415 23:57:06.378452   25488 system_pods.go:89] "kindnet-99cs7" [5b3bc7e7-fd85-4dc7-ba53-c74fe0d213e3] Running
	I0415 23:57:06.378461   25488 system_pods.go:89] "kindnet-qvp8b" [04002e18-2673-4067-a10e-64f40e3c60c8] Running
	I0415 23:57:06.378468   25488 system_pods.go:89] "kube-apiserver-ha-694782" [42680d27-9926-4b99-ae33-61a37afe0207] Running
	I0415 23:57:06.378478   25488 system_pods.go:89] "kube-apiserver-ha-694782-m02" [5db36efa-244b-47e0-ba6f-93826468c168] Running
	I0415 23:57:06.378485   25488 system_pods.go:89] "kube-controller-manager-ha-694782" [1832df1f-ac45-427c-93fc-04630558d7d1] Running
	I0415 23:57:06.378492   25488 system_pods.go:89] "kube-controller-manager-ha-694782-m02" [923c744c-e27c-468d-a14f-2a1de579df73] Running
	I0415 23:57:06.378500   25488 system_pods.go:89] "kube-proxy-d46v5" [c92235e6-1639-45c0-a92b-bf0cc32bea22] Running
	I0415 23:57:06.378507   25488 system_pods.go:89] "kube-proxy-vbfhn" [131197dd-aa5b-48c7-a0e8-d1772432b28c] Running
	I0415 23:57:06.378514   25488 system_pods.go:89] "kube-scheduler-ha-694782" [8e2ff44e-34ef-4cb6-9734-62004de985b8] Running
	I0415 23:57:06.378520   25488 system_pods.go:89] "kube-scheduler-ha-694782-m02" [e2452893-9792-41e9-9d9e-e2f66bc07303] Running
	I0415 23:57:06.378526   25488 system_pods.go:89] "kube-vip-ha-694782" [a8ffb1b9-f55e-4efe-b9a1-7e58a341a2f0] Running
	I0415 23:57:06.378534   25488 system_pods.go:89] "kube-vip-ha-694782-m02" [036ef70f-0af1-42a5-b0bb-5622785ff031] Running
	I0415 23:57:06.378541   25488 system_pods.go:89] "storage-provisioner" [bea9c166-5f83-473f-8f01-335ea1436dad] Running
	I0415 23:57:06.378554   25488 system_pods.go:126] duration metric: took 207.346934ms to wait for k8s-apps to be running ...
	I0415 23:57:06.378564   25488 system_svc.go:44] waiting for kubelet service to be running ....
	I0415 23:57:06.378618   25488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 23:57:06.394541   25488 system_svc.go:56] duration metric: took 15.973291ms WaitForService to wait for kubelet
	I0415 23:57:06.394563   25488 kubeadm.go:576] duration metric: took 16.369347744s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 23:57:06.394586   25488 node_conditions.go:102] verifying NodePressure condition ...
	I0415 23:57:06.567901   25488 request.go:629] Waited for 173.249172ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/nodes
	I0415 23:57:06.567977   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes
	I0415 23:57:06.567985   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:06.567992   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:06.567998   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:06.571207   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:57:06.571919   25488 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0415 23:57:06.571939   25488 node_conditions.go:123] node cpu capacity is 2
	I0415 23:57:06.571949   25488 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0415 23:57:06.571953   25488 node_conditions.go:123] node cpu capacity is 2
	I0415 23:57:06.571957   25488 node_conditions.go:105] duration metric: took 177.367096ms to run NodePressure ...
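The NodePressure step above appears to read each node's reported capacity (2 CPUs, 17734596Ki ephemeral storage) from the node list. A sketch of pulling those fields with client-go, again assuming the clientset and imports from the first sketch plus "fmt"; printNodeCapacity is a hypothetical helper.

// Illustrative only: list nodes and print the capacity fields seen in the log above.
func printNodeCapacity(ctx context.Context, cs *kubernetes.Clientset) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
	return nil
}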
	I0415 23:57:06.571968   25488 start.go:240] waiting for startup goroutines ...
	I0415 23:57:06.571991   25488 start.go:254] writing updated cluster config ...
	I0415 23:57:06.574233   25488 out.go:177] 
	I0415 23:57:06.575616   25488 config.go:182] Loaded profile config "ha-694782": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0415 23:57:06.575696   25488 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/config.json ...
	I0415 23:57:06.577368   25488 out.go:177] * Starting "ha-694782-m03" control-plane node in "ha-694782" cluster
	I0415 23:57:06.578457   25488 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0415 23:57:06.578483   25488 cache.go:56] Caching tarball of preloaded images
	I0415 23:57:06.578590   25488 preload.go:173] Found /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0415 23:57:06.578605   25488 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0415 23:57:06.578720   25488 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/config.json ...
	I0415 23:57:06.578917   25488 start.go:360] acquireMachinesLock for ha-694782-m03: {Name:mk92bff49461487f8cebf2747ccf61ccb9c772a2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 23:57:06.578973   25488 start.go:364] duration metric: took 34.46µs to acquireMachinesLock for "ha-694782-m03"
	I0415 23:57:06.578998   25488 start.go:93] Provisioning new machine with config: &{Name:ha-694782 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-694782 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.42 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0415 23:57:06.579129   25488 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0415 23:57:06.580670   25488 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0415 23:57:06.580762   25488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:57:06.580804   25488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:57:06.594970   25488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37621
	I0415 23:57:06.595365   25488 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:57:06.595804   25488 main.go:141] libmachine: Using API Version  1
	I0415 23:57:06.595841   25488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:57:06.596124   25488 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:57:06.596310   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetMachineName
	I0415 23:57:06.596444   25488 main.go:141] libmachine: (ha-694782-m03) Calling .DriverName
	I0415 23:57:06.596627   25488 start.go:159] libmachine.API.Create for "ha-694782" (driver="kvm2")
	I0415 23:57:06.596654   25488 client.go:168] LocalClient.Create starting
	I0415 23:57:06.596683   25488 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem
	I0415 23:57:06.596711   25488 main.go:141] libmachine: Decoding PEM data...
	I0415 23:57:06.596725   25488 main.go:141] libmachine: Parsing certificate...
	I0415 23:57:06.596805   25488 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem
	I0415 23:57:06.596849   25488 main.go:141] libmachine: Decoding PEM data...
	I0415 23:57:06.596866   25488 main.go:141] libmachine: Parsing certificate...
	I0415 23:57:06.596891   25488 main.go:141] libmachine: Running pre-create checks...
	I0415 23:57:06.596903   25488 main.go:141] libmachine: (ha-694782-m03) Calling .PreCreateCheck
	I0415 23:57:06.597061   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetConfigRaw
	I0415 23:57:06.597465   25488 main.go:141] libmachine: Creating machine...
	I0415 23:57:06.597478   25488 main.go:141] libmachine: (ha-694782-m03) Calling .Create
	I0415 23:57:06.597645   25488 main.go:141] libmachine: (ha-694782-m03) Creating KVM machine...
	I0415 23:57:06.598864   25488 main.go:141] libmachine: (ha-694782-m03) DBG | found existing default KVM network
	I0415 23:57:06.598982   25488 main.go:141] libmachine: (ha-694782-m03) DBG | found existing private KVM network mk-ha-694782
	I0415 23:57:06.599098   25488 main.go:141] libmachine: (ha-694782-m03) Setting up store path in /home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m03 ...
	I0415 23:57:06.599125   25488 main.go:141] libmachine: (ha-694782-m03) Building disk image from file:///home/jenkins/minikube-integration/18647-7542/.minikube/cache/iso/amd64/minikube-v1.33.0-1713175573-18634-amd64.iso
	I0415 23:57:06.599175   25488 main.go:141] libmachine: (ha-694782-m03) DBG | I0415 23:57:06.599061   26272 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18647-7542/.minikube
	I0415 23:57:06.599225   25488 main.go:141] libmachine: (ha-694782-m03) Downloading /home/jenkins/minikube-integration/18647-7542/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18647-7542/.minikube/cache/iso/amd64/minikube-v1.33.0-1713175573-18634-amd64.iso...
	I0415 23:57:06.807841   25488 main.go:141] libmachine: (ha-694782-m03) DBG | I0415 23:57:06.807723   26272 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m03/id_rsa...
	I0415 23:57:06.939686   25488 main.go:141] libmachine: (ha-694782-m03) DBG | I0415 23:57:06.939574   26272 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m03/ha-694782-m03.rawdisk...
	I0415 23:57:06.939723   25488 main.go:141] libmachine: (ha-694782-m03) DBG | Writing magic tar header
	I0415 23:57:06.939734   25488 main.go:141] libmachine: (ha-694782-m03) DBG | Writing SSH key tar header
	I0415 23:57:06.939743   25488 main.go:141] libmachine: (ha-694782-m03) DBG | I0415 23:57:06.939679   26272 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m03 ...
	I0415 23:57:06.939787   25488 main.go:141] libmachine: (ha-694782-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m03
	I0415 23:57:06.939812   25488 main.go:141] libmachine: (ha-694782-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18647-7542/.minikube/machines
	I0415 23:57:06.939834   25488 main.go:141] libmachine: (ha-694782-m03) Setting executable bit set on /home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m03 (perms=drwx------)
	I0415 23:57:06.939847   25488 main.go:141] libmachine: (ha-694782-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18647-7542/.minikube
	I0415 23:57:06.939859   25488 main.go:141] libmachine: (ha-694782-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18647-7542
	I0415 23:57:06.939867   25488 main.go:141] libmachine: (ha-694782-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0415 23:57:06.939876   25488 main.go:141] libmachine: (ha-694782-m03) DBG | Checking permissions on dir: /home/jenkins
	I0415 23:57:06.939888   25488 main.go:141] libmachine: (ha-694782-m03) DBG | Checking permissions on dir: /home
	I0415 23:57:06.939903   25488 main.go:141] libmachine: (ha-694782-m03) Setting executable bit set on /home/jenkins/minikube-integration/18647-7542/.minikube/machines (perms=drwxr-xr-x)
	I0415 23:57:06.939911   25488 main.go:141] libmachine: (ha-694782-m03) DBG | Skipping /home - not owner
	I0415 23:57:06.939925   25488 main.go:141] libmachine: (ha-694782-m03) Setting executable bit set on /home/jenkins/minikube-integration/18647-7542/.minikube (perms=drwxr-xr-x)
	I0415 23:57:06.939943   25488 main.go:141] libmachine: (ha-694782-m03) Setting executable bit set on /home/jenkins/minikube-integration/18647-7542 (perms=drwxrwxr-x)
	I0415 23:57:06.939960   25488 main.go:141] libmachine: (ha-694782-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0415 23:57:06.939975   25488 main.go:141] libmachine: (ha-694782-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
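Before the domain is defined, the driver generates a per-machine SSH key (id_rsa) and tightens permissions on the machine directory, which is what the "Creating ssh key" and "Setting executable bit" lines above record. A minimal standard-library sketch of that step (directory layout taken from the log; the OpenSSH public-key half and helper name are illustrative, not minikube's code):

	package sketch

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"encoding/pem"
		"os"
		"path/filepath"
	)

	// writeMachineKey creates the per-machine SSH private key (id_rsa) under the
	// machine's store directory with the restrictive permissions the log checks
	// for (drwx------ on the directory, -rw------- on the key).
	func writeMachineKey(machineDir string) error {
		if err := os.MkdirAll(machineDir, 0o700); err != nil {
			return err
		}
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return err
		}
		pemBytes := pem.EncodeToMemory(&pem.Block{
			Type:  "RSA PRIVATE KEY",
			Bytes: x509.MarshalPKCS1PrivateKey(key),
		})
		return os.WriteFile(filepath.Join(machineDir, "id_rsa"), pemBytes, 0o600)
	}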
	I0415 23:57:06.939987   25488 main.go:141] libmachine: (ha-694782-m03) Creating domain...
	I0415 23:57:06.940842   25488 main.go:141] libmachine: (ha-694782-m03) define libvirt domain using xml: 
	I0415 23:57:06.940863   25488 main.go:141] libmachine: (ha-694782-m03) <domain type='kvm'>
	I0415 23:57:06.940872   25488 main.go:141] libmachine: (ha-694782-m03)   <name>ha-694782-m03</name>
	I0415 23:57:06.940880   25488 main.go:141] libmachine: (ha-694782-m03)   <memory unit='MiB'>2200</memory>
	I0415 23:57:06.940928   25488 main.go:141] libmachine: (ha-694782-m03)   <vcpu>2</vcpu>
	I0415 23:57:06.940954   25488 main.go:141] libmachine: (ha-694782-m03)   <features>
	I0415 23:57:06.940974   25488 main.go:141] libmachine: (ha-694782-m03)     <acpi/>
	I0415 23:57:06.940993   25488 main.go:141] libmachine: (ha-694782-m03)     <apic/>
	I0415 23:57:06.941006   25488 main.go:141] libmachine: (ha-694782-m03)     <pae/>
	I0415 23:57:06.941013   25488 main.go:141] libmachine: (ha-694782-m03)     
	I0415 23:57:06.941022   25488 main.go:141] libmachine: (ha-694782-m03)   </features>
	I0415 23:57:06.941027   25488 main.go:141] libmachine: (ha-694782-m03)   <cpu mode='host-passthrough'>
	I0415 23:57:06.941035   25488 main.go:141] libmachine: (ha-694782-m03)   
	I0415 23:57:06.941041   25488 main.go:141] libmachine: (ha-694782-m03)   </cpu>
	I0415 23:57:06.941050   25488 main.go:141] libmachine: (ha-694782-m03)   <os>
	I0415 23:57:06.941061   25488 main.go:141] libmachine: (ha-694782-m03)     <type>hvm</type>
	I0415 23:57:06.941077   25488 main.go:141] libmachine: (ha-694782-m03)     <boot dev='cdrom'/>
	I0415 23:57:06.941093   25488 main.go:141] libmachine: (ha-694782-m03)     <boot dev='hd'/>
	I0415 23:57:06.941100   25488 main.go:141] libmachine: (ha-694782-m03)     <bootmenu enable='no'/>
	I0415 23:57:06.941106   25488 main.go:141] libmachine: (ha-694782-m03)   </os>
	I0415 23:57:06.941112   25488 main.go:141] libmachine: (ha-694782-m03)   <devices>
	I0415 23:57:06.941120   25488 main.go:141] libmachine: (ha-694782-m03)     <disk type='file' device='cdrom'>
	I0415 23:57:06.941129   25488 main.go:141] libmachine: (ha-694782-m03)       <source file='/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m03/boot2docker.iso'/>
	I0415 23:57:06.941138   25488 main.go:141] libmachine: (ha-694782-m03)       <target dev='hdc' bus='scsi'/>
	I0415 23:57:06.941143   25488 main.go:141] libmachine: (ha-694782-m03)       <readonly/>
	I0415 23:57:06.941149   25488 main.go:141] libmachine: (ha-694782-m03)     </disk>
	I0415 23:57:06.941171   25488 main.go:141] libmachine: (ha-694782-m03)     <disk type='file' device='disk'>
	I0415 23:57:06.941188   25488 main.go:141] libmachine: (ha-694782-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0415 23:57:06.941202   25488 main.go:141] libmachine: (ha-694782-m03)       <source file='/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m03/ha-694782-m03.rawdisk'/>
	I0415 23:57:06.941217   25488 main.go:141] libmachine: (ha-694782-m03)       <target dev='hda' bus='virtio'/>
	I0415 23:57:06.941226   25488 main.go:141] libmachine: (ha-694782-m03)     </disk>
	I0415 23:57:06.941231   25488 main.go:141] libmachine: (ha-694782-m03)     <interface type='network'>
	I0415 23:57:06.941239   25488 main.go:141] libmachine: (ha-694782-m03)       <source network='mk-ha-694782'/>
	I0415 23:57:06.941244   25488 main.go:141] libmachine: (ha-694782-m03)       <model type='virtio'/>
	I0415 23:57:06.941252   25488 main.go:141] libmachine: (ha-694782-m03)     </interface>
	I0415 23:57:06.941260   25488 main.go:141] libmachine: (ha-694782-m03)     <interface type='network'>
	I0415 23:57:06.941273   25488 main.go:141] libmachine: (ha-694782-m03)       <source network='default'/>
	I0415 23:57:06.941284   25488 main.go:141] libmachine: (ha-694782-m03)       <model type='virtio'/>
	I0415 23:57:06.941299   25488 main.go:141] libmachine: (ha-694782-m03)     </interface>
	I0415 23:57:06.941318   25488 main.go:141] libmachine: (ha-694782-m03)     <serial type='pty'>
	I0415 23:57:06.941331   25488 main.go:141] libmachine: (ha-694782-m03)       <target port='0'/>
	I0415 23:57:06.941345   25488 main.go:141] libmachine: (ha-694782-m03)     </serial>
	I0415 23:57:06.941361   25488 main.go:141] libmachine: (ha-694782-m03)     <console type='pty'>
	I0415 23:57:06.941374   25488 main.go:141] libmachine: (ha-694782-m03)       <target type='serial' port='0'/>
	I0415 23:57:06.941400   25488 main.go:141] libmachine: (ha-694782-m03)     </console>
	I0415 23:57:06.941421   25488 main.go:141] libmachine: (ha-694782-m03)     <rng model='virtio'>
	I0415 23:57:06.941436   25488 main.go:141] libmachine: (ha-694782-m03)       <backend model='random'>/dev/random</backend>
	I0415 23:57:06.941448   25488 main.go:141] libmachine: (ha-694782-m03)     </rng>
	I0415 23:57:06.941461   25488 main.go:141] libmachine: (ha-694782-m03)     
	I0415 23:57:06.941472   25488 main.go:141] libmachine: (ha-694782-m03)     
	I0415 23:57:06.941481   25488 main.go:141] libmachine: (ha-694782-m03)   </devices>
	I0415 23:57:06.941492   25488 main.go:141] libmachine: (ha-694782-m03) </domain>
	I0415 23:57:06.941503   25488 main.go:141] libmachine: (ha-694782-m03) 
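The XML above describes the new guest: 2200 MiB of RAM, 2 vCPUs, the boot2docker ISO as a bootable CD-ROM, the raw disk image, and two virtio NICs, one on the cluster network mk-ha-694782 and one on libvirt's default network. The driver hands this XML to libvirt; the sketch below shells out to virsh instead of using libvirt bindings, a simplification that still ends with a defined and running domain:

	package sketch

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// defineAndStart writes domainXML to a temp file, registers it with libvirt
	// via `virsh define`, and boots it with `virsh start`, mirroring the
	// "define libvirt domain using xml" / "Creating domain..." steps above.
	func defineAndStart(name, domainXML string) error {
		f, err := os.CreateTemp("", name+"-*.xml")
		if err != nil {
			return err
		}
		defer os.Remove(f.Name())
		if _, err := f.WriteString(domainXML); err != nil {
			return err
		}
		f.Close()

		for _, args := range [][]string{
			{"define", f.Name()},
			{"start", name},
		} {
			out, err := exec.Command("virsh", args...).CombinedOutput()
			if err != nil {
				return fmt.Errorf("virsh %v: %v: %s", args, err, out)
			}
		}
		return nil
	}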
	I0415 23:57:06.947763   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:80:00:d1 in network default
	I0415 23:57:06.948312   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:06.948352   25488 main.go:141] libmachine: (ha-694782-m03) Ensuring networks are active...
	I0415 23:57:06.949029   25488 main.go:141] libmachine: (ha-694782-m03) Ensuring network default is active
	I0415 23:57:06.949430   25488 main.go:141] libmachine: (ha-694782-m03) Ensuring network mk-ha-694782 is active
	I0415 23:57:06.949932   25488 main.go:141] libmachine: (ha-694782-m03) Getting domain xml...
	I0415 23:57:06.950780   25488 main.go:141] libmachine: (ha-694782-m03) Creating domain...
	I0415 23:57:08.146089   25488 main.go:141] libmachine: (ha-694782-m03) Waiting to get IP...
	I0415 23:57:08.146865   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:08.147249   25488 main.go:141] libmachine: (ha-694782-m03) DBG | unable to find current IP address of domain ha-694782-m03 in network mk-ha-694782
	I0415 23:57:08.147298   25488 main.go:141] libmachine: (ha-694782-m03) DBG | I0415 23:57:08.147223   26272 retry.go:31] will retry after 195.294878ms: waiting for machine to come up
	I0415 23:57:08.344769   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:08.345348   25488 main.go:141] libmachine: (ha-694782-m03) DBG | unable to find current IP address of domain ha-694782-m03 in network mk-ha-694782
	I0415 23:57:08.345379   25488 main.go:141] libmachine: (ha-694782-m03) DBG | I0415 23:57:08.345290   26272 retry.go:31] will retry after 281.825029ms: waiting for machine to come up
	I0415 23:57:08.628634   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:08.629005   25488 main.go:141] libmachine: (ha-694782-m03) DBG | unable to find current IP address of domain ha-694782-m03 in network mk-ha-694782
	I0415 23:57:08.629037   25488 main.go:141] libmachine: (ha-694782-m03) DBG | I0415 23:57:08.628953   26272 retry.go:31] will retry after 306.772461ms: waiting for machine to come up
	I0415 23:57:08.937440   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:08.937911   25488 main.go:141] libmachine: (ha-694782-m03) DBG | unable to find current IP address of domain ha-694782-m03 in network mk-ha-694782
	I0415 23:57:08.937939   25488 main.go:141] libmachine: (ha-694782-m03) DBG | I0415 23:57:08.937869   26272 retry.go:31] will retry after 407.267476ms: waiting for machine to come up
	I0415 23:57:09.346382   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:09.346839   25488 main.go:141] libmachine: (ha-694782-m03) DBG | unable to find current IP address of domain ha-694782-m03 in network mk-ha-694782
	I0415 23:57:09.346935   25488 main.go:141] libmachine: (ha-694782-m03) DBG | I0415 23:57:09.346785   26272 retry.go:31] will retry after 748.889119ms: waiting for machine to come up
	I0415 23:57:10.097393   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:10.097864   25488 main.go:141] libmachine: (ha-694782-m03) DBG | unable to find current IP address of domain ha-694782-m03 in network mk-ha-694782
	I0415 23:57:10.097894   25488 main.go:141] libmachine: (ha-694782-m03) DBG | I0415 23:57:10.097802   26272 retry.go:31] will retry after 801.012058ms: waiting for machine to come up
	I0415 23:57:10.900916   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:10.901326   25488 main.go:141] libmachine: (ha-694782-m03) DBG | unable to find current IP address of domain ha-694782-m03 in network mk-ha-694782
	I0415 23:57:10.901890   25488 main.go:141] libmachine: (ha-694782-m03) DBG | I0415 23:57:10.901296   26272 retry.go:31] will retry after 1.005790352s: waiting for machine to come up
	I0415 23:57:11.909288   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:11.909764   25488 main.go:141] libmachine: (ha-694782-m03) DBG | unable to find current IP address of domain ha-694782-m03 in network mk-ha-694782
	I0415 23:57:11.909783   25488 main.go:141] libmachine: (ha-694782-m03) DBG | I0415 23:57:11.909716   26272 retry.go:31] will retry after 1.299462671s: waiting for machine to come up
	I0415 23:57:13.210322   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:13.210812   25488 main.go:141] libmachine: (ha-694782-m03) DBG | unable to find current IP address of domain ha-694782-m03 in network mk-ha-694782
	I0415 23:57:13.210842   25488 main.go:141] libmachine: (ha-694782-m03) DBG | I0415 23:57:13.210767   26272 retry.go:31] will retry after 1.14091487s: waiting for machine to come up
	I0415 23:57:14.352805   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:14.353277   25488 main.go:141] libmachine: (ha-694782-m03) DBG | unable to find current IP address of domain ha-694782-m03 in network mk-ha-694782
	I0415 23:57:14.353312   25488 main.go:141] libmachine: (ha-694782-m03) DBG | I0415 23:57:14.353248   26272 retry.go:31] will retry after 1.449833548s: waiting for machine to come up
	I0415 23:57:15.805237   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:15.805651   25488 main.go:141] libmachine: (ha-694782-m03) DBG | unable to find current IP address of domain ha-694782-m03 in network mk-ha-694782
	I0415 23:57:15.805690   25488 main.go:141] libmachine: (ha-694782-m03) DBG | I0415 23:57:15.805615   26272 retry.go:31] will retry after 2.394178992s: waiting for machine to come up
	I0415 23:57:18.202221   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:18.202526   25488 main.go:141] libmachine: (ha-694782-m03) DBG | unable to find current IP address of domain ha-694782-m03 in network mk-ha-694782
	I0415 23:57:18.202552   25488 main.go:141] libmachine: (ha-694782-m03) DBG | I0415 23:57:18.202490   26272 retry.go:31] will retry after 2.938714927s: waiting for machine to come up
	I0415 23:57:21.144413   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:21.144796   25488 main.go:141] libmachine: (ha-694782-m03) DBG | unable to find current IP address of domain ha-694782-m03 in network mk-ha-694782
	I0415 23:57:21.144822   25488 main.go:141] libmachine: (ha-694782-m03) DBG | I0415 23:57:21.144764   26272 retry.go:31] will retry after 3.228906937s: waiting for machine to come up
	I0415 23:57:24.374842   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:24.375220   25488 main.go:141] libmachine: (ha-694782-m03) DBG | unable to find current IP address of domain ha-694782-m03 in network mk-ha-694782
	I0415 23:57:24.375251   25488 main.go:141] libmachine: (ha-694782-m03) DBG | I0415 23:57:24.375182   26272 retry.go:31] will retry after 3.573523595s: waiting for machine to come up
	I0415 23:57:27.950696   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:27.951198   25488 main.go:141] libmachine: (ha-694782-m03) Found IP for machine: 192.168.39.202
	I0415 23:57:27.951230   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has current primary IP address 192.168.39.202 and MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
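Once the domain is running, the driver polls libvirt for a DHCP lease matching the guest's MAC address, backing off between attempts; that loop produces the "will retry after ..." lines above until 192.168.39.202 appears. A sketch of the polling shape (the lookup callback, timeout, and growth factor are illustrative, not minikube's exact values):

	package sketch

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitForIP polls lookup until it returns an address or the deadline passes.
	// The growing, jittered sleep mirrors the "will retry after ..." lines; lookup
	// would wrap something like parsing `virsh net-dhcp-leases` for the guest's
	// MAC address (not shown here).
	func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		wait := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookup(); err == nil && ip != "" {
				return ip, nil
			}
			// add jitter and grow the interval, capping it at a few seconds
			sleep := wait + time.Duration(rand.Int63n(int64(wait/2)))
			fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			if wait < 4*time.Second {
				wait = wait * 3 / 2
			}
		}
		return "", errors.New("timed out waiting for an IP address")
	}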
	I0415 23:57:27.951239   25488 main.go:141] libmachine: (ha-694782-m03) Reserving static IP address...
	I0415 23:57:27.951582   25488 main.go:141] libmachine: (ha-694782-m03) DBG | unable to find host DHCP lease matching {name: "ha-694782-m03", mac: "52:54:00:fc:a7:e5", ip: "192.168.39.202"} in network mk-ha-694782
	I0415 23:57:28.021023   25488 main.go:141] libmachine: (ha-694782-m03) DBG | Getting to WaitForSSH function...
	I0415 23:57:28.021056   25488 main.go:141] libmachine: (ha-694782-m03) Reserved static IP address: 192.168.39.202
	I0415 23:57:28.021069   25488 main.go:141] libmachine: (ha-694782-m03) Waiting for SSH to be available...
	I0415 23:57:28.023528   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:28.023940   25488 main.go:141] libmachine: (ha-694782-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a7:e5", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:57:21 +0000 UTC Type:0 Mac:52:54:00:fc:a7:e5 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:minikube Clientid:01:52:54:00:fc:a7:e5}
	I0415 23:57:28.023972   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:28.024133   25488 main.go:141] libmachine: (ha-694782-m03) DBG | Using SSH client type: external
	I0415 23:57:28.024161   25488 main.go:141] libmachine: (ha-694782-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m03/id_rsa (-rw-------)
	I0415 23:57:28.024196   25488 main.go:141] libmachine: (ha-694782-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.202 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0415 23:57:28.024229   25488 main.go:141] libmachine: (ha-694782-m03) DBG | About to run SSH command:
	I0415 23:57:28.024247   25488 main.go:141] libmachine: (ha-694782-m03) DBG | exit 0
	I0415 23:57:28.149532   25488 main.go:141] libmachine: (ha-694782-m03) DBG | SSH cmd err, output: <nil>: 
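WaitForSSH here uses the "external" client type: it repeatedly runs /usr/bin/ssh with the non-interactive options shown above until `exit 0` succeeds, which proves sshd is up and the injected key is accepted. A compact equivalent (same flags as in the log; the helper name is made up):

	package sketch

	import (
		"os/exec"
	)

	// sshReachable runs a no-op command over SSH with the same non-interactive
	// options the log shows for WaitForSSH. A zero exit status means sshd is up
	// and the key is accepted.
	func sshReachable(user, host, keyPath string) bool {
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "PasswordAuthentication=no",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"-p", "22",
			user + "@" + host,
			"exit 0",
		}
		return exec.Command("ssh", args...).Run() == nil
	}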
	I0415 23:57:28.149836   25488 main.go:141] libmachine: (ha-694782-m03) KVM machine creation complete!
	I0415 23:57:28.150280   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetConfigRaw
	I0415 23:57:28.150866   25488 main.go:141] libmachine: (ha-694782-m03) Calling .DriverName
	I0415 23:57:28.151102   25488 main.go:141] libmachine: (ha-694782-m03) Calling .DriverName
	I0415 23:57:28.151298   25488 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0415 23:57:28.151330   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetState
	I0415 23:57:28.152508   25488 main.go:141] libmachine: Detecting operating system of created instance...
	I0415 23:57:28.152525   25488 main.go:141] libmachine: Waiting for SSH to be available...
	I0415 23:57:28.152532   25488 main.go:141] libmachine: Getting to WaitForSSH function...
	I0415 23:57:28.152540   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHHostname
	I0415 23:57:28.155001   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:28.155403   25488 main.go:141] libmachine: (ha-694782-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a7:e5", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:57:21 +0000 UTC Type:0 Mac:52:54:00:fc:a7:e5 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-694782-m03 Clientid:01:52:54:00:fc:a7:e5}
	I0415 23:57:28.155432   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:28.155565   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHPort
	I0415 23:57:28.155742   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHKeyPath
	I0415 23:57:28.155930   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHKeyPath
	I0415 23:57:28.156070   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHUsername
	I0415 23:57:28.156225   25488 main.go:141] libmachine: Using SSH client type: native
	I0415 23:57:28.156427   25488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I0415 23:57:28.156438   25488 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0415 23:57:28.256333   25488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0415 23:57:28.256357   25488 main.go:141] libmachine: Detecting the provisioner...
	I0415 23:57:28.256365   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHHostname
	I0415 23:57:28.259091   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:28.259442   25488 main.go:141] libmachine: (ha-694782-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a7:e5", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:57:21 +0000 UTC Type:0 Mac:52:54:00:fc:a7:e5 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-694782-m03 Clientid:01:52:54:00:fc:a7:e5}
	I0415 23:57:28.259468   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:28.259614   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHPort
	I0415 23:57:28.259771   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHKeyPath
	I0415 23:57:28.259944   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHKeyPath
	I0415 23:57:28.260060   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHUsername
	I0415 23:57:28.260196   25488 main.go:141] libmachine: Using SSH client type: native
	I0415 23:57:28.260385   25488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I0415 23:57:28.260418   25488 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0415 23:57:28.361739   25488 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0415 23:57:28.361805   25488 main.go:141] libmachine: found compatible host: buildroot
	I0415 23:57:28.361820   25488 main.go:141] libmachine: Provisioning with buildroot...
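The provisioner is chosen by running `cat /etc/os-release` on the guest and matching its fields; the Buildroot-based minikube ISO reports ID=buildroot, which selects the buildroot provisioner. A simplified matcher for that output (fallback behaviour is an assumption, not minikube's logic):

	package sketch

	import (
		"bufio"
		"strings"
	)

	// detectProvisioner picks the provisioner from `cat /etc/os-release` output.
	// The Buildroot guest above reports NAME=Buildroot / ID=buildroot, which is
	// what this simplified matcher keys on.
	func detectProvisioner(osRelease string) string {
		sc := bufio.NewScanner(strings.NewReader(osRelease))
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if strings.HasPrefix(line, "ID=") {
				id := strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
				if id == "buildroot" {
					return "buildroot"
				}
				return id
			}
		}
		return "unknown"
	}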
	I0415 23:57:28.361834   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetMachineName
	I0415 23:57:28.362022   25488 buildroot.go:166] provisioning hostname "ha-694782-m03"
	I0415 23:57:28.362050   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetMachineName
	I0415 23:57:28.362227   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHHostname
	I0415 23:57:28.364854   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:28.365242   25488 main.go:141] libmachine: (ha-694782-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a7:e5", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:57:21 +0000 UTC Type:0 Mac:52:54:00:fc:a7:e5 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-694782-m03 Clientid:01:52:54:00:fc:a7:e5}
	I0415 23:57:28.365271   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:28.365425   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHPort
	I0415 23:57:28.365572   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHKeyPath
	I0415 23:57:28.365709   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHKeyPath
	I0415 23:57:28.365840   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHUsername
	I0415 23:57:28.365973   25488 main.go:141] libmachine: Using SSH client type: native
	I0415 23:57:28.366115   25488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I0415 23:57:28.366126   25488 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-694782-m03 && echo "ha-694782-m03" | sudo tee /etc/hostname
	I0415 23:57:28.483531   25488 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-694782-m03
	
	I0415 23:57:28.483559   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHHostname
	I0415 23:57:28.486200   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:28.486555   25488 main.go:141] libmachine: (ha-694782-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a7:e5", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:57:21 +0000 UTC Type:0 Mac:52:54:00:fc:a7:e5 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-694782-m03 Clientid:01:52:54:00:fc:a7:e5}
	I0415 23:57:28.486605   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:28.486800   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHPort
	I0415 23:57:28.487006   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHKeyPath
	I0415 23:57:28.487233   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHKeyPath
	I0415 23:57:28.487396   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHUsername
	I0415 23:57:28.487599   25488 main.go:141] libmachine: Using SSH client type: native
	I0415 23:57:28.487806   25488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I0415 23:57:28.487831   25488 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-694782-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-694782-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-694782-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0415 23:57:28.602817   25488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
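The hostname and /etc/hosts commands above are executed with the "native" SSH client type, i.e. an in-process Go SSH session rather than the external ssh binary. A sketch of such a runner, assuming the golang.org/x/crypto/ssh module (not minikube's actual ssh_runner):

	package sketch

	import (
		"os"

		"golang.org/x/crypto/ssh"
	)

	// runNative executes one command on the guest over an in-process SSH session,
	// which is what the "Using SSH client type: native" lines correspond to. Host
	// key checking is skipped because the VM was just created and its key is not
	// known yet.
	func runNative(user, addr, keyPath, cmd string) ([]byte, error) {
		pemBytes, err := os.ReadFile(keyPath)
		if err != nil {
			return nil, err
		}
		signer, err := ssh.ParsePrivateKey(pemBytes)
		if err != nil {
			return nil, err
		}
		client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(),
		})
		if err != nil {
			return nil, err
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			return nil, err
		}
		defer session.Close()
		return session.CombinedOutput(cmd)
	}

For the step above it would be invoked roughly as runNative("docker", "192.168.39.202:22", idRSAPath, `sudo hostname ha-694782-m03 && echo "ha-694782-m03" | sudo tee /etc/hostname`).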
	I0415 23:57:28.602850   25488 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18647-7542/.minikube CaCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18647-7542/.minikube}
	I0415 23:57:28.602865   25488 buildroot.go:174] setting up certificates
	I0415 23:57:28.602872   25488 provision.go:84] configureAuth start
	I0415 23:57:28.602880   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetMachineName
	I0415 23:57:28.603201   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetIP
	I0415 23:57:28.605867   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:28.606193   25488 main.go:141] libmachine: (ha-694782-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a7:e5", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:57:21 +0000 UTC Type:0 Mac:52:54:00:fc:a7:e5 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-694782-m03 Clientid:01:52:54:00:fc:a7:e5}
	I0415 23:57:28.606218   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:28.606349   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHHostname
	I0415 23:57:28.608370   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:28.608653   25488 main.go:141] libmachine: (ha-694782-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a7:e5", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:57:21 +0000 UTC Type:0 Mac:52:54:00:fc:a7:e5 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-694782-m03 Clientid:01:52:54:00:fc:a7:e5}
	I0415 23:57:28.608674   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:28.608833   25488 provision.go:143] copyHostCerts
	I0415 23:57:28.608859   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem
	I0415 23:57:28.608895   25488 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem, removing ...
	I0415 23:57:28.608903   25488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem
	I0415 23:57:28.608963   25488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem (1082 bytes)
	I0415 23:57:28.609033   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem
	I0415 23:57:28.609049   25488 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem, removing ...
	I0415 23:57:28.609056   25488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem
	I0415 23:57:28.609078   25488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem (1123 bytes)
	I0415 23:57:28.609117   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem
	I0415 23:57:28.609132   25488 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem, removing ...
	I0415 23:57:28.609138   25488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem
	I0415 23:57:28.609177   25488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem (1675 bytes)
	I0415 23:57:28.609240   25488 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem org=jenkins.ha-694782-m03 san=[127.0.0.1 192.168.39.202 ha-694782-m03 localhost minikube]
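The machine server certificate is issued from the shared CA with the SANs listed above (127.0.0.1, the node IP, the hostname, localhost, minikube); later in the run the same pattern is used for the apiserver certificate, whose SANs also include the service IP and the HA virtual IP 192.168.39.254. A condensed stdlib sketch of issuing such a SAN certificate from an existing CA (subject, key size, and lifetime are placeholders, not minikube's values):

	package sketch

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	// signServingCert issues a server certificate for the given SANs, signed by
	// the provided CA key pair, and returns the DER bytes plus the new key.
	func signServingCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, cn string, dnsNames, ips []string) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{CommonName: cn}, // placeholder subject
			NotBefore:    time.Now().Add(-time.Hour),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     dnsNames,
		}
		for _, s := range ips {
			if ip := net.ParseIP(s); ip != nil {
				tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
			}
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		if err != nil {
			return nil, nil, err
		}
		return der, key, nil
	}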
	I0415 23:57:28.872793   25488 provision.go:177] copyRemoteCerts
	I0415 23:57:28.872843   25488 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0415 23:57:28.872864   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHHostname
	I0415 23:57:28.875572   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:28.875995   25488 main.go:141] libmachine: (ha-694782-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a7:e5", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:57:21 +0000 UTC Type:0 Mac:52:54:00:fc:a7:e5 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-694782-m03 Clientid:01:52:54:00:fc:a7:e5}
	I0415 23:57:28.876025   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:28.876251   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHPort
	I0415 23:57:28.876446   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHKeyPath
	I0415 23:57:28.876604   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHUsername
	I0415 23:57:28.876774   25488 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m03/id_rsa Username:docker}
	I0415 23:57:28.960633   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0415 23:57:28.960705   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0415 23:57:28.985245   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0415 23:57:28.985300   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0415 23:57:29.009855   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0415 23:57:29.009923   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0415 23:57:29.035029   25488 provision.go:87] duration metric: took 432.148559ms to configureAuth
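copyRemoteCerts pushes ca.pem, server.pem, and server-key.pem into /etc/docker on the guest over SSH. The sketch below gets the same result by calling the scp and ssh binaries, staging the file in /tmp and moving it into place with sudo; minikube's ssh_runner streams the bytes over an existing session instead, so treat this as an approximation:

	package sketch

	import (
		"fmt"
		"os/exec"
		"path"
	)

	// copyCert copies a PEM file into the guest and installs it at remotePath.
	// It stages the file in /tmp first and moves it with sudo, because the
	// docker user cannot write to /etc/docker directly.
	func copyCert(keyPath, user, host, localPath, remotePath string) error {
		tmp := "/tmp/" + path.Base(remotePath)
		scp := exec.Command("scp", "-i", keyPath,
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			localPath, fmt.Sprintf("%s@%s:%s", user, host, tmp))
		if out, err := scp.CombinedOutput(); err != nil {
			return fmt.Errorf("scp: %v: %s", err, out)
		}
		mv := exec.Command("ssh", "-i", keyPath, user+"@"+host,
			"sudo mkdir -p "+path.Dir(remotePath)+" && sudo mv "+tmp+" "+remotePath)
		if out, err := mv.CombinedOutput(); err != nil {
			return fmt.Errorf("install cert: %v: %s", err, out)
		}
		return nil
	}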
	I0415 23:57:29.035052   25488 buildroot.go:189] setting minikube options for container-runtime
	I0415 23:57:29.035269   25488 config.go:182] Loaded profile config "ha-694782": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0415 23:57:29.035345   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHHostname
	I0415 23:57:29.038305   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:29.038750   25488 main.go:141] libmachine: (ha-694782-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a7:e5", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:57:21 +0000 UTC Type:0 Mac:52:54:00:fc:a7:e5 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-694782-m03 Clientid:01:52:54:00:fc:a7:e5}
	I0415 23:57:29.038779   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:29.038995   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHPort
	I0415 23:57:29.039168   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHKeyPath
	I0415 23:57:29.039344   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHKeyPath
	I0415 23:57:29.039529   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHUsername
	I0415 23:57:29.039672   25488 main.go:141] libmachine: Using SSH client type: native
	I0415 23:57:29.039837   25488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I0415 23:57:29.039851   25488 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0415 23:57:29.309640   25488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0415 23:57:29.309674   25488 main.go:141] libmachine: Checking connection to Docker...
	I0415 23:57:29.309691   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetURL
	I0415 23:57:29.311113   25488 main.go:141] libmachine: (ha-694782-m03) DBG | Using libvirt version 6000000
	I0415 23:57:29.313355   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:29.313714   25488 main.go:141] libmachine: (ha-694782-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a7:e5", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:57:21 +0000 UTC Type:0 Mac:52:54:00:fc:a7:e5 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-694782-m03 Clientid:01:52:54:00:fc:a7:e5}
	I0415 23:57:29.313735   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:29.313950   25488 main.go:141] libmachine: Docker is up and running!
	I0415 23:57:29.313966   25488 main.go:141] libmachine: Reticulating splines...
	I0415 23:57:29.313974   25488 client.go:171] duration metric: took 22.717310883s to LocalClient.Create
	I0415 23:57:29.314003   25488 start.go:167] duration metric: took 22.717376374s to libmachine.API.Create "ha-694782"
	I0415 23:57:29.314015   25488 start.go:293] postStartSetup for "ha-694782-m03" (driver="kvm2")
	I0415 23:57:29.314078   25488 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0415 23:57:29.314110   25488 main.go:141] libmachine: (ha-694782-m03) Calling .DriverName
	I0415 23:57:29.314353   25488 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0415 23:57:29.314374   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHHostname
	I0415 23:57:29.316416   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:29.316723   25488 main.go:141] libmachine: (ha-694782-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a7:e5", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:57:21 +0000 UTC Type:0 Mac:52:54:00:fc:a7:e5 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-694782-m03 Clientid:01:52:54:00:fc:a7:e5}
	I0415 23:57:29.316744   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:29.316946   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHPort
	I0415 23:57:29.317102   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHKeyPath
	I0415 23:57:29.317271   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHUsername
	I0415 23:57:29.317407   25488 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m03/id_rsa Username:docker}
	I0415 23:57:29.402248   25488 ssh_runner.go:195] Run: cat /etc/os-release
	I0415 23:57:29.406836   25488 info.go:137] Remote host: Buildroot 2023.02.9
	I0415 23:57:29.406857   25488 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/addons for local assets ...
	I0415 23:57:29.406919   25488 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/files for local assets ...
	I0415 23:57:29.406984   25488 filesync.go:149] local asset: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem -> 148972.pem in /etc/ssl/certs
	I0415 23:57:29.406993   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem -> /etc/ssl/certs/148972.pem
	I0415 23:57:29.407068   25488 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0415 23:57:29.418115   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /etc/ssl/certs/148972.pem (1708 bytes)
	I0415 23:57:29.444270   25488 start.go:296] duration metric: took 130.19682ms for postStartSetup
	I0415 23:57:29.444335   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetConfigRaw
	I0415 23:57:29.444968   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetIP
	I0415 23:57:29.447458   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:29.447868   25488 main.go:141] libmachine: (ha-694782-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a7:e5", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:57:21 +0000 UTC Type:0 Mac:52:54:00:fc:a7:e5 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-694782-m03 Clientid:01:52:54:00:fc:a7:e5}
	I0415 23:57:29.447903   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:29.448153   25488 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/config.json ...
	I0415 23:57:29.448381   25488 start.go:128] duration metric: took 22.869241647s to createHost
	I0415 23:57:29.448403   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHHostname
	I0415 23:57:29.450452   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:29.450762   25488 main.go:141] libmachine: (ha-694782-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a7:e5", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:57:21 +0000 UTC Type:0 Mac:52:54:00:fc:a7:e5 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-694782-m03 Clientid:01:52:54:00:fc:a7:e5}
	I0415 23:57:29.450782   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:29.450949   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHPort
	I0415 23:57:29.451117   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHKeyPath
	I0415 23:57:29.451290   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHKeyPath
	I0415 23:57:29.451426   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHUsername
	I0415 23:57:29.451593   25488 main.go:141] libmachine: Using SSH client type: native
	I0415 23:57:29.451741   25488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I0415 23:57:29.451753   25488 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0415 23:57:29.554079   25488 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713225449.533957299
	
	I0415 23:57:29.554100   25488 fix.go:216] guest clock: 1713225449.533957299
	I0415 23:57:29.554109   25488 fix.go:229] Guest: 2024-04-15 23:57:29.533957299 +0000 UTC Remote: 2024-04-15 23:57:29.448393913 +0000 UTC m=+158.888028227 (delta=85.563386ms)
	I0415 23:57:29.554126   25488 fix.go:200] guest clock delta is within tolerance: 85.563386ms
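After createHost finishes, the guest clock is compared with the host clock; here the ~85 ms delta is within tolerance, so no time sync is forced on the guest. The check itself is just a bounded absolute difference, roughly:

	package sketch

	import "time"

	// clockDeltaOK reports the absolute skew between the guest and host clocks
	// and whether it falls within the given tolerance, as in the "guest clock
	// delta is within tolerance" line above.
	func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}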
	I0415 23:57:29.554132   25488 start.go:83] releasing machines lock for "ha-694782-m03", held for 22.975147828s
	I0415 23:57:29.554154   25488 main.go:141] libmachine: (ha-694782-m03) Calling .DriverName
	I0415 23:57:29.554388   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetIP
	I0415 23:57:29.556642   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:29.557028   25488 main.go:141] libmachine: (ha-694782-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a7:e5", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:57:21 +0000 UTC Type:0 Mac:52:54:00:fc:a7:e5 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-694782-m03 Clientid:01:52:54:00:fc:a7:e5}
	I0415 23:57:29.557058   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:29.559465   25488 out.go:177] * Found network options:
	I0415 23:57:29.560951   25488 out.go:177]   - NO_PROXY=192.168.39.41,192.168.39.42
	W0415 23:57:29.562166   25488 proxy.go:119] fail to check proxy env: Error ip not in block
	W0415 23:57:29.562201   25488 proxy.go:119] fail to check proxy env: Error ip not in block
	I0415 23:57:29.562217   25488 main.go:141] libmachine: (ha-694782-m03) Calling .DriverName
	I0415 23:57:29.562677   25488 main.go:141] libmachine: (ha-694782-m03) Calling .DriverName
	I0415 23:57:29.562864   25488 main.go:141] libmachine: (ha-694782-m03) Calling .DriverName
	I0415 23:57:29.562967   25488 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0415 23:57:29.563005   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHHostname
	W0415 23:57:29.563039   25488 proxy.go:119] fail to check proxy env: Error ip not in block
	W0415 23:57:29.563062   25488 proxy.go:119] fail to check proxy env: Error ip not in block
	I0415 23:57:29.563131   25488 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0415 23:57:29.563153   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHHostname
	I0415 23:57:29.565514   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:29.565763   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:29.565936   25488 main.go:141] libmachine: (ha-694782-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a7:e5", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:57:21 +0000 UTC Type:0 Mac:52:54:00:fc:a7:e5 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-694782-m03 Clientid:01:52:54:00:fc:a7:e5}
	I0415 23:57:29.565962   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:29.566088   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHPort
	I0415 23:57:29.566222   25488 main.go:141] libmachine: (ha-694782-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a7:e5", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:57:21 +0000 UTC Type:0 Mac:52:54:00:fc:a7:e5 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-694782-m03 Clientid:01:52:54:00:fc:a7:e5}
	I0415 23:57:29.566248   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:29.566252   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHKeyPath
	I0415 23:57:29.566409   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHUsername
	I0415 23:57:29.566468   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHPort
	I0415 23:57:29.566551   25488 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m03/id_rsa Username:docker}
	I0415 23:57:29.566647   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHKeyPath
	I0415 23:57:29.566772   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHUsername
	I0415 23:57:29.566924   25488 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m03/id_rsa Username:docker}
	I0415 23:57:29.799727   25488 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0415 23:57:29.806086   25488 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0415 23:57:29.806138   25488 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0415 23:57:29.825428   25488 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0415 23:57:29.825450   25488 start.go:494] detecting cgroup driver to use...
	I0415 23:57:29.825518   25488 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0415 23:57:29.844091   25488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 23:57:29.860695   25488 docker.go:217] disabling cri-docker service (if available) ...
	I0415 23:57:29.860751   25488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0415 23:57:29.875728   25488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0415 23:57:29.889960   25488 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0415 23:57:30.014990   25488 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0415 23:57:30.165809   25488 docker.go:233] disabling docker service ...
	I0415 23:57:30.165883   25488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0415 23:57:30.181877   25488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0415 23:57:30.197880   25488 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0415 23:57:30.343229   25488 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0415 23:57:30.471525   25488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0415 23:57:30.486105   25488 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 23:57:30.505857   25488 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0415 23:57:30.505926   25488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0415 23:57:30.516326   25488 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0415 23:57:30.516369   25488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0415 23:57:30.527547   25488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0415 23:57:30.538826   25488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0415 23:57:30.549726   25488 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0415 23:57:30.561383   25488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0415 23:57:30.572164   25488 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0415 23:57:30.590351   25488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
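The sed commands above pin the pause image to registry.k8s.io/pause:3.9 and switch CRI-O's cgroup manager to cgroupfs in the 02-crio.conf drop-in (alongside the conmon_cgroup and unprivileged-port sysctl edits). The two main substitutions, done in Go rather than sed purely for illustration:

	package sketch

	import (
		"os"
		"regexp"
	)

	var (
		pauseRe  = regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		cgroupRe = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	)

	// rewriteCrioConf applies the same two substitutions the log performs with
	// sed: pin the pause image and switch the cgroup manager to cgroupfs. The
	// path would be /etc/crio/crio.conf.d/02-crio.conf as in the log.
	func rewriteCrioConf(path, pauseImage string) error {
		b, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		out := pauseRe.ReplaceAll(b, []byte(`pause_image = "`+pauseImage+`"`))
		out = cgroupRe.ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
		return os.WriteFile(path, out, 0o644)
	}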
	I0415 23:57:30.601575   25488 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0415 23:57:30.613199   25488 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0415 23:57:30.613265   25488 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0415 23:57:30.628406   25488 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0415 23:57:30.638978   25488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 23:57:30.750989   25488 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0415 23:57:30.897712   25488 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0415 23:57:30.897780   25488 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0415 23:57:30.902369   25488 start.go:562] Will wait 60s for crictl version
	I0415 23:57:30.902412   25488 ssh_runner.go:195] Run: which crictl
	I0415 23:57:30.906038   25488 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0415 23:57:30.942946   25488 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0415 23:57:30.943006   25488 ssh_runner.go:195] Run: crio --version
	I0415 23:57:30.972511   25488 ssh_runner.go:195] Run: crio --version
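After restarting CRI-O, the tooling waits up to 60s for /var/run/crio/crio.sock to appear and then queries crictl and crio for their versions before declaring the runtime ready (CRI-O 1.29.1 here). A sketch of that readiness gate (poll interval and helper name are assumptions):

	package sketch

	import (
		"errors"
		"os"
		"os/exec"
		"time"
	)

	// waitForCRI waits for the CRI socket to appear after the restart and then
	// asks crictl for the runtime version, mirroring the "Will wait 60s for
	// socket path" and "Will wait 60s for crictl version" steps above.
	func waitForCRI(sock string, timeout time.Duration) ([]byte, error) {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(sock); err == nil {
				break
			}
			if time.Now().After(deadline) {
				return nil, errors.New("timed out waiting for " + sock)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return exec.Command("sudo", "crictl", "version").CombinedOutput()
	}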
	I0415 23:57:31.002583   25488 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0415 23:57:31.003890   25488 out.go:177]   - env NO_PROXY=192.168.39.41
	I0415 23:57:31.005018   25488 out.go:177]   - env NO_PROXY=192.168.39.41,192.168.39.42
	I0415 23:57:31.006091   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetIP
	I0415 23:57:31.008781   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:31.009173   25488 main.go:141] libmachine: (ha-694782-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a7:e5", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:57:21 +0000 UTC Type:0 Mac:52:54:00:fc:a7:e5 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-694782-m03 Clientid:01:52:54:00:fc:a7:e5}
	I0415 23:57:31.009194   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:31.009440   25488 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0415 23:57:31.013685   25488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
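The one-liner above rewrites /etc/hosts idempotently: it drops any existing host.minikube.internal entry, appends the fresh mapping, and copies the temp file back with sudo. Afterwards the guest should resolve the libvirt gateway by name (a sketch; 192.168.39.1 is the gateway implied by the DHCP lease above):

    getent hosts host.minikube.internal
    # expected, roughly: 192.168.39.1   host.minikube.internal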
	I0415 23:57:31.027856   25488 mustload.go:65] Loading cluster: ha-694782
	I0415 23:57:31.028098   25488 config.go:182] Loaded profile config "ha-694782": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0415 23:57:31.028338   25488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:57:31.028370   25488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:57:31.043709   25488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39309
	I0415 23:57:31.044095   25488 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:57:31.044546   25488 main.go:141] libmachine: Using API Version  1
	I0415 23:57:31.044570   25488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:57:31.044870   25488 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:57:31.045053   25488 main.go:141] libmachine: (ha-694782) Calling .GetState
	I0415 23:57:31.046503   25488 host.go:66] Checking if "ha-694782" exists ...
	I0415 23:57:31.046809   25488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:57:31.046846   25488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:57:31.061411   25488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37687
	I0415 23:57:31.061758   25488 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:57:31.062109   25488 main.go:141] libmachine: Using API Version  1
	I0415 23:57:31.062131   25488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:57:31.062456   25488 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:57:31.062626   25488 main.go:141] libmachine: (ha-694782) Calling .DriverName
	I0415 23:57:31.062773   25488 certs.go:68] Setting up /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782 for IP: 192.168.39.202
	I0415 23:57:31.062786   25488 certs.go:194] generating shared ca certs ...
	I0415 23:57:31.062800   25488 certs.go:226] acquiring lock for ca certs: {Name:mkcfa1570e683d94647c63485e1bbb8cf0788316 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:57:31.062905   25488 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key
	I0415 23:57:31.062944   25488 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key
	I0415 23:57:31.062953   25488 certs.go:256] generating profile certs ...
	I0415 23:57:31.063022   25488 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/client.key
	I0415 23:57:31.063056   25488 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.key.9202ceb3
	I0415 23:57:31.063071   25488 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.crt.9202ceb3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.41 192.168.39.42 192.168.39.202 192.168.39.254]
	I0415 23:57:31.304099   25488 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.crt.9202ceb3 ...
	I0415 23:57:31.304128   25488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.crt.9202ceb3: {Name:mk5d93d5502ef9674a3a4ff2b2b025bc5f57c78a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:57:31.304287   25488 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.key.9202ceb3 ...
	I0415 23:57:31.304300   25488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.key.9202ceb3: {Name:mk6251073914dc8969df401bc5afd5ce24c8c412 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:57:31.304366   25488 certs.go:381] copying /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.crt.9202ceb3 -> /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.crt
	I0415 23:57:31.304482   25488 certs.go:385] copying /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.key.9202ceb3 -> /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.key
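The apiserver certificate is regenerated here because its SAN list has to grow to cover the new member: the IP set above includes the kubernetes service ClusterIP 10.96.0.1, loopback, 10.0.0.1, all three control-plane node IPs and the kube-vip address 192.168.39.254. The SANs on the freshly written cert can be checked from the host (a sketch using openssl against the path in the log):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.crt \
      | grep -A1 'Subject Alternative Name'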
	I0415 23:57:31.304596   25488 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/proxy-client.key
	I0415 23:57:31.304611   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0415 23:57:31.304622   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0415 23:57:31.304636   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0415 23:57:31.304648   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0415 23:57:31.304660   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0415 23:57:31.304670   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0415 23:57:31.304680   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0415 23:57:31.304689   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0415 23:57:31.304729   25488 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem (1338 bytes)
	W0415 23:57:31.304758   25488 certs.go:480] ignoring /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897_empty.pem, impossibly tiny 0 bytes
	I0415 23:57:31.304769   25488 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem (1679 bytes)
	I0415 23:57:31.304792   25488 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem (1082 bytes)
	I0415 23:57:31.304813   25488 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem (1123 bytes)
	I0415 23:57:31.304834   25488 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem (1675 bytes)
	I0415 23:57:31.304868   25488 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem (1708 bytes)
	I0415 23:57:31.304893   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0415 23:57:31.304906   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem -> /usr/share/ca-certificates/14897.pem
	I0415 23:57:31.304920   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem -> /usr/share/ca-certificates/148972.pem
	I0415 23:57:31.304949   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0415 23:57:31.308077   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:57:31.308484   25488 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0415 23:57:31.308508   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:57:31.308670   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0415 23:57:31.308937   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0415 23:57:31.309073   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0415 23:57:31.309254   25488 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782/id_rsa Username:docker}
	I0415 23:57:31.389417   25488 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0415 23:57:31.395013   25488 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0415 23:57:31.407739   25488 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0415 23:57:31.412369   25488 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0415 23:57:31.424670   25488 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0415 23:57:31.434247   25488 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0415 23:57:31.452257   25488 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0415 23:57:31.457507   25488 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0415 23:57:31.469375   25488 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0415 23:57:31.474123   25488 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0415 23:57:31.485208   25488 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0415 23:57:31.494735   25488 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
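sa.pub/sa.key, the front-proxy CA and the etcd CA are cluster-wide signing material; they are read off the primary here precisely so the third control plane shares them, otherwise ServiceAccount tokens minted by one API server would fail validation on another. A cross-node consistency check could look roughly like this (a sketch; it assumes minikube ssh -n accepts the node names shown in this log, and uses the key path from the log):

    for n in ha-694782 ha-694782-m02 ha-694782-m03; do
      minikube -p ha-694782 ssh -n "$n" -- sudo sha256sum /var/lib/minikube/certs/sa.key
    done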
	I0415 23:57:31.507421   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0415 23:57:31.533866   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0415 23:57:31.558206   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0415 23:57:31.581053   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0415 23:57:31.605680   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0415 23:57:31.630618   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0415 23:57:31.655170   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0415 23:57:31.679377   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0415 23:57:31.703003   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0415 23:57:31.728280   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem --> /usr/share/ca-certificates/14897.pem (1338 bytes)
	I0415 23:57:31.752270   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /usr/share/ca-certificates/148972.pem (1708 bytes)
	I0415 23:57:31.776104   25488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0415 23:57:31.792843   25488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0415 23:57:31.810183   25488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0415 23:57:31.827552   25488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0415 23:57:31.845300   25488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0415 23:57:31.862110   25488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0415 23:57:31.879497   25488 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (758 bytes)
	I0415 23:57:31.899791   25488 ssh_runner.go:195] Run: openssl version
	I0415 23:57:31.906073   25488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148972.pem && ln -fs /usr/share/ca-certificates/148972.pem /etc/ssl/certs/148972.pem"
	I0415 23:57:31.918899   25488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148972.pem
	I0415 23:57:31.923589   25488 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 23:49 /usr/share/ca-certificates/148972.pem
	I0415 23:57:31.923638   25488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148972.pem
	I0415 23:57:31.929638   25488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148972.pem /etc/ssl/certs/3ec20f2e.0"
	I0415 23:57:31.941146   25488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0415 23:57:31.951985   25488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0415 23:57:31.956780   25488 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0415 23:57:31.956834   25488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0415 23:57:31.962575   25488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0415 23:57:31.974207   25488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14897.pem && ln -fs /usr/share/ca-certificates/14897.pem /etc/ssl/certs/14897.pem"
	I0415 23:57:31.987019   25488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14897.pem
	I0415 23:57:31.991740   25488 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 23:49 /usr/share/ca-certificates/14897.pem
	I0415 23:57:31.991783   25488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14897.pem
	I0415 23:57:31.998013   25488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14897.pem /etc/ssl/certs/51391683.0"
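The <hash>.0 links created above follow OpenSSL's c_rehash convention: the file name is the certificate's subject hash, which is how anything using the system trust directory locates the CA. One mapping can be reproduced by hand (a sketch; b5213941 is the hash the commands above linked for minikubeCA.pem):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # prints b5213941, matching the /etc/ssl/certs/b5213941.0 symlink above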
	I0415 23:57:32.009793   25488 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0415 23:57:32.014183   25488 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0415 23:57:32.014229   25488 kubeadm.go:928] updating node {m03 192.168.39.202 8443 v1.29.3 crio true true} ...
	I0415 23:57:32.014309   25488 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-694782-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.202
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-694782 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0415 23:57:32.014333   25488 kube-vip.go:111] generating kube-vip config ...
	I0415 23:57:32.014394   25488 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0415 23:57:32.031987   25488 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0415 23:57:32.032055   25488 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
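The manifest above runs kube-vip as a static pod on each control plane: leader election on the plndr-cp-lock lease decides which node advertises the VIP 192.168.39.254 via ARP, and lb_enable load-balances API traffic on port 8443 across the members. Once it is running, the VIP and the current lease holder can be checked like this (a sketch, assuming the profile's kubectl context is named ha-694782):

    curl -k https://192.168.39.254:8443/healthz
    kubectl --context ha-694782 -n kube-system get lease plndr-cp-lock -o jsonpath='{.spec.holderIdentity}'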
	I0415 23:57:32.032107   25488 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0415 23:57:32.042291   25488 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.29.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.29.3': No such file or directory
	
	Initiating transfer...
	I0415 23:57:32.042338   25488 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.29.3
	I0415 23:57:32.051851   25488 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet.sha256
	I0415 23:57:32.051901   25488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 23:57:32.051852   25488 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl.sha256
	I0415 23:57:32.051974   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/linux/amd64/v1.29.3/kubectl -> /var/lib/minikube/binaries/v1.29.3/kubectl
	I0415 23:57:32.051853   25488 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm.sha256
	I0415 23:57:32.051998   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/linux/amd64/v1.29.3/kubeadm -> /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0415 23:57:32.052053   25488 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl
	I0415 23:57:32.052075   25488 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0415 23:57:32.066673   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/linux/amd64/v1.29.3/kubelet -> /var/lib/minikube/binaries/v1.29.3/kubelet
	I0415 23:57:32.066736   25488 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubectl': No such file or directory
	I0415 23:57:32.066755   25488 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet
	I0415 23:57:32.066762   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/cache/linux/amd64/v1.29.3/kubectl --> /var/lib/minikube/binaries/v1.29.3/kubectl (49799168 bytes)
	I0415 23:57:32.066776   25488 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubeadm': No such file or directory
	I0415 23:57:32.066795   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/cache/linux/amd64/v1.29.3/kubeadm --> /var/lib/minikube/binaries/v1.29.3/kubeadm (48340992 bytes)
	I0415 23:57:32.079834   25488 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubelet': No such file or directory
	I0415 23:57:32.079867   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/cache/linux/amd64/v1.29.3/kubelet --> /var/lib/minikube/binaries/v1.29.3/kubelet (111919104 bytes)
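The kubelet/kubeadm/kubectl binaries are fetched for the exact Kubernetes version and pushed over scp; the checksum=file:... suffix in the URLs above means each download is verified against the official .sha256 published alongside the binary. The same verification can be done by hand (a sketch using the kubelet URL from the log):

    curl -LO https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet
    curl -LO https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet.sha256
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check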
	I0415 23:57:33.032463   25488 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0415 23:57:33.043671   25488 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0415 23:57:33.061642   25488 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0415 23:57:33.078893   25488 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0415 23:57:33.096120   25488 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0415 23:57:33.100090   25488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0415 23:57:33.112577   25488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 23:57:33.247188   25488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0415 23:57:33.263863   25488 host.go:66] Checking if "ha-694782" exists ...
	I0415 23:57:33.264226   25488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:57:33.264289   25488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:57:33.279497   25488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41213
	I0415 23:57:33.279968   25488 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:57:33.280440   25488 main.go:141] libmachine: Using API Version  1
	I0415 23:57:33.280468   25488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:57:33.280810   25488 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:57:33.281028   25488 main.go:141] libmachine: (ha-694782) Calling .DriverName
	I0415 23:57:33.281205   25488 start.go:316] joinCluster: &{Name:ha-694782 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-694782 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.42 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 23:57:33.281342   25488 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0415 23:57:33.281366   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0415 23:57:33.284394   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:57:33.284817   25488 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0415 23:57:33.284847   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:57:33.284935   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0415 23:57:33.285116   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0415 23:57:33.285279   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0415 23:57:33.285444   25488 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782/id_rsa Username:docker}
	I0415 23:57:33.458741   25488 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0415 23:57:33.458789   25488 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9d0d7k.0di6w9ehac36jvk9 --discovery-token-ca-cert-hash sha256:b25013486716312e69a135916990864bd549ec06035b8322dd39250241b50fde --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-694782-m03 --control-plane --apiserver-advertise-address=192.168.39.202 --apiserver-bind-port=8443"
	I0415 23:57:58.852667   25488 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9d0d7k.0di6w9ehac36jvk9 --discovery-token-ca-cert-hash sha256:b25013486716312e69a135916990864bd549ec06035b8322dd39250241b50fde --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-694782-m03 --control-plane --apiserver-advertise-address=192.168.39.202 --apiserver-bind-port=8443": (25.393834117s)
	I0415 23:57:58.852715   25488 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0415 23:57:59.273153   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-694782-m03 minikube.k8s.io/updated_at=2024_04_15T23_57_59_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=e98a202b00bb48daab8bbee2f0c7bcf1556f8388 minikube.k8s.io/name=ha-694782 minikube.k8s.io/primary=false
	I0415 23:57:59.399817   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-694782-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0415 23:57:59.532722   25488 start.go:318] duration metric: took 26.251512438s to joinCluster
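The trailing '-' on node-role.kubernetes.io/control-plane:NoSchedule is kubectl's removal syntax, so the taint command above untaints the freshly joined control plane; minikube marks every node Worker:true, and without the removal ordinary pods could not schedule onto ha-694782-m03. The equivalent from a workstation would be roughly (the context name is assumed to match the profile):

    kubectl --context ha-694782 taint nodes ha-694782-m03 node-role.kubernetes.io/control-plane:NoSchedule-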
	I0415 23:57:59.532809   25488 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0415 23:57:59.534248   25488 out.go:177] * Verifying Kubernetes components...
	I0415 23:57:59.533191   25488 config.go:182] Loaded profile config "ha-694782": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0415 23:57:59.535610   25488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 23:57:59.779110   25488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0415 23:57:59.806384   25488 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0415 23:57:59.806729   25488 kapi.go:59] client config for ha-694782: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/client.crt", KeyFile:"/home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/client.key", CAFile:"/home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5e000), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0415 23:57:59.806809   25488 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.41:8443
	I0415 23:57:59.807136   25488 node_ready.go:35] waiting up to 6m0s for node "ha-694782-m03" to be "Ready" ...
	I0415 23:57:59.807232   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:57:59.807245   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:59.807255   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:59.807260   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:59.811378   25488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 23:58:00.307538   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:00.307565   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:00.307577   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:00.307583   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:00.311353   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:00.807659   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:00.807687   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:00.807697   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:00.807702   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:00.811898   25488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 23:58:01.307747   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:01.307786   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:01.307808   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:01.307814   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:01.311831   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:01.808032   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:01.808055   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:01.808074   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:01.808079   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:01.811940   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:01.812513   25488 node_ready.go:53] node "ha-694782-m03" has status "Ready":"False"
	I0415 23:58:02.308451   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:02.308503   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:02.308521   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:02.308531   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:02.312471   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:02.807595   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:02.807623   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:02.807635   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:02.807642   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:02.811097   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:03.307326   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:03.307343   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:03.307351   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:03.307355   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:03.313013   25488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 23:58:03.807460   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:03.807488   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:03.807499   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:03.807507   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:03.816561   25488 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0415 23:58:03.817615   25488 node_ready.go:53] node "ha-694782-m03" has status "Ready":"False"
	I0415 23:58:04.308153   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:04.308181   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:04.308193   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:04.308200   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:04.312664   25488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 23:58:04.807980   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:04.808001   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:04.808008   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:04.808012   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:04.811494   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:04.814230   25488 node_ready.go:49] node "ha-694782-m03" has status "Ready":"True"
	I0415 23:58:04.814260   25488 node_ready.go:38] duration metric: took 5.007094145s for node "ha-694782-m03" to be "Ready" ...
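The wait above is a plain 500ms poll of GET /api/v1/nodes/ha-694782-m03 until the Ready condition reports True, capped at 6m. The same condition can be expressed from a workstation with kubectl (a sketch, assuming the profile's kubectl context is named ha-694782):

    kubectl --context ha-694782 wait --for=condition=Ready node/ha-694782-m03 --timeout=6m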
	I0415 23:58:04.814268   25488 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0415 23:58:04.814316   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods
	I0415 23:58:04.814324   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:04.814331   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:04.814338   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:04.820988   25488 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0415 23:58:04.828368   25488 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-4sgv4" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:04.828444   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-4sgv4
	I0415 23:58:04.828455   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:04.828465   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:04.828472   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:04.831410   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:58:04.832107   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782
	I0415 23:58:04.832120   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:04.832127   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:04.832132   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:04.834561   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:58:04.835134   25488 pod_ready.go:92] pod "coredns-76f75df574-4sgv4" in "kube-system" namespace has status "Ready":"True"
	I0415 23:58:04.835156   25488 pod_ready.go:81] duration metric: took 6.766914ms for pod "coredns-76f75df574-4sgv4" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:04.835167   25488 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-zdc8q" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:04.835226   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-zdc8q
	I0415 23:58:04.835237   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:04.835247   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:04.835257   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:04.837959   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:58:04.838602   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782
	I0415 23:58:04.838621   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:04.838632   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:04.838639   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:04.841400   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:58:04.841779   25488 pod_ready.go:92] pod "coredns-76f75df574-zdc8q" in "kube-system" namespace has status "Ready":"True"
	I0415 23:58:04.841793   25488 pod_ready.go:81] duration metric: took 6.61489ms for pod "coredns-76f75df574-zdc8q" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:04.841800   25488 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-694782" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:04.841856   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782
	I0415 23:58:04.841872   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:04.841881   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:04.841886   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:04.844909   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:04.845376   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782
	I0415 23:58:04.845394   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:04.845400   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:04.845404   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:04.848960   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:04.849416   25488 pod_ready.go:92] pod "etcd-ha-694782" in "kube-system" namespace has status "Ready":"True"
	I0415 23:58:04.849435   25488 pod_ready.go:81] duration metric: took 7.629414ms for pod "etcd-ha-694782" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:04.849446   25488 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-694782-m02" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:04.849509   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m02
	I0415 23:58:04.849519   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:04.849533   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:04.849542   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:04.852335   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:58:04.853344   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:58:04.853360   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:04.853369   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:04.853374   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:04.855966   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:58:04.856466   25488 pod_ready.go:92] pod "etcd-ha-694782-m02" in "kube-system" namespace has status "Ready":"True"
	I0415 23:58:04.856480   25488 pod_ready.go:81] duration metric: took 7.024362ms for pod "etcd-ha-694782-m02" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:04.856487   25488 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-694782-m03" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:05.008910   25488 request.go:629] Waited for 152.326528ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m03
	I0415 23:58:05.008964   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m03
	I0415 23:58:05.008970   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:05.008980   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:05.008986   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:05.012691   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:05.208923   25488 request.go:629] Waited for 195.388645ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:05.208980   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:05.208985   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:05.208993   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:05.209001   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:05.212927   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:05.408693   25488 request.go:629] Waited for 51.682685ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m03
	I0415 23:58:05.408752   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m03
	I0415 23:58:05.408756   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:05.408763   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:05.408767   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:05.412168   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:05.608004   25488 request.go:629] Waited for 195.084833ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:05.608453   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:05.608468   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:05.608484   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:05.608493   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:05.612135   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:05.857539   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m03
	I0415 23:58:05.857558   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:05.857564   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:05.857569   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:05.860698   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:06.008046   25488 request.go:629] Waited for 146.154076ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:06.008116   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:06.008123   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:06.008132   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:06.008138   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:06.011706   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:06.356794   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m03
	I0415 23:58:06.356828   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:06.356839   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:06.356849   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:06.360307   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:06.408258   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:06.408280   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:06.408290   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:06.408297   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:06.412079   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:06.857333   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m03
	I0415 23:58:06.857363   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:06.857371   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:06.857374   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:06.861184   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:06.862122   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:06.862138   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:06.862145   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:06.862149   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:06.864699   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:58:06.865460   25488 pod_ready.go:102] pod "etcd-ha-694782-m03" in "kube-system" namespace has status "Ready":"False"
	I0415 23:58:07.357287   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m03
	I0415 23:58:07.357307   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:07.357316   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:07.357319   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:07.360418   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:07.361273   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:07.361289   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:07.361297   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:07.361301   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:07.363756   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:58:07.856730   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m03
	I0415 23:58:07.856749   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:07.856757   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:07.856762   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:07.860157   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:07.861088   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:07.861106   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:07.861116   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:07.861123   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:07.863692   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:58:08.356931   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m03
	I0415 23:58:08.356951   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:08.356959   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:08.356963   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:08.360395   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:08.361249   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:08.361264   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:08.361271   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:08.361275   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:08.364172   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:58:08.857564   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m03
	I0415 23:58:08.857587   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:08.857595   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:08.857599   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:08.860797   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:08.861802   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:08.861816   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:08.861822   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:08.861825   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:08.864659   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:58:09.356709   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m03
	I0415 23:58:09.356737   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:09.356747   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:09.356754   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:09.360029   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:09.360986   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:09.361000   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:09.361009   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:09.361016   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:09.364149   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:09.364728   25488 pod_ready.go:102] pod "etcd-ha-694782-m03" in "kube-system" namespace has status "Ready":"False"
	I0415 23:58:09.857096   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m03
	I0415 23:58:09.857124   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:09.857132   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:09.857136   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:09.860639   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:09.861518   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:09.861533   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:09.861540   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:09.861544   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:09.864810   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:10.357501   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m03
	I0415 23:58:10.357527   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:10.357538   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:10.357546   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:10.362709   25488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 23:58:10.363605   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:10.363623   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:10.363630   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:10.363634   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:10.366431   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:58:10.857472   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m03
	I0415 23:58:10.857495   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:10.857506   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:10.857512   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:10.860802   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:10.861768   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:10.861784   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:10.861791   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:10.861795   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:10.865015   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:11.357068   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m03
	I0415 23:58:11.357092   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:11.357103   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:11.357110   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:11.362218   25488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 23:58:11.363953   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:11.363969   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:11.363979   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:11.363983   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:11.366728   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:58:11.367333   25488 pod_ready.go:102] pod "etcd-ha-694782-m03" in "kube-system" namespace has status "Ready":"False"
	I0415 23:58:11.857128   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m03
	I0415 23:58:11.857148   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:11.857177   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:11.857182   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:11.861447   25488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 23:58:11.862545   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:11.862568   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:11.862579   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:11.862585   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:11.868678   25488 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0415 23:58:12.357180   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m03
	I0415 23:58:12.357203   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:12.357214   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:12.357223   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:12.360930   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:12.361969   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:12.361988   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:12.361996   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:12.362002   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:12.366440   25488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 23:58:12.856785   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m03
	I0415 23:58:12.856812   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:12.856821   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:12.856824   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:12.861311   25488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 23:58:12.862288   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:12.862304   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:12.862312   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:12.862316   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:12.865415   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:13.356956   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m03
	I0415 23:58:13.356989   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:13.357015   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:13.357019   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:13.360924   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:13.361964   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:13.361980   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:13.361987   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:13.361990   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:13.364986   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:58:13.857583   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m03
	I0415 23:58:13.857607   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:13.857616   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:13.857620   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:13.861613   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:13.862793   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:13.862808   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:13.862815   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:13.862821   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:13.870453   25488 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0415 23:58:13.871146   25488 pod_ready.go:92] pod "etcd-ha-694782-m03" in "kube-system" namespace has status "Ready":"True"
	I0415 23:58:13.871170   25488 pod_ready.go:81] duration metric: took 9.014674778s for pod "etcd-ha-694782-m03" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:13.871193   25488 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-694782" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:13.871256   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-694782
	I0415 23:58:13.871266   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:13.871278   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:13.871288   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:13.875453   25488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 23:58:13.876310   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782
	I0415 23:58:13.876328   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:13.876338   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:13.876342   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:13.882640   25488 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0415 23:58:13.883207   25488 pod_ready.go:92] pod "kube-apiserver-ha-694782" in "kube-system" namespace has status "Ready":"True"
	I0415 23:58:13.883227   25488 pod_ready.go:81] duration metric: took 12.024417ms for pod "kube-apiserver-ha-694782" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:13.883241   25488 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-694782-m02" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:13.883318   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-694782-m02
	I0415 23:58:13.883327   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:13.883337   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:13.883341   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:13.886414   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:13.887078   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:58:13.887096   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:13.887104   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:13.887110   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:13.890590   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:13.891684   25488 pod_ready.go:92] pod "kube-apiserver-ha-694782-m02" in "kube-system" namespace has status "Ready":"True"
	I0415 23:58:13.891710   25488 pod_ready.go:81] duration metric: took 8.453893ms for pod "kube-apiserver-ha-694782-m02" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:13.891730   25488 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-694782-m03" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:13.891797   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-694782-m03
	I0415 23:58:13.891809   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:13.891818   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:13.891824   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:13.896748   25488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 23:58:13.897402   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:13.897418   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:13.897426   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:13.897431   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:13.900299   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:58:13.900711   25488 pod_ready.go:92] pod "kube-apiserver-ha-694782-m03" in "kube-system" namespace has status "Ready":"True"
	I0415 23:58:13.900730   25488 pod_ready.go:81] duration metric: took 8.992398ms for pod "kube-apiserver-ha-694782-m03" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:13.900743   25488 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-694782" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:13.900795   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-694782
	I0415 23:58:13.900805   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:13.900815   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:13.900821   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:13.903736   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:58:14.008731   25488 request.go:629] Waited for 104.30565ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/nodes/ha-694782
	I0415 23:58:14.008808   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782
	I0415 23:58:14.008816   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:14.008832   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:14.008846   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:14.015570   25488 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0415 23:58:14.016940   25488 pod_ready.go:92] pod "kube-controller-manager-ha-694782" in "kube-system" namespace has status "Ready":"True"
	I0415 23:58:14.016963   25488 pod_ready.go:81] duration metric: took 116.211401ms for pod "kube-controller-manager-ha-694782" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:14.016976   25488 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-694782-m02" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:14.208414   25488 request.go:629] Waited for 191.362007ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-694782-m02
	I0415 23:58:14.208468   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-694782-m02
	I0415 23:58:14.208473   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:14.208480   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:14.208485   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:14.212332   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:14.408142   25488 request.go:629] Waited for 195.052046ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:58:14.408209   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:58:14.408214   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:14.408221   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:14.408225   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:14.412165   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:14.412870   25488 pod_ready.go:92] pod "kube-controller-manager-ha-694782-m02" in "kube-system" namespace has status "Ready":"True"
	I0415 23:58:14.412887   25488 pod_ready.go:81] duration metric: took 395.903963ms for pod "kube-controller-manager-ha-694782-m02" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:14.412896   25488 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-694782-m03" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:14.609053   25488 request.go:629] Waited for 196.088761ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-694782-m03
	I0415 23:58:14.609129   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-694782-m03
	I0415 23:58:14.609135   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:14.609143   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:14.609148   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:14.613237   25488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 23:58:14.808485   25488 request.go:629] Waited for 194.371577ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:14.808555   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:14.808560   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:14.808567   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:14.808571   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:14.812404   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:14.812916   25488 pod_ready.go:92] pod "kube-controller-manager-ha-694782-m03" in "kube-system" namespace has status "Ready":"True"
	I0415 23:58:14.812938   25488 pod_ready.go:81] duration metric: took 400.033295ms for pod "kube-controller-manager-ha-694782-m03" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:14.812950   25488 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-45tb9" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:15.008039   25488 request.go:629] Waited for 195.01746ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-proxy-45tb9
	I0415 23:58:15.008127   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-proxy-45tb9
	I0415 23:58:15.008133   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:15.008145   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:15.008155   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:15.011680   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:15.208709   25488 request.go:629] Waited for 196.355673ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:15.208773   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:15.208782   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:15.208792   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:15.208808   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:15.211907   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:15.212667   25488 pod_ready.go:92] pod "kube-proxy-45tb9" in "kube-system" namespace has status "Ready":"True"
	I0415 23:58:15.212683   25488 pod_ready.go:81] duration metric: took 399.725981ms for pod "kube-proxy-45tb9" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:15.212692   25488 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-d46v5" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:15.408191   25488 request.go:629] Waited for 195.445031ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d46v5
	I0415 23:58:15.408240   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d46v5
	I0415 23:58:15.408245   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:15.408253   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:15.408258   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:15.412741   25488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 23:58:15.608519   25488 request.go:629] Waited for 194.910147ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/nodes/ha-694782
	I0415 23:58:15.608582   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782
	I0415 23:58:15.608603   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:15.608614   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:15.608626   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:15.612026   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:15.612855   25488 pod_ready.go:92] pod "kube-proxy-d46v5" in "kube-system" namespace has status "Ready":"True"
	I0415 23:58:15.612875   25488 pod_ready.go:81] duration metric: took 400.176563ms for pod "kube-proxy-d46v5" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:15.612889   25488 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vbfhn" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:15.808585   25488 request.go:629] Waited for 195.634248ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vbfhn
	I0415 23:58:15.808673   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vbfhn
	I0415 23:58:15.808685   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:15.808702   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:15.808712   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:15.812387   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:16.008600   25488 request.go:629] Waited for 195.378151ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:58:16.008669   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:58:16.008674   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:16.008681   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:16.008687   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:16.012254   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:16.013049   25488 pod_ready.go:92] pod "kube-proxy-vbfhn" in "kube-system" namespace has status "Ready":"True"
	I0415 23:58:16.013073   25488 pod_ready.go:81] duration metric: took 400.175319ms for pod "kube-proxy-vbfhn" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:16.013085   25488 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-694782" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:16.208823   25488 request.go:629] Waited for 195.646579ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-694782
	I0415 23:58:16.208899   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-694782
	I0415 23:58:16.208911   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:16.208922   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:16.208931   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:16.212616   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:16.408664   25488 request.go:629] Waited for 195.390521ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/nodes/ha-694782
	I0415 23:58:16.408716   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782
	I0415 23:58:16.408722   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:16.408728   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:16.408733   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:16.412118   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:16.412978   25488 pod_ready.go:92] pod "kube-scheduler-ha-694782" in "kube-system" namespace has status "Ready":"True"
	I0415 23:58:16.412999   25488 pod_ready.go:81] duration metric: took 399.906718ms for pod "kube-scheduler-ha-694782" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:16.413008   25488 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-694782-m02" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:16.608025   25488 request.go:629] Waited for 194.963642ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-694782-m02
	I0415 23:58:16.608083   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-694782-m02
	I0415 23:58:16.608089   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:16.608115   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:16.608135   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:16.612560   25488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 23:58:16.808660   25488 request.go:629] Waited for 195.358773ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:58:16.808727   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:58:16.808735   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:16.808744   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:16.808755   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:16.812045   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:16.812690   25488 pod_ready.go:92] pod "kube-scheduler-ha-694782-m02" in "kube-system" namespace has status "Ready":"True"
	I0415 23:58:16.812708   25488 pod_ready.go:81] duration metric: took 399.693364ms for pod "kube-scheduler-ha-694782-m02" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:16.812717   25488 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-694782-m03" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:17.008353   25488 request.go:629] Waited for 195.585806ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-694782-m03
	I0415 23:58:17.008430   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-694782-m03
	I0415 23:58:17.008444   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:17.008451   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:17.008458   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:17.011870   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:17.209014   25488 request.go:629] Waited for 196.370317ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:17.209072   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:17.209079   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:17.209088   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:17.209094   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:17.212479   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:17.213171   25488 pod_ready.go:92] pod "kube-scheduler-ha-694782-m03" in "kube-system" namespace has status "Ready":"True"
	I0415 23:58:17.213195   25488 pod_ready.go:81] duration metric: took 400.470661ms for pod "kube-scheduler-ha-694782-m03" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:17.213208   25488 pod_ready.go:38] duration metric: took 12.398931095s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0415 23:58:17.213224   25488 api_server.go:52] waiting for apiserver process to appear ...
	I0415 23:58:17.213273   25488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0415 23:58:17.228851   25488 api_server.go:72] duration metric: took 17.69600783s to wait for apiserver process to appear ...
	I0415 23:58:17.228872   25488 api_server.go:88] waiting for apiserver healthz status ...
	I0415 23:58:17.228888   25488 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I0415 23:58:17.234999   25488 api_server.go:279] https://192.168.39.41:8443/healthz returned 200:
	ok
	I0415 23:58:17.235050   25488 round_trippers.go:463] GET https://192.168.39.41:8443/version
	I0415 23:58:17.235054   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:17.235061   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:17.235069   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:17.236121   25488 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0415 23:58:17.236273   25488 api_server.go:141] control plane version: v1.29.3
	I0415 23:58:17.236289   25488 api_server.go:131] duration metric: took 7.411501ms to wait for apiserver health ...
	I0415 23:58:17.236296   25488 system_pods.go:43] waiting for kube-system pods to appear ...
	I0415 23:58:17.408739   25488 request.go:629] Waited for 172.34899ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods
	I0415 23:58:17.408802   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods
	I0415 23:58:17.408810   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:17.408821   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:17.408832   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:17.416750   25488 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0415 23:58:17.424979   25488 system_pods.go:59] 24 kube-system pods found
	I0415 23:58:17.425008   25488 system_pods.go:61] "coredns-76f75df574-4sgv4" [3c1f65c0-37b2-4c88-879b-68297e989d44] Running
	I0415 23:58:17.425014   25488 system_pods.go:61] "coredns-76f75df574-zdc8q" [6a7e1a29-8c75-4d1f-978b-471ac0adb888] Running
	I0415 23:58:17.425019   25488 system_pods.go:61] "etcd-ha-694782" [ca5444f7-8fe5-4165-a01b-9c9adba4ede0] Running
	I0415 23:58:17.425024   25488 system_pods.go:61] "etcd-ha-694782-m02" [821ace46-8aac-46ae-9e3f-7bc144bb46a9] Running
	I0415 23:58:17.425030   25488 system_pods.go:61] "etcd-ha-694782-m03" [ca51c45c-4bbf-48d8-91bd-f95a2c7ef894] Running
	I0415 23:58:17.425035   25488 system_pods.go:61] "kindnet-99cs7" [5b3bc7e7-fd85-4dc7-ba53-c74fe0d213e3] Running
	I0415 23:58:17.425041   25488 system_pods.go:61] "kindnet-hln6n" [da484432-677e-49d3-b01a-95b6392cceb9] Running
	I0415 23:58:17.425046   25488 system_pods.go:61] "kindnet-qvp8b" [04002e18-2673-4067-a10e-64f40e3c60c8] Running
	I0415 23:58:17.425054   25488 system_pods.go:61] "kube-apiserver-ha-694782" [42680d27-9926-4b99-ae33-61a37afe0207] Running
	I0415 23:58:17.425060   25488 system_pods.go:61] "kube-apiserver-ha-694782-m02" [5db36efa-244b-47e0-ba6f-93826468c168] Running
	I0415 23:58:17.425065   25488 system_pods.go:61] "kube-apiserver-ha-694782-m03" [1b573124-a8cd-4227-abfc-9f299843ec67] Running
	I0415 23:58:17.425072   25488 system_pods.go:61] "kube-controller-manager-ha-694782" [1832df1f-ac45-427c-93fc-04630558d7d1] Running
	I0415 23:58:17.425077   25488 system_pods.go:61] "kube-controller-manager-ha-694782-m02" [923c744c-e27c-468d-a14f-2a1de579df73] Running
	I0415 23:58:17.425083   25488 system_pods.go:61] "kube-controller-manager-ha-694782-m03" [b6b37886-5ac0-4e36-aef1-5df06f761cca] Running
	I0415 23:58:17.425092   25488 system_pods.go:61] "kube-proxy-45tb9" [c9f03669-c803-4ef2-9649-653cbd5ed50e] Running
	I0415 23:58:17.425098   25488 system_pods.go:61] "kube-proxy-d46v5" [c92235e6-1639-45c0-a92b-bf0cc32bea22] Running
	I0415 23:58:17.425105   25488 system_pods.go:61] "kube-proxy-vbfhn" [131197dd-aa5b-48c7-a0e8-d1772432b28c] Running
	I0415 23:58:17.425111   25488 system_pods.go:61] "kube-scheduler-ha-694782" [8e2ff44e-34ef-4cb6-9734-62004de985b8] Running
	I0415 23:58:17.425119   25488 system_pods.go:61] "kube-scheduler-ha-694782-m02" [e2452893-9792-41e9-9d9e-e2f66bc07303] Running
	I0415 23:58:17.425125   25488 system_pods.go:61] "kube-scheduler-ha-694782-m03" [9fb6255b-36f4-4f5f-8f20-3e7389ddbb55] Running
	I0415 23:58:17.425132   25488 system_pods.go:61] "kube-vip-ha-694782" [a8ffb1b9-f55e-4efe-b9a1-7e58a341a2f0] Running
	I0415 23:58:17.425138   25488 system_pods.go:61] "kube-vip-ha-694782-m02" [036ef70f-0af1-42a5-b0bb-5622785ff031] Running
	I0415 23:58:17.425143   25488 system_pods.go:61] "kube-vip-ha-694782-m03" [fc934534-c2d6-4454-93e1-8d8e2b791c72] Running
	I0415 23:58:17.425151   25488 system_pods.go:61] "storage-provisioner" [bea9c166-5f83-473f-8f01-335ea1436dad] Running
	I0415 23:58:17.425171   25488 system_pods.go:74] duration metric: took 188.868987ms to wait for pod list to return data ...
	I0415 23:58:17.425183   25488 default_sa.go:34] waiting for default service account to be created ...
	I0415 23:58:17.608582   25488 request.go:629] Waited for 183.32347ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/namespaces/default/serviceaccounts
	I0415 23:58:17.608653   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/default/serviceaccounts
	I0415 23:58:17.608661   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:17.608671   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:17.608677   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:17.612202   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:17.612534   25488 default_sa.go:45] found service account: "default"
	I0415 23:58:17.612555   25488 default_sa.go:55] duration metric: took 187.361301ms for default service account to be created ...
	I0415 23:58:17.612564   25488 system_pods.go:116] waiting for k8s-apps to be running ...
	I0415 23:58:17.808959   25488 request.go:629] Waited for 196.315133ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods
	I0415 23:58:17.809036   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods
	I0415 23:58:17.809052   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:17.809062   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:17.809069   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:17.816661   25488 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0415 23:58:17.824338   25488 system_pods.go:86] 24 kube-system pods found
	I0415 23:58:17.824363   25488 system_pods.go:89] "coredns-76f75df574-4sgv4" [3c1f65c0-37b2-4c88-879b-68297e989d44] Running
	I0415 23:58:17.824370   25488 system_pods.go:89] "coredns-76f75df574-zdc8q" [6a7e1a29-8c75-4d1f-978b-471ac0adb888] Running
	I0415 23:58:17.824376   25488 system_pods.go:89] "etcd-ha-694782" [ca5444f7-8fe5-4165-a01b-9c9adba4ede0] Running
	I0415 23:58:17.824383   25488 system_pods.go:89] "etcd-ha-694782-m02" [821ace46-8aac-46ae-9e3f-7bc144bb46a9] Running
	I0415 23:58:17.824394   25488 system_pods.go:89] "etcd-ha-694782-m03" [ca51c45c-4bbf-48d8-91bd-f95a2c7ef894] Running
	I0415 23:58:17.824405   25488 system_pods.go:89] "kindnet-99cs7" [5b3bc7e7-fd85-4dc7-ba53-c74fe0d213e3] Running
	I0415 23:58:17.824412   25488 system_pods.go:89] "kindnet-hln6n" [da484432-677e-49d3-b01a-95b6392cceb9] Running
	I0415 23:58:17.824422   25488 system_pods.go:89] "kindnet-qvp8b" [04002e18-2673-4067-a10e-64f40e3c60c8] Running
	I0415 23:58:17.824431   25488 system_pods.go:89] "kube-apiserver-ha-694782" [42680d27-9926-4b99-ae33-61a37afe0207] Running
	I0415 23:58:17.824441   25488 system_pods.go:89] "kube-apiserver-ha-694782-m02" [5db36efa-244b-47e0-ba6f-93826468c168] Running
	I0415 23:58:17.824449   25488 system_pods.go:89] "kube-apiserver-ha-694782-m03" [1b573124-a8cd-4227-abfc-9f299843ec67] Running
	I0415 23:58:17.824459   25488 system_pods.go:89] "kube-controller-manager-ha-694782" [1832df1f-ac45-427c-93fc-04630558d7d1] Running
	I0415 23:58:17.824467   25488 system_pods.go:89] "kube-controller-manager-ha-694782-m02" [923c744c-e27c-468d-a14f-2a1de579df73] Running
	I0415 23:58:17.824475   25488 system_pods.go:89] "kube-controller-manager-ha-694782-m03" [b6b37886-5ac0-4e36-aef1-5df06f761cca] Running
	I0415 23:58:17.824484   25488 system_pods.go:89] "kube-proxy-45tb9" [c9f03669-c803-4ef2-9649-653cbd5ed50e] Running
	I0415 23:58:17.824494   25488 system_pods.go:89] "kube-proxy-d46v5" [c92235e6-1639-45c0-a92b-bf0cc32bea22] Running
	I0415 23:58:17.824500   25488 system_pods.go:89] "kube-proxy-vbfhn" [131197dd-aa5b-48c7-a0e8-d1772432b28c] Running
	I0415 23:58:17.824511   25488 system_pods.go:89] "kube-scheduler-ha-694782" [8e2ff44e-34ef-4cb6-9734-62004de985b8] Running
	I0415 23:58:17.824521   25488 system_pods.go:89] "kube-scheduler-ha-694782-m02" [e2452893-9792-41e9-9d9e-e2f66bc07303] Running
	I0415 23:58:17.824529   25488 system_pods.go:89] "kube-scheduler-ha-694782-m03" [9fb6255b-36f4-4f5f-8f20-3e7389ddbb55] Running
	I0415 23:58:17.824538   25488 system_pods.go:89] "kube-vip-ha-694782" [a8ffb1b9-f55e-4efe-b9a1-7e58a341a2f0] Running
	I0415 23:58:17.824545   25488 system_pods.go:89] "kube-vip-ha-694782-m02" [036ef70f-0af1-42a5-b0bb-5622785ff031] Running
	I0415 23:58:17.824553   25488 system_pods.go:89] "kube-vip-ha-694782-m03" [fc934534-c2d6-4454-93e1-8d8e2b791c72] Running
	I0415 23:58:17.824560   25488 system_pods.go:89] "storage-provisioner" [bea9c166-5f83-473f-8f01-335ea1436dad] Running
	I0415 23:58:17.824570   25488 system_pods.go:126] duration metric: took 211.994917ms to wait for k8s-apps to be running ...
	I0415 23:58:17.824583   25488 system_svc.go:44] waiting for kubelet service to be running ....
	I0415 23:58:17.824637   25488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 23:58:17.842104   25488 system_svc.go:56] duration metric: took 17.514994ms WaitForService to wait for kubelet
	I0415 23:58:17.842131   25488 kubeadm.go:576] duration metric: took 18.309289878s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 23:58:17.842152   25488 node_conditions.go:102] verifying NodePressure condition ...
	I0415 23:58:18.008527   25488 request.go:629] Waited for 166.310678ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/nodes
	I0415 23:58:18.008611   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes
	I0415 23:58:18.008618   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:18.008629   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:18.008642   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:18.012474   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:18.013480   25488 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0415 23:58:18.013502   25488 node_conditions.go:123] node cpu capacity is 2
	I0415 23:58:18.013514   25488 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0415 23:58:18.013518   25488 node_conditions.go:123] node cpu capacity is 2
	I0415 23:58:18.013522   25488 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0415 23:58:18.013525   25488 node_conditions.go:123] node cpu capacity is 2
	I0415 23:58:18.013529   25488 node_conditions.go:105] duration metric: took 171.372046ms to run NodePressure ...
	I0415 23:58:18.013541   25488 start.go:240] waiting for startup goroutines ...
	I0415 23:58:18.013564   25488 start.go:254] writing updated cluster config ...
	I0415 23:58:18.013821   25488 ssh_runner.go:195] Run: rm -f paused
	I0415 23:58:18.064114   25488 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0415 23:58:18.066256   25488 out.go:177] * Done! kubectl is now configured to use "ha-694782" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 16 00:01:47 ha-694782 crio[683]: time="2024-04-16 00:01:47.213751807Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713225707213729369,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c0bebd99-295e-499e-8f9f-aa227225b636 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 00:01:47 ha-694782 crio[683]: time="2024-04-16 00:01:47.214479143Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0fd1c3d7-1878-4cc3-9b68-9738e91c0c91 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:01:47 ha-694782 crio[683]: time="2024-04-16 00:01:47.214550271Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0fd1c3d7-1878-4cc3-9b68-9738e91c0c91 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:01:47 ha-694782 crio[683]: time="2024-04-16 00:01:47.214757343Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10abaa8fc3a416f4f6e6af525fcc65e0613ea769d731660a81e4e6a425fa4d6c,PodSandboxId:df7bc8cc3af912521d7dab8c802c0b04f7447ccb3d192040071875ff6a6ed89d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713225502009847781,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-vsvrq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d510538f-3535-428b-8933-e3d6de6777eb,},Annotations:map[string]string{io.kubernetes.container.hash: 83ddc528,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a62edf63e9633afa138049c4146dcf4b2f5135b1fc485fdc8071c8ee36b07a2d,PodSandboxId:773aba8a13222bacf0c0e79c78ec31764b5af16b9bc416140f303b36465cce2b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713225349858970939,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-zdc8q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a7e1a29-8c75-4d1f-978b-471ac0adb888,},Annotations:map[string]string{io.kubernetes.container.hash: e9e68e98,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3a501d70f72c9551b55ad858eaec6232180f6589a34825144a580391cdf53a2,PodSandboxId:cc571f90808ddcdef413b709640e27f67d9d861628a9d232886db9a496a57712,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713225349824904827,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-4sgv4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 3c1f65c0-37b2-4c88-879b-68297e989d44,},Annotations:map[string]string{io.kubernetes.container.hash: 2558243c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c22f9f76ea741300482eff1b6fe9db3f26a6e24069fb874c4ddd33c655294e62,PodSandboxId:d0206b8339037f202916be2337347086cc6265ba7391f3c217e691a994687c4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNI
NG,CreatedAt:1713225348406059422,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bea9c166-5f83-473f-8f01-335ea1436dad,},Annotations:map[string]string{io.kubernetes.container.hash: 26b87359,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33e00269a54857a2b49811e69012788039b429be3725a79bdb0a6e999aff448e,PodSandboxId:cf834489f460fbbaf59a25b280f8f70c16044f0f394b559e98c90fccc35d4837,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713225
346494199084,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-99cs7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b3bc7e7-fd85-4dc7-ba53-c74fe0d213e3,},Annotations:map[string]string{io.kubernetes.container.hash: e6fad754,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b55cb00c20162f1cfd9e72b8001f61983630aeb30b827f36d39067dae5d359d7,PodSandboxId:f34915e87e4008b765d7b34d6619b29c22eddc157e2e96893518ff9709538560,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713225346210700164,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d46v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c92235e6-1639-45c0-a92b-bf0cc32bea22,},Annotations:map[string]string{io.kubernetes.container.hash: f515a84d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f8c32adffdfe920d33a3d2aadb4e5d70c83321d5e0ed04b5e651b3338f8868c,PodSandboxId:d03541f025672fb33a16b0c006378393944751455fb18a91620e4822b1cf32da,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713225329553755138,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36fb76c3bc27f5d0b4f45ad31d74d371,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d17ec84664efd04bf01be034fec6b0ffd8f3e561bc06951f63cd95553952cf5,PodSandboxId:41e04a0d8a0ba492c448f0c8d919cb86eb887cc0a8198d99815e7f7eed50b944,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713225326796796969,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kube
rnetes.pod.name: etcd-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d68ab2950732b234de6161a8265b14cc,},Annotations:map[string]string{io.kubernetes.container.hash: 94537991,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:553d7f07f43e6f068bf41c8f0562f161939b4c2f6b1241c11c0db16309a6cbdf,PodSandboxId:cc8f87bd6e0dc433462a51cd028d6d774aa14a6c762f3f6a79999daea3870547,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713225326704495869,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler
-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d60a0238d152f42b26bd8630ed822b52,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d4ea2215ec6217956d87feb4c68ad8ace3136456a7bc720dcc7c721b87f66f4,PodSandboxId:886d00021f1d02da690c8d485521dfcbcd8e54b07e8b49c670f226a2a48b58ff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713225326714963944,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-694782,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: a733f3b6fc63c6f5e84f944f7d76e1a4,},Annotations:map[string]string{io.kubernetes.container.hash: 6258141c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a682dce5ef12dd6c80bdebd8ef67c034ebd1c88d5e144fc177805ad5eb35efe,PodSandboxId:21503f860be6fcca82242fc07c3e7179eba1f673a404e7d4c668628e99247da5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713225326702363925,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-694782,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b443ba5c534abe08b64f6dcd05be16a,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0fd1c3d7-1878-4cc3-9b68-9738e91c0c91 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:01:47 ha-694782 crio[683]: time="2024-04-16 00:01:47.255459830Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6d138a5f-d270-49d6-a32d-fb4d4ea07edc name=/runtime.v1.RuntimeService/Version
	Apr 16 00:01:47 ha-694782 crio[683]: time="2024-04-16 00:01:47.255556798Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6d138a5f-d270-49d6-a32d-fb4d4ea07edc name=/runtime.v1.RuntimeService/Version
	Apr 16 00:01:47 ha-694782 crio[683]: time="2024-04-16 00:01:47.257491350Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=218f3594-9b17-4df9-8ea2-6705d0946a83 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 00:01:47 ha-694782 crio[683]: time="2024-04-16 00:01:47.258268365Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713225707258236873,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=218f3594-9b17-4df9-8ea2-6705d0946a83 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 00:01:47 ha-694782 crio[683]: time="2024-04-16 00:01:47.258824279Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=55a523ab-7341-4e80-bfd1-7692bda8ef9d name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:01:47 ha-694782 crio[683]: time="2024-04-16 00:01:47.258897940Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=55a523ab-7341-4e80-bfd1-7692bda8ef9d name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:01:47 ha-694782 crio[683]: time="2024-04-16 00:01:47.259270213Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10abaa8fc3a416f4f6e6af525fcc65e0613ea769d731660a81e4e6a425fa4d6c,PodSandboxId:df7bc8cc3af912521d7dab8c802c0b04f7447ccb3d192040071875ff6a6ed89d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713225502009847781,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-vsvrq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d510538f-3535-428b-8933-e3d6de6777eb,},Annotations:map[string]string{io.kubernetes.container.hash: 83ddc528,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a62edf63e9633afa138049c4146dcf4b2f5135b1fc485fdc8071c8ee36b07a2d,PodSandboxId:773aba8a13222bacf0c0e79c78ec31764b5af16b9bc416140f303b36465cce2b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713225349858970939,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-zdc8q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a7e1a29-8c75-4d1f-978b-471ac0adb888,},Annotations:map[string]string{io.kubernetes.container.hash: e9e68e98,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3a501d70f72c9551b55ad858eaec6232180f6589a34825144a580391cdf53a2,PodSandboxId:cc571f90808ddcdef413b709640e27f67d9d861628a9d232886db9a496a57712,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713225349824904827,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-4sgv4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 3c1f65c0-37b2-4c88-879b-68297e989d44,},Annotations:map[string]string{io.kubernetes.container.hash: 2558243c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c22f9f76ea741300482eff1b6fe9db3f26a6e24069fb874c4ddd33c655294e62,PodSandboxId:d0206b8339037f202916be2337347086cc6265ba7391f3c217e691a994687c4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNI
NG,CreatedAt:1713225348406059422,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bea9c166-5f83-473f-8f01-335ea1436dad,},Annotations:map[string]string{io.kubernetes.container.hash: 26b87359,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33e00269a54857a2b49811e69012788039b429be3725a79bdb0a6e999aff448e,PodSandboxId:cf834489f460fbbaf59a25b280f8f70c16044f0f394b559e98c90fccc35d4837,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713225
346494199084,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-99cs7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b3bc7e7-fd85-4dc7-ba53-c74fe0d213e3,},Annotations:map[string]string{io.kubernetes.container.hash: e6fad754,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b55cb00c20162f1cfd9e72b8001f61983630aeb30b827f36d39067dae5d359d7,PodSandboxId:f34915e87e4008b765d7b34d6619b29c22eddc157e2e96893518ff9709538560,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713225346210700164,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d46v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c92235e6-1639-45c0-a92b-bf0cc32bea22,},Annotations:map[string]string{io.kubernetes.container.hash: f515a84d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f8c32adffdfe920d33a3d2aadb4e5d70c83321d5e0ed04b5e651b3338f8868c,PodSandboxId:d03541f025672fb33a16b0c006378393944751455fb18a91620e4822b1cf32da,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713225329553755138,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36fb76c3bc27f5d0b4f45ad31d74d371,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d17ec84664efd04bf01be034fec6b0ffd8f3e561bc06951f63cd95553952cf5,PodSandboxId:41e04a0d8a0ba492c448f0c8d919cb86eb887cc0a8198d99815e7f7eed50b944,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713225326796796969,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kube
rnetes.pod.name: etcd-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d68ab2950732b234de6161a8265b14cc,},Annotations:map[string]string{io.kubernetes.container.hash: 94537991,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:553d7f07f43e6f068bf41c8f0562f161939b4c2f6b1241c11c0db16309a6cbdf,PodSandboxId:cc8f87bd6e0dc433462a51cd028d6d774aa14a6c762f3f6a79999daea3870547,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713225326704495869,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler
-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d60a0238d152f42b26bd8630ed822b52,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d4ea2215ec6217956d87feb4c68ad8ace3136456a7bc720dcc7c721b87f66f4,PodSandboxId:886d00021f1d02da690c8d485521dfcbcd8e54b07e8b49c670f226a2a48b58ff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713225326714963944,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-694782,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: a733f3b6fc63c6f5e84f944f7d76e1a4,},Annotations:map[string]string{io.kubernetes.container.hash: 6258141c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a682dce5ef12dd6c80bdebd8ef67c034ebd1c88d5e144fc177805ad5eb35efe,PodSandboxId:21503f860be6fcca82242fc07c3e7179eba1f673a404e7d4c668628e99247da5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713225326702363925,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-694782,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b443ba5c534abe08b64f6dcd05be16a,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=55a523ab-7341-4e80-bfd1-7692bda8ef9d name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:01:47 ha-694782 crio[683]: time="2024-04-16 00:01:47.302364616Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2e82dd1f-c4cd-45cc-98cf-2ffa374271ae name=/runtime.v1.RuntimeService/Version
	Apr 16 00:01:47 ha-694782 crio[683]: time="2024-04-16 00:01:47.302443039Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2e82dd1f-c4cd-45cc-98cf-2ffa374271ae name=/runtime.v1.RuntimeService/Version
	Apr 16 00:01:47 ha-694782 crio[683]: time="2024-04-16 00:01:47.303799028Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a75a7ac1-25d2-4091-93a3-41636d55db83 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 00:01:47 ha-694782 crio[683]: time="2024-04-16 00:01:47.304712689Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713225707304651962,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a75a7ac1-25d2-4091-93a3-41636d55db83 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 00:01:47 ha-694782 crio[683]: time="2024-04-16 00:01:47.305533127Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=da775db6-38bb-41df-b579-502301ea35b9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:01:47 ha-694782 crio[683]: time="2024-04-16 00:01:47.305587836Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=da775db6-38bb-41df-b579-502301ea35b9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:01:47 ha-694782 crio[683]: time="2024-04-16 00:01:47.306233265Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10abaa8fc3a416f4f6e6af525fcc65e0613ea769d731660a81e4e6a425fa4d6c,PodSandboxId:df7bc8cc3af912521d7dab8c802c0b04f7447ccb3d192040071875ff6a6ed89d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713225502009847781,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-vsvrq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d510538f-3535-428b-8933-e3d6de6777eb,},Annotations:map[string]string{io.kubernetes.container.hash: 83ddc528,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a62edf63e9633afa138049c4146dcf4b2f5135b1fc485fdc8071c8ee36b07a2d,PodSandboxId:773aba8a13222bacf0c0e79c78ec31764b5af16b9bc416140f303b36465cce2b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713225349858970939,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-zdc8q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a7e1a29-8c75-4d1f-978b-471ac0adb888,},Annotations:map[string]string{io.kubernetes.container.hash: e9e68e98,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3a501d70f72c9551b55ad858eaec6232180f6589a34825144a580391cdf53a2,PodSandboxId:cc571f90808ddcdef413b709640e27f67d9d861628a9d232886db9a496a57712,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713225349824904827,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-4sgv4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 3c1f65c0-37b2-4c88-879b-68297e989d44,},Annotations:map[string]string{io.kubernetes.container.hash: 2558243c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c22f9f76ea741300482eff1b6fe9db3f26a6e24069fb874c4ddd33c655294e62,PodSandboxId:d0206b8339037f202916be2337347086cc6265ba7391f3c217e691a994687c4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNI
NG,CreatedAt:1713225348406059422,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bea9c166-5f83-473f-8f01-335ea1436dad,},Annotations:map[string]string{io.kubernetes.container.hash: 26b87359,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33e00269a54857a2b49811e69012788039b429be3725a79bdb0a6e999aff448e,PodSandboxId:cf834489f460fbbaf59a25b280f8f70c16044f0f394b559e98c90fccc35d4837,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713225
346494199084,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-99cs7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b3bc7e7-fd85-4dc7-ba53-c74fe0d213e3,},Annotations:map[string]string{io.kubernetes.container.hash: e6fad754,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b55cb00c20162f1cfd9e72b8001f61983630aeb30b827f36d39067dae5d359d7,PodSandboxId:f34915e87e4008b765d7b34d6619b29c22eddc157e2e96893518ff9709538560,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713225346210700164,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d46v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c92235e6-1639-45c0-a92b-bf0cc32bea22,},Annotations:map[string]string{io.kubernetes.container.hash: f515a84d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f8c32adffdfe920d33a3d2aadb4e5d70c83321d5e0ed04b5e651b3338f8868c,PodSandboxId:d03541f025672fb33a16b0c006378393944751455fb18a91620e4822b1cf32da,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713225329553755138,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36fb76c3bc27f5d0b4f45ad31d74d371,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d17ec84664efd04bf01be034fec6b0ffd8f3e561bc06951f63cd95553952cf5,PodSandboxId:41e04a0d8a0ba492c448f0c8d919cb86eb887cc0a8198d99815e7f7eed50b944,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713225326796796969,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kube
rnetes.pod.name: etcd-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d68ab2950732b234de6161a8265b14cc,},Annotations:map[string]string{io.kubernetes.container.hash: 94537991,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:553d7f07f43e6f068bf41c8f0562f161939b4c2f6b1241c11c0db16309a6cbdf,PodSandboxId:cc8f87bd6e0dc433462a51cd028d6d774aa14a6c762f3f6a79999daea3870547,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713225326704495869,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler
-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d60a0238d152f42b26bd8630ed822b52,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d4ea2215ec6217956d87feb4c68ad8ace3136456a7bc720dcc7c721b87f66f4,PodSandboxId:886d00021f1d02da690c8d485521dfcbcd8e54b07e8b49c670f226a2a48b58ff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713225326714963944,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-694782,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: a733f3b6fc63c6f5e84f944f7d76e1a4,},Annotations:map[string]string{io.kubernetes.container.hash: 6258141c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a682dce5ef12dd6c80bdebd8ef67c034ebd1c88d5e144fc177805ad5eb35efe,PodSandboxId:21503f860be6fcca82242fc07c3e7179eba1f673a404e7d4c668628e99247da5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713225326702363925,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-694782,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b443ba5c534abe08b64f6dcd05be16a,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=da775db6-38bb-41df-b579-502301ea35b9 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:01:47 ha-694782 crio[683]: time="2024-04-16 00:01:47.347623040Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=062093e3-1af8-46bf-a1ef-7ba35b6705ab name=/runtime.v1.RuntimeService/Version
	Apr 16 00:01:47 ha-694782 crio[683]: time="2024-04-16 00:01:47.347700145Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=062093e3-1af8-46bf-a1ef-7ba35b6705ab name=/runtime.v1.RuntimeService/Version
	Apr 16 00:01:47 ha-694782 crio[683]: time="2024-04-16 00:01:47.348954186Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f8804f94-f876-46c0-baab-2e489e956f79 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 00:01:47 ha-694782 crio[683]: time="2024-04-16 00:01:47.349460698Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713225707349435994,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f8804f94-f876-46c0-baab-2e489e956f79 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 00:01:47 ha-694782 crio[683]: time="2024-04-16 00:01:47.350098421Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d27e318f-e4c9-434b-b07c-8900ba068634 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:01:47 ha-694782 crio[683]: time="2024-04-16 00:01:47.350153501Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d27e318f-e4c9-434b-b07c-8900ba068634 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:01:47 ha-694782 crio[683]: time="2024-04-16 00:01:47.350385878Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10abaa8fc3a416f4f6e6af525fcc65e0613ea769d731660a81e4e6a425fa4d6c,PodSandboxId:df7bc8cc3af912521d7dab8c802c0b04f7447ccb3d192040071875ff6a6ed89d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713225502009847781,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-vsvrq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d510538f-3535-428b-8933-e3d6de6777eb,},Annotations:map[string]string{io.kubernetes.container.hash: 83ddc528,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a62edf63e9633afa138049c4146dcf4b2f5135b1fc485fdc8071c8ee36b07a2d,PodSandboxId:773aba8a13222bacf0c0e79c78ec31764b5af16b9bc416140f303b36465cce2b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713225349858970939,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-zdc8q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a7e1a29-8c75-4d1f-978b-471ac0adb888,},Annotations:map[string]string{io.kubernetes.container.hash: e9e68e98,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3a501d70f72c9551b55ad858eaec6232180f6589a34825144a580391cdf53a2,PodSandboxId:cc571f90808ddcdef413b709640e27f67d9d861628a9d232886db9a496a57712,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713225349824904827,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-4sgv4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 3c1f65c0-37b2-4c88-879b-68297e989d44,},Annotations:map[string]string{io.kubernetes.container.hash: 2558243c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c22f9f76ea741300482eff1b6fe9db3f26a6e24069fb874c4ddd33c655294e62,PodSandboxId:d0206b8339037f202916be2337347086cc6265ba7391f3c217e691a994687c4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNI
NG,CreatedAt:1713225348406059422,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bea9c166-5f83-473f-8f01-335ea1436dad,},Annotations:map[string]string{io.kubernetes.container.hash: 26b87359,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33e00269a54857a2b49811e69012788039b429be3725a79bdb0a6e999aff448e,PodSandboxId:cf834489f460fbbaf59a25b280f8f70c16044f0f394b559e98c90fccc35d4837,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713225
346494199084,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-99cs7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b3bc7e7-fd85-4dc7-ba53-c74fe0d213e3,},Annotations:map[string]string{io.kubernetes.container.hash: e6fad754,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b55cb00c20162f1cfd9e72b8001f61983630aeb30b827f36d39067dae5d359d7,PodSandboxId:f34915e87e4008b765d7b34d6619b29c22eddc157e2e96893518ff9709538560,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713225346210700164,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d46v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c92235e6-1639-45c0-a92b-bf0cc32bea22,},Annotations:map[string]string{io.kubernetes.container.hash: f515a84d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f8c32adffdfe920d33a3d2aadb4e5d70c83321d5e0ed04b5e651b3338f8868c,PodSandboxId:d03541f025672fb33a16b0c006378393944751455fb18a91620e4822b1cf32da,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713225329553755138,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36fb76c3bc27f5d0b4f45ad31d74d371,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d17ec84664efd04bf01be034fec6b0ffd8f3e561bc06951f63cd95553952cf5,PodSandboxId:41e04a0d8a0ba492c448f0c8d919cb86eb887cc0a8198d99815e7f7eed50b944,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713225326796796969,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kube
rnetes.pod.name: etcd-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d68ab2950732b234de6161a8265b14cc,},Annotations:map[string]string{io.kubernetes.container.hash: 94537991,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:553d7f07f43e6f068bf41c8f0562f161939b4c2f6b1241c11c0db16309a6cbdf,PodSandboxId:cc8f87bd6e0dc433462a51cd028d6d774aa14a6c762f3f6a79999daea3870547,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713225326704495869,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler
-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d60a0238d152f42b26bd8630ed822b52,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d4ea2215ec6217956d87feb4c68ad8ace3136456a7bc720dcc7c721b87f66f4,PodSandboxId:886d00021f1d02da690c8d485521dfcbcd8e54b07e8b49c670f226a2a48b58ff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713225326714963944,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-694782,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: a733f3b6fc63c6f5e84f944f7d76e1a4,},Annotations:map[string]string{io.kubernetes.container.hash: 6258141c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a682dce5ef12dd6c80bdebd8ef67c034ebd1c88d5e144fc177805ad5eb35efe,PodSandboxId:21503f860be6fcca82242fc07c3e7179eba1f673a404e7d4c668628e99247da5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713225326702363925,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-694782,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b443ba5c534abe08b64f6dcd05be16a,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d27e318f-e4c9-434b-b07c-8900ba068634 name=/runtime.v1.RuntimeService/ListContainers
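Note: the repeated Version, ImageFsInfo and ListContainers entries above are level=debug records of routine CRI polling, not errors. As a minimal sketch (assuming the ha-694782 VM is still running and that CRI-O logs to the systemd journal, as it does in this build), the same debug stream can be tailed directly on the node:

    out/minikube-linux-amd64 -p ha-694782 ssh "sudo journalctl -u crio --no-pager | tail -n 50"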
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	10abaa8fc3a41       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   df7bc8cc3af91       busybox-7fdf7869d9-vsvrq
	a62edf63e9633       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   773aba8a13222       coredns-76f75df574-zdc8q
	b3a501d70f72c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   0                   cc571f90808dd       coredns-76f75df574-4sgv4
	c22f9f76ea741       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   d0206b8339037       storage-provisioner
	33e00269a5485       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      6 minutes ago       Running             kindnet-cni               0                   cf834489f460f       kindnet-99cs7
	b55cb00c20162       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      6 minutes ago       Running             kube-proxy                0                   f34915e87e400       kube-proxy-d46v5
	9f8c32adffdfe       ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a     6 minutes ago       Running             kube-vip                  0                   d03541f025672       kube-vip-ha-694782
	9d17ec84664ef       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      6 minutes ago       Running             etcd                      0                   41e04a0d8a0ba       etcd-ha-694782
	7d4ea2215ec62       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      6 minutes ago       Running             kube-apiserver            0                   886d00021f1d0       kube-apiserver-ha-694782
	553d7f07f43e6       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      6 minutes ago       Running             kube-scheduler            0                   cc8f87bd6e0dc       kube-scheduler-ha-694782
	8a682dce5ef12       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      6 minutes ago       Running             kube-controller-manager   0                   21503f860be6f       kube-controller-manager-ha-694782
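The container listing above mirrors what crictl reports on the node. A minimal sketch for re-querying it and pulling the log of a single container while the profile is still up (the IDs are the truncated values from the first column; crictl generally accepts a unique ID prefix):

    out/minikube-linux-amd64 -p ha-694782 ssh "sudo crictl ps -a"
    out/minikube-linux-amd64 -p ha-694782 ssh "sudo crictl logs 7d4ea2215ec62"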
	
	
	==> coredns [a62edf63e9633afa138049c4146dcf4b2f5135b1fc485fdc8071c8ee36b07a2d] <==
	[INFO] 10.244.0.4:43820 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000171346s
	[INFO] 10.244.0.4:53971 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000064209s
	[INFO] 10.244.0.4:58655 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000057875s
	[INFO] 10.244.1.2:57138 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000292175s
	[INFO] 10.244.1.2:42990 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.007959183s
	[INFO] 10.244.1.2:53242 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000142606s
	[INFO] 10.244.1.2:53591 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000169859s
	[INFO] 10.244.2.2:56926 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001802242s
	[INFO] 10.244.2.2:55053 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000174333s
	[INFO] 10.244.2.2:56210 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000166019s
	[INFO] 10.244.2.2:36533 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001257882s
	[INFO] 10.244.0.4:39112 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000127586s
	[INFO] 10.244.0.4:33597 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001242421s
	[INFO] 10.244.0.4:37595 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000130691s
	[INFO] 10.244.0.4:36939 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000030566s
	[INFO] 10.244.0.4:36468 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000043404s
	[INFO] 10.244.1.2:46854 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000237116s
	[INFO] 10.244.1.2:35618 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000139683s
	[INFO] 10.244.2.2:54137 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000211246s
	[INFO] 10.244.2.2:57833 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097841s
	[INFO] 10.244.0.4:45317 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099201s
	[INFO] 10.244.1.2:46870 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000160521s
	[INFO] 10.244.1.2:49971 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000118112s
	[INFO] 10.244.2.2:60977 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000163482s
	[INFO] 10.244.0.4:57367 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000078337s
	
	
	==> coredns [b3a501d70f72c9551b55ad858eaec6232180f6589a34825144a580391cdf53a2] <==
	[INFO] 10.244.1.2:51264 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003372306s
	[INFO] 10.244.1.2:40116 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000306198s
	[INFO] 10.244.1.2:43171 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000166993s
	[INFO] 10.244.2.2:55011 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111047s
	[INFO] 10.244.2.2:60878 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000096803s
	[INFO] 10.244.2.2:40329 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000153524s
	[INFO] 10.244.2.2:43908 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109424s
	[INFO] 10.244.0.4:40588 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117575s
	[INFO] 10.244.0.4:34558 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001805219s
	[INFO] 10.244.0.4:44168 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000194119s
	[INFO] 10.244.1.2:54750 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000108471s
	[INFO] 10.244.1.2:46261 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008003s
	[INFO] 10.244.2.2:53899 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130847s
	[INFO] 10.244.2.2:52030 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000082631s
	[INFO] 10.244.0.4:39295 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000069381s
	[INFO] 10.244.0.4:38441 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000054252s
	[INFO] 10.244.0.4:40273 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000054634s
	[INFO] 10.244.1.2:56481 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000181468s
	[INFO] 10.244.1.2:34800 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000244392s
	[INFO] 10.244.2.2:40684 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136775s
	[INFO] 10.244.2.2:50964 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000154855s
	[INFO] 10.244.2.2:46132 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000089888s
	[INFO] 10.244.0.4:34246 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124283s
	[INFO] 10.244.0.4:53924 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000125381s
	[INFO] 10.244.0.4:36636 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000079286s
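Both CoreDNS replicas above only log answered queries (NOERROR/NXDOMAIN), so the in-cluster resolvers were serving traffic at capture time. A sketch for fetching the same logs through the API server instead of the log bundle (assuming the kubeconfig context carries the profile name, as the other commands in this report do):

    kubectl --context ha-694782 -n kube-system logs -l k8s-app=kube-dns --tail=20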
	
	
	==> describe nodes <==
	Name:               ha-694782
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-694782
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e98a202b00bb48daab8bbee2f0c7bcf1556f8388
	                    minikube.k8s.io/name=ha-694782
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_15T23_55_34_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Apr 2024 23:55:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-694782
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 00:01:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Apr 2024 23:58:37 +0000   Mon, 15 Apr 2024 23:55:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Apr 2024 23:58:37 +0000   Mon, 15 Apr 2024 23:55:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Apr 2024 23:58:37 +0000   Mon, 15 Apr 2024 23:55:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Apr 2024 23:58:37 +0000   Mon, 15 Apr 2024 23:55:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.41
	  Hostname:    ha-694782
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e3887d262ea345b0b06d0cfe81d3c704
	  System UUID:                e3887d26-2ea3-45b0-b06d-0cfe81d3c704
	  Boot ID:                    db04bec2-a6d7-4f51-8173-a431f51db6a3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-vsvrq             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 coredns-76f75df574-4sgv4             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m2s
	  kube-system                 coredns-76f75df574-zdc8q             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m2s
	  kube-system                 etcd-ha-694782                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m14s
	  kube-system                 kindnet-99cs7                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m2s
	  kube-system                 kube-apiserver-ha-694782             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 kube-controller-manager-ha-694782    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 kube-proxy-d46v5                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 kube-scheduler-ha-694782             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 kube-vip-ha-694782                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m                     kube-proxy       
	  Normal  NodeHasSufficientPID     6m21s (x7 over 6m21s)  kubelet          Node ha-694782 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m21s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m21s (x8 over 6m21s)  kubelet          Node ha-694782 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m21s (x8 over 6m21s)  kubelet          Node ha-694782 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m14s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m14s                  kubelet          Node ha-694782 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m14s                  kubelet          Node ha-694782 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m14s                  kubelet          Node ha-694782 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m3s                   node-controller  Node ha-694782 event: Registered Node ha-694782 in Controller
	  Normal  NodeReady                6m                     kubelet          Node ha-694782 status is now: NodeReady
	  Normal  RegisteredNode           4m44s                  node-controller  Node ha-694782 event: Registered Node ha-694782 in Controller
	  Normal  RegisteredNode           3m36s                  node-controller  Node ha-694782 event: Registered Node ha-694782 in Controller
	
	
	Name:               ha-694782-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-694782-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e98a202b00bb48daab8bbee2f0c7bcf1556f8388
	                    minikube.k8s.io/name=ha-694782
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_15T23_56_49_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Apr 2024 23:56:45 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-694782-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Apr 2024 23:59:18 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 15 Apr 2024 23:58:48 +0000   Tue, 16 Apr 2024 00:00:00 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 15 Apr 2024 23:58:48 +0000   Tue, 16 Apr 2024 00:00:00 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 15 Apr 2024 23:58:48 +0000   Tue, 16 Apr 2024 00:00:00 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 15 Apr 2024 23:58:48 +0000   Tue, 16 Apr 2024 00:00:00 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.42
	  Hostname:    ha-694782-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f33e7ca96e8a461196cc015dc9cdb390
	  System UUID:                f33e7ca9-6e8a-4611-96cc-015dc9cdb390
	  Boot ID:                    5d204aeb-b0bb-47ab-8d6e-e6870264d97b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-bwtdm                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 etcd-ha-694782-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m1s
	  kube-system                 kindnet-qvp8b                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m2s
	  kube-system                 kube-apiserver-ha-694782-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 kube-controller-manager-ha-694782-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 kube-proxy-vbfhn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 kube-scheduler-ha-694782-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 kube-vip-ha-694782-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m59s                kube-proxy       
	  Normal  NodeHasSufficientMemory  5m2s (x8 over 5m2s)  kubelet          Node ha-694782-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m2s (x8 over 5m2s)  kubelet          Node ha-694782-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m2s (x7 over 5m2s)  kubelet          Node ha-694782-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m57s                node-controller  Node ha-694782-m02 event: Registered Node ha-694782-m02 in Controller
	  Normal  RegisteredNode           4m44s                node-controller  Node ha-694782-m02 event: Registered Node ha-694782-m02 in Controller
	  Normal  RegisteredNode           3m36s                node-controller  Node ha-694782-m02 event: Registered Node ha-694782-m02 in Controller
	  Normal  NodeNotReady             107s                 node-controller  Node ha-694782-m02 status is now: NodeNotReady
	
	
	Name:               ha-694782-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-694782-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e98a202b00bb48daab8bbee2f0c7bcf1556f8388
	                    minikube.k8s.io/name=ha-694782
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_15T23_57_59_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Apr 2024 23:57:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-694782-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 00:01:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Apr 2024 23:58:24 +0000   Mon, 15 Apr 2024 23:57:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Apr 2024 23:58:24 +0000   Mon, 15 Apr 2024 23:57:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Apr 2024 23:58:24 +0000   Mon, 15 Apr 2024 23:57:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Apr 2024 23:58:24 +0000   Mon, 15 Apr 2024 23:58:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.202
	  Hostname:    ha-694782-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f7c9ed2323d6414d9d68e5da836956f9
	  System UUID:                f7c9ed23-23d6-414d-9d68-e5da836956f9
	  Boot ID:                    f0a011f0-aa05-4a51-9cac-8a89ff51f5fc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-mxz6n                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 etcd-ha-694782-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m51s
	  kube-system                 kindnet-hln6n                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m54s
	  kube-system                 kube-apiserver-ha-694782-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m52s
	  kube-system                 kube-controller-manager-ha-694782-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m50s
	  kube-system                 kube-proxy-45tb9                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	  kube-system                 kube-scheduler-ha-694782-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m48s
	  kube-system                 kube-vip-ha-694782-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m50s                  kube-proxy       
	  Normal  RegisteredNode           3m54s                  node-controller  Node ha-694782-m03 event: Registered Node ha-694782-m03 in Controller
	  Normal  NodeHasSufficientMemory  3m54s (x8 over 3m54s)  kubelet          Node ha-694782-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m54s (x8 over 3m54s)  kubelet          Node ha-694782-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m54s (x7 over 3m54s)  kubelet          Node ha-694782-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m52s                  node-controller  Node ha-694782-m03 event: Registered Node ha-694782-m03 in Controller
	  Normal  RegisteredNode           3m36s                  node-controller  Node ha-694782-m03 event: Registered Node ha-694782-m03 in Controller
	
	
	Name:               ha-694782-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-694782-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e98a202b00bb48daab8bbee2f0c7bcf1556f8388
	                    minikube.k8s.io/name=ha-694782
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_15T23_58_56_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Apr 2024 23:58:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-694782-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 00:01:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Apr 2024 23:59:26 +0000   Mon, 15 Apr 2024 23:58:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Apr 2024 23:59:26 +0000   Mon, 15 Apr 2024 23:58:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Apr 2024 23:59:26 +0000   Mon, 15 Apr 2024 23:58:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Apr 2024 23:59:26 +0000   Mon, 15 Apr 2024 23:59:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.107
	  Hostname:    ha-694782-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 aceb25acc5a84fcca647b3b66273edbd
	  System UUID:                aceb25ac-c5a8-4fcc-a647-b3b66273edbd
	  Boot ID:                    43559327-63c3-4af4-bb7f-6d674d6e1c03
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-k6vbr       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m51s
	  kube-system                 kube-proxy-mgwnv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m46s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m52s (x2 over 2m52s)  kubelet          Node ha-694782-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m52s (x2 over 2m52s)  kubelet          Node ha-694782-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m52s (x2 over 2m52s)  kubelet          Node ha-694782-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m51s                  node-controller  Node ha-694782-m04 event: Registered Node ha-694782-m04 in Controller
	  Normal  NodeAllocatableEnforced  2m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m49s                  node-controller  Node ha-694782-m04 event: Registered Node ha-694782-m04 in Controller
	  Normal  RegisteredNode           2m47s                  node-controller  Node ha-694782-m04 event: Registered Node ha-694782-m04 in Controller
	  Normal  NodeReady                2m41s                  kubelet          Node ha-694782-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Apr15 23:54] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052003] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040460] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.533828] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Apr15 23:55] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.613458] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.744931] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.056422] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063543] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.160170] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.142019] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.294658] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +4.386960] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +0.057175] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.857368] systemd-fstab-generator[945]: Ignoring "noauto" option for root device
	[  +1.229656] kauditd_printk_skb: 57 callbacks suppressed
	[  +6.643467] systemd-fstab-generator[1362]: Ignoring "noauto" option for root device
	[  +0.095801] kauditd_printk_skb: 40 callbacks suppressed
	[ +12.797173] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.821758] kauditd_printk_skb: 72 callbacks suppressed
	
	
	==> etcd [9d17ec84664efd04bf01be034fec6b0ffd8f3e561bc06951f63cd95553952cf5] <==
	{"level":"warn","ts":"2024-04-16T00:01:47.638348Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"903e0dada8362847","from":"903e0dada8362847","remote-peer-id":"dc6497bb1dcd7fb3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T00:01:47.648337Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"903e0dada8362847","from":"903e0dada8362847","remote-peer-id":"dc6497bb1dcd7fb3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T00:01:47.653061Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"903e0dada8362847","from":"903e0dada8362847","remote-peer-id":"dc6497bb1dcd7fb3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T00:01:47.670041Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"903e0dada8362847","from":"903e0dada8362847","remote-peer-id":"dc6497bb1dcd7fb3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T00:01:47.680059Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"903e0dada8362847","from":"903e0dada8362847","remote-peer-id":"dc6497bb1dcd7fb3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T00:01:47.688658Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"903e0dada8362847","from":"903e0dada8362847","remote-peer-id":"dc6497bb1dcd7fb3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T00:01:47.692502Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"903e0dada8362847","from":"903e0dada8362847","remote-peer-id":"dc6497bb1dcd7fb3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T00:01:47.696545Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"903e0dada8362847","from":"903e0dada8362847","remote-peer-id":"dc6497bb1dcd7fb3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T00:01:47.706956Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"903e0dada8362847","from":"903e0dada8362847","remote-peer-id":"dc6497bb1dcd7fb3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T00:01:47.71358Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"903e0dada8362847","from":"903e0dada8362847","remote-peer-id":"dc6497bb1dcd7fb3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T00:01:47.721188Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"903e0dada8362847","from":"903e0dada8362847","remote-peer-id":"dc6497bb1dcd7fb3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T00:01:47.725223Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"903e0dada8362847","from":"903e0dada8362847","remote-peer-id":"dc6497bb1dcd7fb3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T00:01:47.729952Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"903e0dada8362847","from":"903e0dada8362847","remote-peer-id":"dc6497bb1dcd7fb3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T00:01:47.732174Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"903e0dada8362847","from":"903e0dada8362847","remote-peer-id":"dc6497bb1dcd7fb3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T00:01:47.742175Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"903e0dada8362847","from":"903e0dada8362847","remote-peer-id":"dc6497bb1dcd7fb3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T00:01:47.74915Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"903e0dada8362847","from":"903e0dada8362847","remote-peer-id":"dc6497bb1dcd7fb3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T00:01:47.757163Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"903e0dada8362847","from":"903e0dada8362847","remote-peer-id":"dc6497bb1dcd7fb3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T00:01:47.760565Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"903e0dada8362847","from":"903e0dada8362847","remote-peer-id":"dc6497bb1dcd7fb3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T00:01:47.765726Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"903e0dada8362847","from":"903e0dada8362847","remote-peer-id":"dc6497bb1dcd7fb3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T00:01:47.771865Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"903e0dada8362847","from":"903e0dada8362847","remote-peer-id":"dc6497bb1dcd7fb3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T00:01:47.777892Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"903e0dada8362847","from":"903e0dada8362847","remote-peer-id":"dc6497bb1dcd7fb3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T00:01:47.787199Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"903e0dada8362847","from":"903e0dada8362847","remote-peer-id":"dc6497bb1dcd7fb3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T00:01:47.801262Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"903e0dada8362847","from":"903e0dada8362847","remote-peer-id":"dc6497bb1dcd7fb3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T00:01:47.829867Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"903e0dada8362847","from":"903e0dada8362847","remote-peer-id":"dc6497bb1dcd7fb3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T00:01:47.863789Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"903e0dada8362847","from":"903e0dada8362847","remote-peer-id":"dc6497bb1dcd7fb3","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 00:01:47 up 6 min,  0 users,  load average: 0.15, 0.27, 0.16
	Linux ha-694782 5.10.207 #1 SMP Mon Apr 15 15:01:07 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [33e00269a54857a2b49811e69012788039b429be3725a79bdb0a6e999aff448e] <==
	I0416 00:01:17.840224       1 main.go:250] Node ha-694782-m04 has CIDR [10.244.3.0/24] 
	I0416 00:01:27.847755       1 main.go:223] Handling node with IPs: map[192.168.39.41:{}]
	I0416 00:01:27.847840       1 main.go:227] handling current node
	I0416 00:01:27.847875       1 main.go:223] Handling node with IPs: map[192.168.39.42:{}]
	I0416 00:01:27.847903       1 main.go:250] Node ha-694782-m02 has CIDR [10.244.1.0/24] 
	I0416 00:01:27.848126       1 main.go:223] Handling node with IPs: map[192.168.39.202:{}]
	I0416 00:01:27.848169       1 main.go:250] Node ha-694782-m03 has CIDR [10.244.2.0/24] 
	I0416 00:01:27.848254       1 main.go:223] Handling node with IPs: map[192.168.39.107:{}]
	I0416 00:01:27.848274       1 main.go:250] Node ha-694782-m04 has CIDR [10.244.3.0/24] 
	I0416 00:01:37.861220       1 main.go:223] Handling node with IPs: map[192.168.39.41:{}]
	I0416 00:01:37.861418       1 main.go:227] handling current node
	I0416 00:01:37.861472       1 main.go:223] Handling node with IPs: map[192.168.39.42:{}]
	I0416 00:01:37.861495       1 main.go:250] Node ha-694782-m02 has CIDR [10.244.1.0/24] 
	I0416 00:01:37.861624       1 main.go:223] Handling node with IPs: map[192.168.39.202:{}]
	I0416 00:01:37.861645       1 main.go:250] Node ha-694782-m03 has CIDR [10.244.2.0/24] 
	I0416 00:01:37.861792       1 main.go:223] Handling node with IPs: map[192.168.39.107:{}]
	I0416 00:01:37.861819       1 main.go:250] Node ha-694782-m04 has CIDR [10.244.3.0/24] 
	I0416 00:01:47.867341       1 main.go:223] Handling node with IPs: map[192.168.39.41:{}]
	I0416 00:01:47.867367       1 main.go:227] handling current node
	I0416 00:01:47.867376       1 main.go:223] Handling node with IPs: map[192.168.39.42:{}]
	I0416 00:01:47.867381       1 main.go:250] Node ha-694782-m02 has CIDR [10.244.1.0/24] 
	I0416 00:01:47.867476       1 main.go:223] Handling node with IPs: map[192.168.39.202:{}]
	I0416 00:01:47.867481       1 main.go:250] Node ha-694782-m03 has CIDR [10.244.2.0/24] 
	I0416 00:01:47.867533       1 main.go:223] Handling node with IPs: map[192.168.39.107:{}]
	I0416 00:01:47.867541       1 main.go:250] Node ha-694782-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [7d4ea2215ec6217956d87feb4c68ad8ace3136456a7bc720dcc7c721b87f66f4] <==
	I0415 23:55:29.670734       1 shared_informer.go:318] Caches are synced for configmaps
	I0415 23:55:29.670878       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0415 23:55:29.670945       1 aggregator.go:165] initial CRD sync complete...
	I0415 23:55:29.670952       1 autoregister_controller.go:141] Starting autoregister controller
	I0415 23:55:29.670956       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0415 23:55:29.670960       1 cache.go:39] Caches are synced for autoregister controller
	I0415 23:55:29.672331       1 controller.go:624] quota admission added evaluator for: namespaces
	E0415 23:55:29.812092       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	E0415 23:55:29.812727       1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I0415 23:55:29.918840       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0415 23:55:30.572594       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0415 23:55:30.578644       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0415 23:55:30.579306       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0415 23:55:31.198256       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0415 23:55:31.246337       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0415 23:55:31.419633       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0415 23:55:31.427274       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.41]
	I0415 23:55:31.428318       1 controller.go:624] quota admission added evaluator for: endpoints
	I0415 23:55:31.432531       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0415 23:55:31.663618       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0415 23:55:33.380844       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0415 23:55:33.398693       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0415 23:55:33.417401       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0415 23:55:45.411712       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0415 23:55:45.621074       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [8a682dce5ef12dd6c80bdebd8ef67c034ebd1c88d5e144fc177805ad5eb35efe] <==
	I0415 23:58:23.762552       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="12.256023ms"
	I0415 23:58:23.762667       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="52.183µs"
	I0415 23:58:56.024795       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-694782-m04\" does not exist"
	I0415 23:58:56.067843       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-mgwnv"
	I0415 23:58:56.074898       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-k6vbr"
	I0415 23:58:56.081101       1 range_allocator.go:380] "Set node PodCIDR" node="ha-694782-m04" podCIDRs=["10.244.3.0/24"]
	I0415 23:58:56.197877       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-2brgt"
	I0415 23:58:56.212568       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-h2nvb"
	I0415 23:58:56.294753       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-gh9x2"
	I0415 23:58:56.321809       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-8kkvs"
	I0415 23:59:00.140788       1 event.go:376] "Event occurred" object="ha-694782-m04" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-694782-m04 event: Registered Node ha-694782-m04 in Controller"
	I0415 23:59:00.155111       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-694782-m04"
	I0415 23:59:06.052830       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-694782-m04"
	I0416 00:00:00.182615       1 event.go:376] "Event occurred" object="ha-694782-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node ha-694782-m02 status is now: NodeNotReady"
	I0416 00:00:00.183530       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-694782-m04"
	I0416 00:00:00.210373       1 event.go:376] "Event occurred" object="kube-system/kube-controller-manager-ha-694782-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0416 00:00:00.224910       1 event.go:376] "Event occurred" object="kube-system/kube-scheduler-ha-694782-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0416 00:00:00.235421       1 event.go:376] "Event occurred" object="kube-system/kube-apiserver-ha-694782-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0416 00:00:00.250267       1 event.go:376] "Event occurred" object="kube-system/etcd-ha-694782-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0416 00:00:00.267440       1 event.go:376] "Event occurred" object="kube-system/kube-vip-ha-694782-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0416 00:00:00.279200       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-bwtdm" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0416 00:00:00.298614       1 event.go:376] "Event occurred" object="kube-system/kindnet-qvp8b" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0416 00:00:00.321143       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-vbfhn" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0416 00:00:00.323713       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="44.017204ms"
	I0416 00:00:00.323965       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="108.433µs"
	
	
	==> kube-proxy [b55cb00c20162f1cfd9e72b8001f61983630aeb30b827f36d39067dae5d359d7] <==
	I0415 23:55:46.624679       1 server_others.go:72] "Using iptables proxy"
	I0415 23:55:46.653543       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.41"]
	I0415 23:55:46.725353       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0415 23:55:46.725372       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0415 23:55:46.725384       1 server_others.go:168] "Using iptables Proxier"
	I0415 23:55:46.730419       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0415 23:55:46.730581       1 server.go:865] "Version info" version="v1.29.3"
	I0415 23:55:46.730600       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0415 23:55:46.732809       1 config.go:188] "Starting service config controller"
	I0415 23:55:46.733116       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0415 23:55:46.733144       1 config.go:97] "Starting endpoint slice config controller"
	I0415 23:55:46.733149       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0415 23:55:46.733951       1 config.go:315] "Starting node config controller"
	I0415 23:55:46.733961       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0415 23:55:46.835103       1 shared_informer.go:318] Caches are synced for node config
	I0415 23:55:46.835137       1 shared_informer.go:318] Caches are synced for service config
	I0415 23:55:46.835157       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [553d7f07f43e6f068bf41c8f0562f161939b4c2f6b1241c11c0db16309a6cbdf] <==
	W0415 23:55:30.600571       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0415 23:55:30.600595       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0415 23:55:30.642082       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0415 23:55:30.642208       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0415 23:55:30.655852       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0415 23:55:30.656108       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0415 23:55:30.722714       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0415 23:55:30.722781       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0415 23:55:30.757758       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0415 23:55:30.757863       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0415 23:55:30.786379       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0415 23:55:30.786428       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0415 23:55:30.858574       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0415 23:55:30.858603       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0415 23:55:30.892647       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0415 23:55:30.892743       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0415 23:55:32.669448       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0415 23:57:53.766559       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-hlb2j\": pod kube-proxy-hlb2j is already assigned to node \"ha-694782-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-hlb2j" node="ha-694782-m03"
	E0415 23:57:53.766722       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod f47bcc6b-e6f8-4a54-8ac4-67c188acf8aa(kube-system/kube-proxy-hlb2j) wasn't assumed so cannot be forgotten"
	E0415 23:57:53.766801       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-hlb2j\": pod kube-proxy-hlb2j is already assigned to node \"ha-694782-m03\"" pod="kube-system/kube-proxy-hlb2j"
	I0415 23:57:53.766864       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-hlb2j" node="ha-694782-m03"
	E0415 23:58:56.095615       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-mgwnv\": pod kube-proxy-mgwnv is already assigned to node \"ha-694782-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-mgwnv" node="ha-694782-m04"
	E0415 23:58:56.095678       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod af3ebac5-b22c-4783-83f7-63f1b57b9f86(kube-system/kube-proxy-mgwnv) wasn't assumed so cannot be forgotten"
	E0415 23:58:56.095707       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-mgwnv\": pod kube-proxy-mgwnv is already assigned to node \"ha-694782-m04\"" pod="kube-system/kube-proxy-mgwnv"
	I0415 23:58:56.095723       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-mgwnv" node="ha-694782-m04"
	
	
	==> kubelet <==
	Apr 15 23:57:33 ha-694782 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 15 23:57:33 ha-694782 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 15 23:58:19 ha-694782 kubelet[1369]: I0415 23:58:19.003175    1369 topology_manager.go:215] "Topology Admit Handler" podUID="d510538f-3535-428b-8933-e3d6de6777eb" podNamespace="default" podName="busybox-7fdf7869d9-vsvrq"
	Apr 15 23:58:19 ha-694782 kubelet[1369]: I0415 23:58:19.149528    1369 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hj92\" (UniqueName: \"kubernetes.io/projected/d510538f-3535-428b-8933-e3d6de6777eb-kube-api-access-2hj92\") pod \"busybox-7fdf7869d9-vsvrq\" (UID: \"d510538f-3535-428b-8933-e3d6de6777eb\") " pod="default/busybox-7fdf7869d9-vsvrq"
	Apr 15 23:58:22 ha-694782 kubelet[1369]: I0415 23:58:22.433512    1369 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-7fdf7869d9-vsvrq" podStartSLOduration=1.991875177 podStartE2EDuration="4.43343242s" podCreationTimestamp="2024-04-15 23:58:18 +0000 UTC" firstStartedPulling="2024-04-15 23:58:19.549333261 +0000 UTC m=+166.197827100" lastFinishedPulling="2024-04-15 23:58:21.990890504 +0000 UTC m=+168.639384343" observedRunningTime="2024-04-15 23:58:22.432896588 +0000 UTC m=+169.081390433" watchObservedRunningTime="2024-04-15 23:58:22.43343242 +0000 UTC m=+169.081926279"
	Apr 15 23:58:33 ha-694782 kubelet[1369]: E0415 23:58:33.657345    1369 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 15 23:58:33 ha-694782 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 15 23:58:33 ha-694782 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 15 23:58:33 ha-694782 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 15 23:58:33 ha-694782 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 15 23:59:33 ha-694782 kubelet[1369]: E0415 23:59:33.657638    1369 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 15 23:59:33 ha-694782 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 15 23:59:33 ha-694782 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 15 23:59:33 ha-694782 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 15 23:59:33 ha-694782 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 00:00:33 ha-694782 kubelet[1369]: E0416 00:00:33.657855    1369 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 00:00:33 ha-694782 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 00:00:33 ha-694782 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 00:00:33 ha-694782 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 00:00:33 ha-694782 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 00:01:33 ha-694782 kubelet[1369]: E0416 00:01:33.656753    1369 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 00:01:33 ha-694782 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 00:01:33 ha-694782 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 00:01:33 ha-694782 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 00:01:33 ha-694782 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-694782 -n ha-694782
helpers_test.go:261: (dbg) Run:  kubectl --context ha-694782 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.92s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (60.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-694782 status -v=7 --alsologtostderr: exit status 3 (3.198602994s)

                                                
                                                
-- stdout --
	ha-694782
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-694782-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-694782-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-694782-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0416 00:01:52.435283   30229 out.go:291] Setting OutFile to fd 1 ...
	I0416 00:01:52.435378   30229 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:01:52.435383   30229 out.go:304] Setting ErrFile to fd 2...
	I0416 00:01:52.435386   30229 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:01:52.435606   30229 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-7542/.minikube/bin
	I0416 00:01:52.435829   30229 out.go:298] Setting JSON to false
	I0416 00:01:52.435870   30229 mustload.go:65] Loading cluster: ha-694782
	I0416 00:01:52.435908   30229 notify.go:220] Checking for updates...
	I0416 00:01:52.436403   30229 config.go:182] Loaded profile config "ha-694782": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 00:01:52.436427   30229 status.go:255] checking status of ha-694782 ...
	I0416 00:01:52.436878   30229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:01:52.436916   30229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:01:52.455406   30229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35901
	I0416 00:01:52.455856   30229 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:01:52.456456   30229 main.go:141] libmachine: Using API Version  1
	I0416 00:01:52.456489   30229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:01:52.456840   30229 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:01:52.457043   30229 main.go:141] libmachine: (ha-694782) Calling .GetState
	I0416 00:01:52.458737   30229 status.go:330] ha-694782 host status = "Running" (err=<nil>)
	I0416 00:01:52.458754   30229 host.go:66] Checking if "ha-694782" exists ...
	I0416 00:01:52.459066   30229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:01:52.459104   30229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:01:52.473906   30229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38841
	I0416 00:01:52.474361   30229 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:01:52.474837   30229 main.go:141] libmachine: Using API Version  1
	I0416 00:01:52.474859   30229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:01:52.475213   30229 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:01:52.475424   30229 main.go:141] libmachine: (ha-694782) Calling .GetIP
	I0416 00:01:52.478552   30229 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:01:52.479121   30229 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0416 00:01:52.479156   30229 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:01:52.479258   30229 host.go:66] Checking if "ha-694782" exists ...
	I0416 00:01:52.479542   30229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:01:52.479585   30229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:01:52.494899   30229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33017
	I0416 00:01:52.495331   30229 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:01:52.495729   30229 main.go:141] libmachine: Using API Version  1
	I0416 00:01:52.495752   30229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:01:52.496118   30229 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:01:52.496331   30229 main.go:141] libmachine: (ha-694782) Calling .DriverName
	I0416 00:01:52.496559   30229 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 00:01:52.496594   30229 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0416 00:01:52.499973   30229 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:01:52.500491   30229 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0416 00:01:52.500519   30229 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:01:52.500685   30229 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0416 00:01:52.500875   30229 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0416 00:01:52.501052   30229 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0416 00:01:52.501242   30229 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782/id_rsa Username:docker}
	I0416 00:01:52.589371   30229 ssh_runner.go:195] Run: systemctl --version
	I0416 00:01:52.595726   30229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 00:01:52.612281   30229 kubeconfig.go:125] found "ha-694782" server: "https://192.168.39.254:8443"
	I0416 00:01:52.612315   30229 api_server.go:166] Checking apiserver status ...
	I0416 00:01:52.612351   30229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 00:01:52.628105   30229 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1182/cgroup
	W0416 00:01:52.638708   30229 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1182/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0416 00:01:52.638756   30229 ssh_runner.go:195] Run: ls
	I0416 00:01:52.643181   30229 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0416 00:01:52.647479   30229 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0416 00:01:52.647502   30229 status.go:422] ha-694782 apiserver status = Running (err=<nil>)
	I0416 00:01:52.647514   30229 status.go:257] ha-694782 status: &{Name:ha-694782 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 00:01:52.647540   30229 status.go:255] checking status of ha-694782-m02 ...
	I0416 00:01:52.647810   30229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:01:52.647839   30229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:01:52.663957   30229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46767
	I0416 00:01:52.664465   30229 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:01:52.665014   30229 main.go:141] libmachine: Using API Version  1
	I0416 00:01:52.665039   30229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:01:52.665390   30229 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:01:52.665580   30229 main.go:141] libmachine: (ha-694782-m02) Calling .GetState
	I0416 00:01:52.667356   30229 status.go:330] ha-694782-m02 host status = "Running" (err=<nil>)
	I0416 00:01:52.667378   30229 host.go:66] Checking if "ha-694782-m02" exists ...
	I0416 00:01:52.667657   30229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:01:52.667689   30229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:01:52.683447   30229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44749
	I0416 00:01:52.683794   30229 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:01:52.684302   30229 main.go:141] libmachine: Using API Version  1
	I0416 00:01:52.684326   30229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:01:52.684620   30229 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:01:52.684794   30229 main.go:141] libmachine: (ha-694782-m02) Calling .GetIP
	I0416 00:01:52.687646   30229 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0416 00:01:52.688099   30229 main.go:141] libmachine: (ha-694782-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:e2:c3", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:56:01 +0000 UTC Type:0 Mac:52:54:00:70:e2:c3 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ha-694782-m02 Clientid:01:52:54:00:70:e2:c3}
	I0416 00:01:52.688143   30229 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined IP address 192.168.39.42 and MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0416 00:01:52.688389   30229 host.go:66] Checking if "ha-694782-m02" exists ...
	I0416 00:01:52.688735   30229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:01:52.688772   30229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:01:52.703751   30229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44739
	I0416 00:01:52.704180   30229 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:01:52.704661   30229 main.go:141] libmachine: Using API Version  1
	I0416 00:01:52.704676   30229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:01:52.705014   30229 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:01:52.705217   30229 main.go:141] libmachine: (ha-694782-m02) Calling .DriverName
	I0416 00:01:52.705408   30229 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 00:01:52.705432   30229 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHHostname
	I0416 00:01:52.708352   30229 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0416 00:01:52.708733   30229 main.go:141] libmachine: (ha-694782-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:e2:c3", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:56:01 +0000 UTC Type:0 Mac:52:54:00:70:e2:c3 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ha-694782-m02 Clientid:01:52:54:00:70:e2:c3}
	I0416 00:01:52.708751   30229 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined IP address 192.168.39.42 and MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0416 00:01:52.708870   30229 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHPort
	I0416 00:01:52.709031   30229 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHKeyPath
	I0416 00:01:52.709204   30229 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHUsername
	I0416 00:01:52.709352   30229 sshutil.go:53] new ssh client: &{IP:192.168.39.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m02/id_rsa Username:docker}
	W0416 00:01:55.225410   30229 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.42:22: connect: no route to host
	W0416 00:01:55.225536   30229 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.42:22: connect: no route to host
	E0416 00:01:55.225561   30229 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.42:22: connect: no route to host
	I0416 00:01:55.225575   30229 status.go:257] ha-694782-m02 status: &{Name:ha-694782-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0416 00:01:55.225597   30229 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.42:22: connect: no route to host
	I0416 00:01:55.225610   30229 status.go:255] checking status of ha-694782-m03 ...
	I0416 00:01:55.226115   30229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:01:55.226176   30229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:01:55.240908   30229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40635
	I0416 00:01:55.241417   30229 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:01:55.241866   30229 main.go:141] libmachine: Using API Version  1
	I0416 00:01:55.241893   30229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:01:55.242209   30229 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:01:55.242410   30229 main.go:141] libmachine: (ha-694782-m03) Calling .GetState
	I0416 00:01:55.243942   30229 status.go:330] ha-694782-m03 host status = "Running" (err=<nil>)
	I0416 00:01:55.243954   30229 host.go:66] Checking if "ha-694782-m03" exists ...
	I0416 00:01:55.244256   30229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:01:55.244293   30229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:01:55.258517   30229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40495
	I0416 00:01:55.258879   30229 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:01:55.259396   30229 main.go:141] libmachine: Using API Version  1
	I0416 00:01:55.259420   30229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:01:55.259758   30229 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:01:55.259969   30229 main.go:141] libmachine: (ha-694782-m03) Calling .GetIP
	I0416 00:01:55.262543   30229 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0416 00:01:55.262962   30229 main.go:141] libmachine: (ha-694782-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a7:e5", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:57:21 +0000 UTC Type:0 Mac:52:54:00:fc:a7:e5 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-694782-m03 Clientid:01:52:54:00:fc:a7:e5}
	I0416 00:01:55.262993   30229 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0416 00:01:55.263094   30229 host.go:66] Checking if "ha-694782-m03" exists ...
	I0416 00:01:55.263391   30229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:01:55.263438   30229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:01:55.278898   30229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44957
	I0416 00:01:55.279384   30229 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:01:55.279894   30229 main.go:141] libmachine: Using API Version  1
	I0416 00:01:55.279921   30229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:01:55.280329   30229 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:01:55.280568   30229 main.go:141] libmachine: (ha-694782-m03) Calling .DriverName
	I0416 00:01:55.280781   30229 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 00:01:55.280805   30229 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHHostname
	I0416 00:01:55.283805   30229 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0416 00:01:55.284228   30229 main.go:141] libmachine: (ha-694782-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a7:e5", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:57:21 +0000 UTC Type:0 Mac:52:54:00:fc:a7:e5 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-694782-m03 Clientid:01:52:54:00:fc:a7:e5}
	I0416 00:01:55.284256   30229 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0416 00:01:55.284483   30229 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHPort
	I0416 00:01:55.284676   30229 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHKeyPath
	I0416 00:01:55.284858   30229 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHUsername
	I0416 00:01:55.285016   30229 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m03/id_rsa Username:docker}
	I0416 00:01:55.365011   30229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 00:01:55.380864   30229 kubeconfig.go:125] found "ha-694782" server: "https://192.168.39.254:8443"
	I0416 00:01:55.380891   30229 api_server.go:166] Checking apiserver status ...
	I0416 00:01:55.380921   30229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 00:01:55.397079   30229 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1540/cgroup
	W0416 00:01:55.411012   30229 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1540/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0416 00:01:55.411059   30229 ssh_runner.go:195] Run: ls
	I0416 00:01:55.415691   30229 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0416 00:01:55.421687   30229 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0416 00:01:55.421709   30229 status.go:422] ha-694782-m03 apiserver status = Running (err=<nil>)
	I0416 00:01:55.421717   30229 status.go:257] ha-694782-m03 status: &{Name:ha-694782-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 00:01:55.421732   30229 status.go:255] checking status of ha-694782-m04 ...
	I0416 00:01:55.422058   30229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:01:55.422092   30229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:01:55.436829   30229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33735
	I0416 00:01:55.437258   30229 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:01:55.437706   30229 main.go:141] libmachine: Using API Version  1
	I0416 00:01:55.437729   30229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:01:55.438056   30229 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:01:55.438250   30229 main.go:141] libmachine: (ha-694782-m04) Calling .GetState
	I0416 00:01:55.439863   30229 status.go:330] ha-694782-m04 host status = "Running" (err=<nil>)
	I0416 00:01:55.439880   30229 host.go:66] Checking if "ha-694782-m04" exists ...
	I0416 00:01:55.440143   30229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:01:55.440189   30229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:01:55.454532   30229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34969
	I0416 00:01:55.454874   30229 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:01:55.455312   30229 main.go:141] libmachine: Using API Version  1
	I0416 00:01:55.455333   30229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:01:55.455608   30229 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:01:55.455775   30229 main.go:141] libmachine: (ha-694782-m04) Calling .GetIP
	I0416 00:01:55.458430   30229 main.go:141] libmachine: (ha-694782-m04) DBG | domain ha-694782-m04 has defined MAC address 52:54:00:18:7d:b0 in network mk-ha-694782
	I0416 00:01:55.458876   30229 main.go:141] libmachine: (ha-694782-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:7d:b0", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:58:43 +0000 UTC Type:0 Mac:52:54:00:18:7d:b0 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:ha-694782-m04 Clientid:01:52:54:00:18:7d:b0}
	I0416 00:01:55.458911   30229 main.go:141] libmachine: (ha-694782-m04) DBG | domain ha-694782-m04 has defined IP address 192.168.39.107 and MAC address 52:54:00:18:7d:b0 in network mk-ha-694782
	I0416 00:01:55.459061   30229 host.go:66] Checking if "ha-694782-m04" exists ...
	I0416 00:01:55.459338   30229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:01:55.459371   30229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:01:55.474012   30229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42813
	I0416 00:01:55.474434   30229 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:01:55.474864   30229 main.go:141] libmachine: Using API Version  1
	I0416 00:01:55.474888   30229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:01:55.475205   30229 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:01:55.475391   30229 main.go:141] libmachine: (ha-694782-m04) Calling .DriverName
	I0416 00:01:55.475566   30229 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 00:01:55.475587   30229 main.go:141] libmachine: (ha-694782-m04) Calling .GetSSHHostname
	I0416 00:01:55.478271   30229 main.go:141] libmachine: (ha-694782-m04) DBG | domain ha-694782-m04 has defined MAC address 52:54:00:18:7d:b0 in network mk-ha-694782
	I0416 00:01:55.478716   30229 main.go:141] libmachine: (ha-694782-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:7d:b0", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:58:43 +0000 UTC Type:0 Mac:52:54:00:18:7d:b0 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:ha-694782-m04 Clientid:01:52:54:00:18:7d:b0}
	I0416 00:01:55.478750   30229 main.go:141] libmachine: (ha-694782-m04) DBG | domain ha-694782-m04 has defined IP address 192.168.39.107 and MAC address 52:54:00:18:7d:b0 in network mk-ha-694782
	I0416 00:01:55.478937   30229 main.go:141] libmachine: (ha-694782-m04) Calling .GetSSHPort
	I0416 00:01:55.479123   30229 main.go:141] libmachine: (ha-694782-m04) Calling .GetSSHKeyPath
	I0416 00:01:55.479299   30229 main.go:141] libmachine: (ha-694782-m04) Calling .GetSSHUsername
	I0416 00:01:55.479448   30229 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m04/id_rsa Username:docker}
	I0416 00:01:55.562702   30229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 00:01:55.580908   30229 status.go:257] ha-694782-m04 status: &{Name:ha-694782-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
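Every "host: Error / kubelet: Nonexistent" row for ha-694782-m02 above comes from the same step: before it can run df -h /var or ask systemctl about the kubelet, minikube status opens an SSH session to the node, and the dial to 192.168.39.42:22 keeps failing with "no route to host", so the remaining checks are skipped. A quick way to tell from the CI host whether the guest is merely unreachable or actually gone (hypothetical diagnostic commands for the KVM/libvirt driver used in this job, not something the test runs):

    # Is anything answering on the node's SSH port?
    nc -vz -w 5 192.168.39.42 22
    # Is the libvirt domain for the secondary control plane still defined and running?
    sudo virsh list --all | grep ha-694782-m02
    sudo virsh domifaddr ha-694782-m02
    # Once the VM is reachable again, the status command from the log should stop returning exit status 3:
    out/minikube-linux-amd64 -p ha-694782 status -v=7 --alsologtostderr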
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-694782 status -v=7 --alsologtostderr: exit status 3 (4.985159966s)

                                                
                                                
-- stdout --
	ha-694782
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-694782-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-694782-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-694782-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0416 00:01:56.791497   30330 out.go:291] Setting OutFile to fd 1 ...
	I0416 00:01:56.791611   30330 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:01:56.791624   30330 out.go:304] Setting ErrFile to fd 2...
	I0416 00:01:56.791629   30330 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:01:56.792351   30330 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-7542/.minikube/bin
	I0416 00:01:56.792643   30330 out.go:298] Setting JSON to false
	I0416 00:01:56.792672   30330 mustload.go:65] Loading cluster: ha-694782
	I0416 00:01:56.792869   30330 notify.go:220] Checking for updates...
	I0416 00:01:56.793603   30330 config.go:182] Loaded profile config "ha-694782": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 00:01:56.793625   30330 status.go:255] checking status of ha-694782 ...
	I0416 00:01:56.794015   30330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:01:56.794060   30330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:01:56.808719   30330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46083
	I0416 00:01:56.809141   30330 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:01:56.809795   30330 main.go:141] libmachine: Using API Version  1
	I0416 00:01:56.809819   30330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:01:56.810188   30330 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:01:56.810566   30330 main.go:141] libmachine: (ha-694782) Calling .GetState
	I0416 00:01:56.812224   30330 status.go:330] ha-694782 host status = "Running" (err=<nil>)
	I0416 00:01:56.812241   30330 host.go:66] Checking if "ha-694782" exists ...
	I0416 00:01:56.812565   30330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:01:56.812602   30330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:01:56.827576   30330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39839
	I0416 00:01:56.827979   30330 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:01:56.828453   30330 main.go:141] libmachine: Using API Version  1
	I0416 00:01:56.828473   30330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:01:56.828769   30330 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:01:56.828971   30330 main.go:141] libmachine: (ha-694782) Calling .GetIP
	I0416 00:01:56.831783   30330 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:01:56.832171   30330 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0416 00:01:56.832199   30330 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:01:56.832327   30330 host.go:66] Checking if "ha-694782" exists ...
	I0416 00:01:56.832721   30330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:01:56.832770   30330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:01:56.847714   30330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37071
	I0416 00:01:56.848133   30330 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:01:56.848682   30330 main.go:141] libmachine: Using API Version  1
	I0416 00:01:56.848708   30330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:01:56.849056   30330 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:01:56.849253   30330 main.go:141] libmachine: (ha-694782) Calling .DriverName
	I0416 00:01:56.849443   30330 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 00:01:56.849487   30330 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0416 00:01:56.852233   30330 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:01:56.852619   30330 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0416 00:01:56.852641   30330 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:01:56.852811   30330 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0416 00:01:56.852984   30330 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0416 00:01:56.853145   30330 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0416 00:01:56.853331   30330 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782/id_rsa Username:docker}
	I0416 00:01:56.941241   30330 ssh_runner.go:195] Run: systemctl --version
	I0416 00:01:56.948901   30330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 00:01:56.964263   30330 kubeconfig.go:125] found "ha-694782" server: "https://192.168.39.254:8443"
	I0416 00:01:56.964292   30330 api_server.go:166] Checking apiserver status ...
	I0416 00:01:56.964327   30330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 00:01:56.978500   30330 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1182/cgroup
	W0416 00:01:56.989046   30330 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1182/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0416 00:01:56.989098   30330 ssh_runner.go:195] Run: ls
	I0416 00:01:56.993455   30330 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0416 00:01:56.998084   30330 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0416 00:01:56.998103   30330 status.go:422] ha-694782 apiserver status = Running (err=<nil>)
	I0416 00:01:56.998119   30330 status.go:257] ha-694782 status: &{Name:ha-694782 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 00:01:56.998134   30330 status.go:255] checking status of ha-694782-m02 ...
	I0416 00:01:56.998416   30330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:01:56.998447   30330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:01:57.012656   30330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41623
	I0416 00:01:57.013063   30330 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:01:57.013528   30330 main.go:141] libmachine: Using API Version  1
	I0416 00:01:57.013544   30330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:01:57.013884   30330 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:01:57.014089   30330 main.go:141] libmachine: (ha-694782-m02) Calling .GetState
	I0416 00:01:57.015738   30330 status.go:330] ha-694782-m02 host status = "Running" (err=<nil>)
	I0416 00:01:57.015754   30330 host.go:66] Checking if "ha-694782-m02" exists ...
	I0416 00:01:57.016131   30330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:01:57.016190   30330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:01:57.030907   30330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37165
	I0416 00:01:57.031289   30330 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:01:57.031769   30330 main.go:141] libmachine: Using API Version  1
	I0416 00:01:57.031798   30330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:01:57.032180   30330 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:01:57.032408   30330 main.go:141] libmachine: (ha-694782-m02) Calling .GetIP
	I0416 00:01:57.034962   30330 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0416 00:01:57.035355   30330 main.go:141] libmachine: (ha-694782-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:e2:c3", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:56:01 +0000 UTC Type:0 Mac:52:54:00:70:e2:c3 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ha-694782-m02 Clientid:01:52:54:00:70:e2:c3}
	I0416 00:01:57.035399   30330 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined IP address 192.168.39.42 and MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0416 00:01:57.035502   30330 host.go:66] Checking if "ha-694782-m02" exists ...
	I0416 00:01:57.035868   30330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:01:57.035918   30330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:01:57.050931   30330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39615
	I0416 00:01:57.051373   30330 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:01:57.051821   30330 main.go:141] libmachine: Using API Version  1
	I0416 00:01:57.051841   30330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:01:57.052183   30330 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:01:57.052347   30330 main.go:141] libmachine: (ha-694782-m02) Calling .DriverName
	I0416 00:01:57.052560   30330 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 00:01:57.052583   30330 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHHostname
	I0416 00:01:57.055628   30330 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0416 00:01:57.056038   30330 main.go:141] libmachine: (ha-694782-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:e2:c3", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:56:01 +0000 UTC Type:0 Mac:52:54:00:70:e2:c3 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ha-694782-m02 Clientid:01:52:54:00:70:e2:c3}
	I0416 00:01:57.056070   30330 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined IP address 192.168.39.42 and MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0416 00:01:57.056185   30330 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHPort
	I0416 00:01:57.056349   30330 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHKeyPath
	I0416 00:01:57.056518   30330 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHUsername
	I0416 00:01:57.056648   30330 sshutil.go:53] new ssh client: &{IP:192.168.39.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m02/id_rsa Username:docker}
	W0416 00:01:58.297510   30330 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.42:22: connect: no route to host
	I0416 00:01:58.297553   30330 retry.go:31] will retry after 158.595586ms: dial tcp 192.168.39.42:22: connect: no route to host
	W0416 00:02:01.369451   30330 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.42:22: connect: no route to host
	W0416 00:02:01.369568   30330 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.42:22: connect: no route to host
	E0416 00:02:01.369594   30330 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.42:22: connect: no route to host
	I0416 00:02:01.369605   30330 status.go:257] ha-694782-m02 status: &{Name:ha-694782-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0416 00:02:01.369636   30330 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.42:22: connect: no route to host
	I0416 00:02:01.369650   30330 status.go:255] checking status of ha-694782-m03 ...
	I0416 00:02:01.369962   30330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:01.370016   30330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:01.384420   30330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42159
	I0416 00:02:01.384905   30330 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:01.385465   30330 main.go:141] libmachine: Using API Version  1
	I0416 00:02:01.385486   30330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:01.385777   30330 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:01.385960   30330 main.go:141] libmachine: (ha-694782-m03) Calling .GetState
	I0416 00:02:01.387364   30330 status.go:330] ha-694782-m03 host status = "Running" (err=<nil>)
	I0416 00:02:01.387378   30330 host.go:66] Checking if "ha-694782-m03" exists ...
	I0416 00:02:01.387662   30330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:01.387697   30330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:01.401820   30330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36117
	I0416 00:02:01.402262   30330 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:01.402736   30330 main.go:141] libmachine: Using API Version  1
	I0416 00:02:01.402756   30330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:01.403017   30330 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:01.403216   30330 main.go:141] libmachine: (ha-694782-m03) Calling .GetIP
	I0416 00:02:01.405935   30330 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0416 00:02:01.406274   30330 main.go:141] libmachine: (ha-694782-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a7:e5", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:57:21 +0000 UTC Type:0 Mac:52:54:00:fc:a7:e5 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-694782-m03 Clientid:01:52:54:00:fc:a7:e5}
	I0416 00:02:01.406290   30330 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0416 00:02:01.406490   30330 host.go:66] Checking if "ha-694782-m03" exists ...
	I0416 00:02:01.406790   30330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:01.406824   30330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:01.421434   30330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43191
	I0416 00:02:01.421813   30330 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:01.422290   30330 main.go:141] libmachine: Using API Version  1
	I0416 00:02:01.422311   30330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:01.422620   30330 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:01.422873   30330 main.go:141] libmachine: (ha-694782-m03) Calling .DriverName
	I0416 00:02:01.423098   30330 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 00:02:01.423120   30330 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHHostname
	I0416 00:02:01.425874   30330 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0416 00:02:01.426267   30330 main.go:141] libmachine: (ha-694782-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a7:e5", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:57:21 +0000 UTC Type:0 Mac:52:54:00:fc:a7:e5 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-694782-m03 Clientid:01:52:54:00:fc:a7:e5}
	I0416 00:02:01.426305   30330 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0416 00:02:01.426473   30330 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHPort
	I0416 00:02:01.426656   30330 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHKeyPath
	I0416 00:02:01.426804   30330 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHUsername
	I0416 00:02:01.426945   30330 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m03/id_rsa Username:docker}
	I0416 00:02:01.511692   30330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 00:02:01.528365   30330 kubeconfig.go:125] found "ha-694782" server: "https://192.168.39.254:8443"
	I0416 00:02:01.528394   30330 api_server.go:166] Checking apiserver status ...
	I0416 00:02:01.528436   30330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 00:02:01.542810   30330 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1540/cgroup
	W0416 00:02:01.553961   30330 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1540/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0416 00:02:01.554012   30330 ssh_runner.go:195] Run: ls
	I0416 00:02:01.558529   30330 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0416 00:02:01.563388   30330 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0416 00:02:01.563410   30330 status.go:422] ha-694782-m03 apiserver status = Running (err=<nil>)
	I0416 00:02:01.563421   30330 status.go:257] ha-694782-m03 status: &{Name:ha-694782-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 00:02:01.563461   30330 status.go:255] checking status of ha-694782-m04 ...
	I0416 00:02:01.563741   30330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:01.563779   30330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:01.578587   30330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44443
	I0416 00:02:01.578999   30330 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:01.579438   30330 main.go:141] libmachine: Using API Version  1
	I0416 00:02:01.579452   30330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:01.579772   30330 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:01.579963   30330 main.go:141] libmachine: (ha-694782-m04) Calling .GetState
	I0416 00:02:01.581386   30330 status.go:330] ha-694782-m04 host status = "Running" (err=<nil>)
	I0416 00:02:01.581402   30330 host.go:66] Checking if "ha-694782-m04" exists ...
	I0416 00:02:01.581679   30330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:01.581711   30330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:01.596073   30330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37173
	I0416 00:02:01.596467   30330 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:01.596909   30330 main.go:141] libmachine: Using API Version  1
	I0416 00:02:01.596933   30330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:01.597280   30330 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:01.597474   30330 main.go:141] libmachine: (ha-694782-m04) Calling .GetIP
	I0416 00:02:01.600301   30330 main.go:141] libmachine: (ha-694782-m04) DBG | domain ha-694782-m04 has defined MAC address 52:54:00:18:7d:b0 in network mk-ha-694782
	I0416 00:02:01.600709   30330 main.go:141] libmachine: (ha-694782-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:7d:b0", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:58:43 +0000 UTC Type:0 Mac:52:54:00:18:7d:b0 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:ha-694782-m04 Clientid:01:52:54:00:18:7d:b0}
	I0416 00:02:01.600741   30330 main.go:141] libmachine: (ha-694782-m04) DBG | domain ha-694782-m04 has defined IP address 192.168.39.107 and MAC address 52:54:00:18:7d:b0 in network mk-ha-694782
	I0416 00:02:01.600858   30330 host.go:66] Checking if "ha-694782-m04" exists ...
	I0416 00:02:01.601281   30330 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:01.601324   30330 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:01.618109   30330 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38181
	I0416 00:02:01.618502   30330 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:01.619004   30330 main.go:141] libmachine: Using API Version  1
	I0416 00:02:01.619024   30330 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:01.619330   30330 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:01.619546   30330 main.go:141] libmachine: (ha-694782-m04) Calling .DriverName
	I0416 00:02:01.619713   30330 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 00:02:01.619730   30330 main.go:141] libmachine: (ha-694782-m04) Calling .GetSSHHostname
	I0416 00:02:01.622487   30330 main.go:141] libmachine: (ha-694782-m04) DBG | domain ha-694782-m04 has defined MAC address 52:54:00:18:7d:b0 in network mk-ha-694782
	I0416 00:02:01.622920   30330 main.go:141] libmachine: (ha-694782-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:7d:b0", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:58:43 +0000 UTC Type:0 Mac:52:54:00:18:7d:b0 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:ha-694782-m04 Clientid:01:52:54:00:18:7d:b0}
	I0416 00:02:01.622959   30330 main.go:141] libmachine: (ha-694782-m04) DBG | domain ha-694782-m04 has defined IP address 192.168.39.107 and MAC address 52:54:00:18:7d:b0 in network mk-ha-694782
	I0416 00:02:01.623055   30330 main.go:141] libmachine: (ha-694782-m04) Calling .GetSSHPort
	I0416 00:02:01.623248   30330 main.go:141] libmachine: (ha-694782-m04) Calling .GetSSHKeyPath
	I0416 00:02:01.623418   30330 main.go:141] libmachine: (ha-694782-m04) Calling .GetSSHUsername
	I0416 00:02:01.623525   30330 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m04/id_rsa Username:docker}
	I0416 00:02:01.704884   30330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 00:02:01.721151   30330 status.go:257] ha-694782-m04 status: &{Name:ha-694782-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
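For the nodes that do respond, the apiserver check in these logs reduces to a health probe against the shared control-plane endpoint; the earlier "unable to find freezer cgroup" warnings are non-fatal, and the code falls through to the HTTPS probe, which returns 200/ok. The same probe can be reproduced by hand (illustrative only; the -k flag skips certificate verification, and /healthz is assumed to allow unauthenticated reads as it does on a default cluster):

    # Probe the HA virtual IP used throughout the log
    curl -k https://192.168.39.254:8443/healthz
    # Or go through the kubeconfig context the tests use
    kubectl --context ha-694782 get --raw='/healthz'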
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-694782 status -v=7 --alsologtostderr: exit status 3 (4.964381896s)

                                                
                                                
-- stdout --
	ha-694782
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-694782-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-694782-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-694782-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0416 00:02:02.959850   30429 out.go:291] Setting OutFile to fd 1 ...
	I0416 00:02:02.960001   30429 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:02:02.960012   30429 out.go:304] Setting ErrFile to fd 2...
	I0416 00:02:02.960019   30429 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:02:02.960211   30429 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-7542/.minikube/bin
	I0416 00:02:02.960405   30429 out.go:298] Setting JSON to false
	I0416 00:02:02.960440   30429 mustload.go:65] Loading cluster: ha-694782
	I0416 00:02:02.960537   30429 notify.go:220] Checking for updates...
	I0416 00:02:02.960836   30429 config.go:182] Loaded profile config "ha-694782": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 00:02:02.960852   30429 status.go:255] checking status of ha-694782 ...
	I0416 00:02:02.961234   30429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:02.961304   30429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:02.976208   30429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39001
	I0416 00:02:02.976620   30429 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:02.977230   30429 main.go:141] libmachine: Using API Version  1
	I0416 00:02:02.977256   30429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:02.977603   30429 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:02.977827   30429 main.go:141] libmachine: (ha-694782) Calling .GetState
	I0416 00:02:02.979305   30429 status.go:330] ha-694782 host status = "Running" (err=<nil>)
	I0416 00:02:02.979318   30429 host.go:66] Checking if "ha-694782" exists ...
	I0416 00:02:02.979619   30429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:02.979655   30429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:02.994938   30429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42843
	I0416 00:02:02.995271   30429 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:02.995680   30429 main.go:141] libmachine: Using API Version  1
	I0416 00:02:02.995699   30429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:02.996034   30429 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:02.996219   30429 main.go:141] libmachine: (ha-694782) Calling .GetIP
	I0416 00:02:02.998928   30429 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:02:02.999373   30429 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0416 00:02:02.999409   30429 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:02:02.999506   30429 host.go:66] Checking if "ha-694782" exists ...
	I0416 00:02:02.999771   30429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:02.999815   30429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:03.014598   30429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35819
	I0416 00:02:03.014975   30429 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:03.015438   30429 main.go:141] libmachine: Using API Version  1
	I0416 00:02:03.015459   30429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:03.015771   30429 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:03.016016   30429 main.go:141] libmachine: (ha-694782) Calling .DriverName
	I0416 00:02:03.016210   30429 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 00:02:03.016236   30429 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0416 00:02:03.018858   30429 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:02:03.019225   30429 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0416 00:02:03.019259   30429 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:02:03.019376   30429 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0416 00:02:03.019535   30429 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0416 00:02:03.019667   30429 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0416 00:02:03.019792   30429 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782/id_rsa Username:docker}
	I0416 00:02:03.105452   30429 ssh_runner.go:195] Run: systemctl --version
	I0416 00:02:03.113725   30429 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 00:02:03.128425   30429 kubeconfig.go:125] found "ha-694782" server: "https://192.168.39.254:8443"
	I0416 00:02:03.128456   30429 api_server.go:166] Checking apiserver status ...
	I0416 00:02:03.128485   30429 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 00:02:03.144236   30429 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1182/cgroup
	W0416 00:02:03.153723   30429 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1182/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0416 00:02:03.153773   30429 ssh_runner.go:195] Run: ls
	I0416 00:02:03.158108   30429 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0416 00:02:03.163697   30429 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0416 00:02:03.163715   30429 status.go:422] ha-694782 apiserver status = Running (err=<nil>)
	I0416 00:02:03.163723   30429 status.go:257] ha-694782 status: &{Name:ha-694782 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 00:02:03.163737   30429 status.go:255] checking status of ha-694782-m02 ...
	I0416 00:02:03.164025   30429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:03.164083   30429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:03.179661   30429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37265
	I0416 00:02:03.180072   30429 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:03.180545   30429 main.go:141] libmachine: Using API Version  1
	I0416 00:02:03.180565   30429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:03.180865   30429 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:03.181027   30429 main.go:141] libmachine: (ha-694782-m02) Calling .GetState
	I0416 00:02:03.182492   30429 status.go:330] ha-694782-m02 host status = "Running" (err=<nil>)
	I0416 00:02:03.182508   30429 host.go:66] Checking if "ha-694782-m02" exists ...
	I0416 00:02:03.182792   30429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:03.182820   30429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:03.196691   30429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46805
	I0416 00:02:03.197085   30429 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:03.197514   30429 main.go:141] libmachine: Using API Version  1
	I0416 00:02:03.197535   30429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:03.197816   30429 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:03.197982   30429 main.go:141] libmachine: (ha-694782-m02) Calling .GetIP
	I0416 00:02:03.200789   30429 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0416 00:02:03.201306   30429 main.go:141] libmachine: (ha-694782-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:e2:c3", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:56:01 +0000 UTC Type:0 Mac:52:54:00:70:e2:c3 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ha-694782-m02 Clientid:01:52:54:00:70:e2:c3}
	I0416 00:02:03.201337   30429 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined IP address 192.168.39.42 and MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0416 00:02:03.201497   30429 host.go:66] Checking if "ha-694782-m02" exists ...
	I0416 00:02:03.201762   30429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:03.201797   30429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:03.215781   30429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45081
	I0416 00:02:03.216200   30429 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:03.216617   30429 main.go:141] libmachine: Using API Version  1
	I0416 00:02:03.216640   30429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:03.216958   30429 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:03.217141   30429 main.go:141] libmachine: (ha-694782-m02) Calling .DriverName
	I0416 00:02:03.217366   30429 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 00:02:03.217395   30429 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHHostname
	I0416 00:02:03.219842   30429 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0416 00:02:03.220186   30429 main.go:141] libmachine: (ha-694782-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:e2:c3", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:56:01 +0000 UTC Type:0 Mac:52:54:00:70:e2:c3 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ha-694782-m02 Clientid:01:52:54:00:70:e2:c3}
	I0416 00:02:03.220220   30429 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined IP address 192.168.39.42 and MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0416 00:02:03.220364   30429 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHPort
	I0416 00:02:03.220548   30429 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHKeyPath
	I0416 00:02:03.220709   30429 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHUsername
	I0416 00:02:03.220852   30429 sshutil.go:53] new ssh client: &{IP:192.168.39.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m02/id_rsa Username:docker}
	W0416 00:02:04.445411   30429 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.42:22: connect: no route to host
	I0416 00:02:04.445472   30429 retry.go:31] will retry after 189.305016ms: dial tcp 192.168.39.42:22: connect: no route to host
	W0416 00:02:07.517406   30429 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.42:22: connect: no route to host
	W0416 00:02:07.517536   30429 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.42:22: connect: no route to host
	E0416 00:02:07.517555   30429 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.42:22: connect: no route to host
	I0416 00:02:07.517562   30429 status.go:257] ha-694782-m02 status: &{Name:ha-694782-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0416 00:02:07.517590   30429 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.42:22: connect: no route to host
	I0416 00:02:07.517599   30429 status.go:255] checking status of ha-694782-m03 ...
	I0416 00:02:07.517963   30429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:07.518008   30429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:07.534842   30429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33331
	I0416 00:02:07.535250   30429 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:07.535775   30429 main.go:141] libmachine: Using API Version  1
	I0416 00:02:07.535802   30429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:07.536085   30429 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:07.536251   30429 main.go:141] libmachine: (ha-694782-m03) Calling .GetState
	I0416 00:02:07.537841   30429 status.go:330] ha-694782-m03 host status = "Running" (err=<nil>)
	I0416 00:02:07.537871   30429 host.go:66] Checking if "ha-694782-m03" exists ...
	I0416 00:02:07.538169   30429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:07.538211   30429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:07.552512   30429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34073
	I0416 00:02:07.552977   30429 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:07.553490   30429 main.go:141] libmachine: Using API Version  1
	I0416 00:02:07.553513   30429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:07.553801   30429 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:07.553971   30429 main.go:141] libmachine: (ha-694782-m03) Calling .GetIP
	I0416 00:02:07.557082   30429 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0416 00:02:07.557577   30429 main.go:141] libmachine: (ha-694782-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a7:e5", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:57:21 +0000 UTC Type:0 Mac:52:54:00:fc:a7:e5 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-694782-m03 Clientid:01:52:54:00:fc:a7:e5}
	I0416 00:02:07.557610   30429 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0416 00:02:07.558153   30429 host.go:66] Checking if "ha-694782-m03" exists ...
	I0416 00:02:07.558685   30429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:07.558724   30429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:07.574582   30429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37257
	I0416 00:02:07.575079   30429 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:07.575601   30429 main.go:141] libmachine: Using API Version  1
	I0416 00:02:07.575629   30429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:07.575996   30429 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:07.576226   30429 main.go:141] libmachine: (ha-694782-m03) Calling .DriverName
	I0416 00:02:07.576438   30429 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 00:02:07.576457   30429 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHHostname
	I0416 00:02:07.579200   30429 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0416 00:02:07.579632   30429 main.go:141] libmachine: (ha-694782-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a7:e5", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:57:21 +0000 UTC Type:0 Mac:52:54:00:fc:a7:e5 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-694782-m03 Clientid:01:52:54:00:fc:a7:e5}
	I0416 00:02:07.579673   30429 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0416 00:02:07.579841   30429 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHPort
	I0416 00:02:07.580026   30429 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHKeyPath
	I0416 00:02:07.580201   30429 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHUsername
	I0416 00:02:07.580339   30429 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m03/id_rsa Username:docker}
	I0416 00:02:07.660808   30429 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 00:02:07.676246   30429 kubeconfig.go:125] found "ha-694782" server: "https://192.168.39.254:8443"
	I0416 00:02:07.676271   30429 api_server.go:166] Checking apiserver status ...
	I0416 00:02:07.676305   30429 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 00:02:07.690683   30429 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1540/cgroup
	W0416 00:02:07.701811   30429 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1540/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0416 00:02:07.701865   30429 ssh_runner.go:195] Run: ls
	I0416 00:02:07.708132   30429 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0416 00:02:07.712690   30429 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0416 00:02:07.712714   30429 status.go:422] ha-694782-m03 apiserver status = Running (err=<nil>)
	I0416 00:02:07.712724   30429 status.go:257] ha-694782-m03 status: &{Name:ha-694782-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 00:02:07.712741   30429 status.go:255] checking status of ha-694782-m04 ...
	I0416 00:02:07.713079   30429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:07.713116   30429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:07.727871   30429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35303
	I0416 00:02:07.728283   30429 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:07.728819   30429 main.go:141] libmachine: Using API Version  1
	I0416 00:02:07.728847   30429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:07.729205   30429 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:07.729393   30429 main.go:141] libmachine: (ha-694782-m04) Calling .GetState
	I0416 00:02:07.730941   30429 status.go:330] ha-694782-m04 host status = "Running" (err=<nil>)
	I0416 00:02:07.730957   30429 host.go:66] Checking if "ha-694782-m04" exists ...
	I0416 00:02:07.731229   30429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:07.731269   30429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:07.745345   30429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42365
	I0416 00:02:07.745731   30429 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:07.746256   30429 main.go:141] libmachine: Using API Version  1
	I0416 00:02:07.746285   30429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:07.746599   30429 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:07.746787   30429 main.go:141] libmachine: (ha-694782-m04) Calling .GetIP
	I0416 00:02:07.749594   30429 main.go:141] libmachine: (ha-694782-m04) DBG | domain ha-694782-m04 has defined MAC address 52:54:00:18:7d:b0 in network mk-ha-694782
	I0416 00:02:07.750618   30429 main.go:141] libmachine: (ha-694782-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:7d:b0", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:58:43 +0000 UTC Type:0 Mac:52:54:00:18:7d:b0 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:ha-694782-m04 Clientid:01:52:54:00:18:7d:b0}
	I0416 00:02:07.750658   30429 main.go:141] libmachine: (ha-694782-m04) DBG | domain ha-694782-m04 has defined IP address 192.168.39.107 and MAC address 52:54:00:18:7d:b0 in network mk-ha-694782
	I0416 00:02:07.750798   30429 host.go:66] Checking if "ha-694782-m04" exists ...
	I0416 00:02:07.751137   30429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:07.751173   30429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:07.765346   30429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44497
	I0416 00:02:07.765731   30429 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:07.766155   30429 main.go:141] libmachine: Using API Version  1
	I0416 00:02:07.766175   30429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:07.766487   30429 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:07.766729   30429 main.go:141] libmachine: (ha-694782-m04) Calling .DriverName
	I0416 00:02:07.766918   30429 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 00:02:07.766940   30429 main.go:141] libmachine: (ha-694782-m04) Calling .GetSSHHostname
	I0416 00:02:07.769536   30429 main.go:141] libmachine: (ha-694782-m04) DBG | domain ha-694782-m04 has defined MAC address 52:54:00:18:7d:b0 in network mk-ha-694782
	I0416 00:02:07.769911   30429 main.go:141] libmachine: (ha-694782-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:7d:b0", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:58:43 +0000 UTC Type:0 Mac:52:54:00:18:7d:b0 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:ha-694782-m04 Clientid:01:52:54:00:18:7d:b0}
	I0416 00:02:07.769934   30429 main.go:141] libmachine: (ha-694782-m04) DBG | domain ha-694782-m04 has defined IP address 192.168.39.107 and MAC address 52:54:00:18:7d:b0 in network mk-ha-694782
	I0416 00:02:07.770057   30429 main.go:141] libmachine: (ha-694782-m04) Calling .GetSSHPort
	I0416 00:02:07.770231   30429 main.go:141] libmachine: (ha-694782-m04) Calling .GetSSHKeyPath
	I0416 00:02:07.770380   30429 main.go:141] libmachine: (ha-694782-m04) Calling .GetSSHUsername
	I0416 00:02:07.770494   30429 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m04/id_rsa Username:docker}
	I0416 00:02:07.853134   30429 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 00:02:07.867709   30429 status.go:257] ha-694782-m04 status: &{Name:ha-694782-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
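The captured status run above shows the sequence the checker walks per control-plane node: SSH in, test the kubelet with `systemctl is-active`, locate kube-apiserver via `pgrep`, then probe the load-balanced endpoint `https://192.168.39.254:8443/healthz` and treat an "ok" body as Running. The following is a minimal standalone sketch of that last health probe only; it is not minikube's own status code, and the insecure TLS setting is purely an assumption to keep the example self-contained.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// VIP and port are taken from the log above; skipping certificate
	// verification is an assumption for this sketch only.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("apiserver status = Stopped:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode == http.StatusOK && string(body) == "ok" {
		fmt.Println("apiserver status = Running")
	} else {
		fmt.Printf("unexpected healthz response %d: %s\n", resp.StatusCode, body)
	}
}

In the log, only nodes whose SSH session succeeds ever reach this probe, which is why ha-694782-m02 is reported as Error/Nonexistent without any healthz attempt.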
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-694782 status -v=7 --alsologtostderr: exit status 3 (3.731611013s)

                                                
                                                
-- stdout --
	ha-694782
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-694782-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-694782-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-694782-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0416 00:02:11.001979   30548 out.go:291] Setting OutFile to fd 1 ...
	I0416 00:02:11.002103   30548 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:02:11.002118   30548 out.go:304] Setting ErrFile to fd 2...
	I0416 00:02:11.002125   30548 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:02:11.002363   30548 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-7542/.minikube/bin
	I0416 00:02:11.002547   30548 out.go:298] Setting JSON to false
	I0416 00:02:11.002576   30548 mustload.go:65] Loading cluster: ha-694782
	I0416 00:02:11.002685   30548 notify.go:220] Checking for updates...
	I0416 00:02:11.002978   30548 config.go:182] Loaded profile config "ha-694782": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 00:02:11.002995   30548 status.go:255] checking status of ha-694782 ...
	I0416 00:02:11.003401   30548 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:11.003461   30548 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:11.022493   30548 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42407
	I0416 00:02:11.022962   30548 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:11.023528   30548 main.go:141] libmachine: Using API Version  1
	I0416 00:02:11.023564   30548 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:11.023938   30548 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:11.024132   30548 main.go:141] libmachine: (ha-694782) Calling .GetState
	I0416 00:02:11.025795   30548 status.go:330] ha-694782 host status = "Running" (err=<nil>)
	I0416 00:02:11.025814   30548 host.go:66] Checking if "ha-694782" exists ...
	I0416 00:02:11.026117   30548 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:11.026149   30548 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:11.041370   30548 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33733
	I0416 00:02:11.041762   30548 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:11.042177   30548 main.go:141] libmachine: Using API Version  1
	I0416 00:02:11.042198   30548 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:11.042451   30548 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:11.042750   30548 main.go:141] libmachine: (ha-694782) Calling .GetIP
	I0416 00:02:11.045614   30548 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:02:11.046132   30548 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0416 00:02:11.046161   30548 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:02:11.046279   30548 host.go:66] Checking if "ha-694782" exists ...
	I0416 00:02:11.046660   30548 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:11.046702   30548 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:11.060848   30548 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36921
	I0416 00:02:11.061223   30548 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:11.061651   30548 main.go:141] libmachine: Using API Version  1
	I0416 00:02:11.061669   30548 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:11.061950   30548 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:11.062133   30548 main.go:141] libmachine: (ha-694782) Calling .DriverName
	I0416 00:02:11.062306   30548 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 00:02:11.062335   30548 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0416 00:02:11.064701   30548 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:02:11.065111   30548 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0416 00:02:11.065136   30548 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:02:11.065322   30548 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0416 00:02:11.065495   30548 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0416 00:02:11.065608   30548 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0416 00:02:11.065761   30548 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782/id_rsa Username:docker}
	I0416 00:02:11.148569   30548 ssh_runner.go:195] Run: systemctl --version
	I0416 00:02:11.155751   30548 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 00:02:11.172024   30548 kubeconfig.go:125] found "ha-694782" server: "https://192.168.39.254:8443"
	I0416 00:02:11.172066   30548 api_server.go:166] Checking apiserver status ...
	I0416 00:02:11.172113   30548 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 00:02:11.188482   30548 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1182/cgroup
	W0416 00:02:11.199558   30548 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1182/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0416 00:02:11.199602   30548 ssh_runner.go:195] Run: ls
	I0416 00:02:11.204912   30548 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0416 00:02:11.210414   30548 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0416 00:02:11.210433   30548 status.go:422] ha-694782 apiserver status = Running (err=<nil>)
	I0416 00:02:11.210442   30548 status.go:257] ha-694782 status: &{Name:ha-694782 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 00:02:11.210462   30548 status.go:255] checking status of ha-694782-m02 ...
	I0416 00:02:11.210809   30548 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:11.210864   30548 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:11.227168   30548 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39469
	I0416 00:02:11.227605   30548 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:11.228120   30548 main.go:141] libmachine: Using API Version  1
	I0416 00:02:11.228146   30548 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:11.228478   30548 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:11.228669   30548 main.go:141] libmachine: (ha-694782-m02) Calling .GetState
	I0416 00:02:11.230097   30548 status.go:330] ha-694782-m02 host status = "Running" (err=<nil>)
	I0416 00:02:11.230120   30548 host.go:66] Checking if "ha-694782-m02" exists ...
	I0416 00:02:11.230393   30548 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:11.230428   30548 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:11.244218   30548 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34287
	I0416 00:02:11.244590   30548 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:11.245125   30548 main.go:141] libmachine: Using API Version  1
	I0416 00:02:11.245146   30548 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:11.245462   30548 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:11.245664   30548 main.go:141] libmachine: (ha-694782-m02) Calling .GetIP
	I0416 00:02:11.248353   30548 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0416 00:02:11.248750   30548 main.go:141] libmachine: (ha-694782-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:e2:c3", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:56:01 +0000 UTC Type:0 Mac:52:54:00:70:e2:c3 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ha-694782-m02 Clientid:01:52:54:00:70:e2:c3}
	I0416 00:02:11.248779   30548 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined IP address 192.168.39.42 and MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0416 00:02:11.248913   30548 host.go:66] Checking if "ha-694782-m02" exists ...
	I0416 00:02:11.249319   30548 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:11.249362   30548 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:11.264351   30548 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40887
	I0416 00:02:11.264727   30548 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:11.265191   30548 main.go:141] libmachine: Using API Version  1
	I0416 00:02:11.265222   30548 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:11.265505   30548 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:11.265683   30548 main.go:141] libmachine: (ha-694782-m02) Calling .DriverName
	I0416 00:02:11.265826   30548 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 00:02:11.265845   30548 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHHostname
	I0416 00:02:11.268393   30548 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0416 00:02:11.268779   30548 main.go:141] libmachine: (ha-694782-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:e2:c3", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:56:01 +0000 UTC Type:0 Mac:52:54:00:70:e2:c3 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ha-694782-m02 Clientid:01:52:54:00:70:e2:c3}
	I0416 00:02:11.268804   30548 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined IP address 192.168.39.42 and MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0416 00:02:11.268959   30548 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHPort
	I0416 00:02:11.269083   30548 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHKeyPath
	I0416 00:02:11.269242   30548 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHUsername
	I0416 00:02:11.269336   30548 sshutil.go:53] new ssh client: &{IP:192.168.39.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m02/id_rsa Username:docker}
	W0416 00:02:14.329405   30548 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.42:22: connect: no route to host
	W0416 00:02:14.329498   30548 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.42:22: connect: no route to host
	E0416 00:02:14.329537   30548 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.42:22: connect: no route to host
	I0416 00:02:14.329548   30548 status.go:257] ha-694782-m02 status: &{Name:ha-694782-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0416 00:02:14.329572   30548 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.42:22: connect: no route to host
	I0416 00:02:14.329587   30548 status.go:255] checking status of ha-694782-m03 ...
	I0416 00:02:14.330020   30548 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:14.330073   30548 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:14.345344   30548 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37243
	I0416 00:02:14.345771   30548 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:14.346247   30548 main.go:141] libmachine: Using API Version  1
	I0416 00:02:14.346262   30548 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:14.346535   30548 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:14.346693   30548 main.go:141] libmachine: (ha-694782-m03) Calling .GetState
	I0416 00:02:14.348072   30548 status.go:330] ha-694782-m03 host status = "Running" (err=<nil>)
	I0416 00:02:14.348089   30548 host.go:66] Checking if "ha-694782-m03" exists ...
	I0416 00:02:14.348410   30548 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:14.348457   30548 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:14.363121   30548 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45207
	I0416 00:02:14.363518   30548 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:14.363955   30548 main.go:141] libmachine: Using API Version  1
	I0416 00:02:14.363986   30548 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:14.364281   30548 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:14.364459   30548 main.go:141] libmachine: (ha-694782-m03) Calling .GetIP
	I0416 00:02:14.366813   30548 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0416 00:02:14.367251   30548 main.go:141] libmachine: (ha-694782-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a7:e5", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:57:21 +0000 UTC Type:0 Mac:52:54:00:fc:a7:e5 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-694782-m03 Clientid:01:52:54:00:fc:a7:e5}
	I0416 00:02:14.367285   30548 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0416 00:02:14.367375   30548 host.go:66] Checking if "ha-694782-m03" exists ...
	I0416 00:02:14.367666   30548 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:14.367698   30548 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:14.382352   30548 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40941
	I0416 00:02:14.382715   30548 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:14.383141   30548 main.go:141] libmachine: Using API Version  1
	I0416 00:02:14.383156   30548 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:14.383439   30548 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:14.383655   30548 main.go:141] libmachine: (ha-694782-m03) Calling .DriverName
	I0416 00:02:14.383838   30548 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 00:02:14.383860   30548 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHHostname
	I0416 00:02:14.386370   30548 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0416 00:02:14.386725   30548 main.go:141] libmachine: (ha-694782-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a7:e5", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:57:21 +0000 UTC Type:0 Mac:52:54:00:fc:a7:e5 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-694782-m03 Clientid:01:52:54:00:fc:a7:e5}
	I0416 00:02:14.386755   30548 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0416 00:02:14.386891   30548 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHPort
	I0416 00:02:14.387090   30548 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHKeyPath
	I0416 00:02:14.387236   30548 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHUsername
	I0416 00:02:14.387361   30548 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m03/id_rsa Username:docker}
	I0416 00:02:14.464947   30548 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 00:02:14.481978   30548 kubeconfig.go:125] found "ha-694782" server: "https://192.168.39.254:8443"
	I0416 00:02:14.482003   30548 api_server.go:166] Checking apiserver status ...
	I0416 00:02:14.482031   30548 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 00:02:14.506756   30548 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1540/cgroup
	W0416 00:02:14.517543   30548 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1540/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0416 00:02:14.517584   30548 ssh_runner.go:195] Run: ls
	I0416 00:02:14.522012   30548 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0416 00:02:14.526069   30548 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0416 00:02:14.526088   30548 status.go:422] ha-694782-m03 apiserver status = Running (err=<nil>)
	I0416 00:02:14.526098   30548 status.go:257] ha-694782-m03 status: &{Name:ha-694782-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 00:02:14.526118   30548 status.go:255] checking status of ha-694782-m04 ...
	I0416 00:02:14.526437   30548 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:14.526481   30548 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:14.540934   30548 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44259
	I0416 00:02:14.541381   30548 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:14.541831   30548 main.go:141] libmachine: Using API Version  1
	I0416 00:02:14.541849   30548 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:14.542170   30548 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:14.542368   30548 main.go:141] libmachine: (ha-694782-m04) Calling .GetState
	I0416 00:02:14.543859   30548 status.go:330] ha-694782-m04 host status = "Running" (err=<nil>)
	I0416 00:02:14.543877   30548 host.go:66] Checking if "ha-694782-m04" exists ...
	I0416 00:02:14.544284   30548 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:14.544342   30548 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:14.559194   30548 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44575
	I0416 00:02:14.559662   30548 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:14.560137   30548 main.go:141] libmachine: Using API Version  1
	I0416 00:02:14.560157   30548 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:14.560438   30548 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:14.560603   30548 main.go:141] libmachine: (ha-694782-m04) Calling .GetIP
	I0416 00:02:14.563395   30548 main.go:141] libmachine: (ha-694782-m04) DBG | domain ha-694782-m04 has defined MAC address 52:54:00:18:7d:b0 in network mk-ha-694782
	I0416 00:02:14.563818   30548 main.go:141] libmachine: (ha-694782-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:7d:b0", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:58:43 +0000 UTC Type:0 Mac:52:54:00:18:7d:b0 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:ha-694782-m04 Clientid:01:52:54:00:18:7d:b0}
	I0416 00:02:14.563860   30548 main.go:141] libmachine: (ha-694782-m04) DBG | domain ha-694782-m04 has defined IP address 192.168.39.107 and MAC address 52:54:00:18:7d:b0 in network mk-ha-694782
	I0416 00:02:14.563937   30548 host.go:66] Checking if "ha-694782-m04" exists ...
	I0416 00:02:14.564267   30548 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:14.564299   30548 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:14.579645   30548 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41655
	I0416 00:02:14.580026   30548 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:14.580461   30548 main.go:141] libmachine: Using API Version  1
	I0416 00:02:14.580483   30548 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:14.580753   30548 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:14.580929   30548 main.go:141] libmachine: (ha-694782-m04) Calling .DriverName
	I0416 00:02:14.581107   30548 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 00:02:14.581136   30548 main.go:141] libmachine: (ha-694782-m04) Calling .GetSSHHostname
	I0416 00:02:14.583702   30548 main.go:141] libmachine: (ha-694782-m04) DBG | domain ha-694782-m04 has defined MAC address 52:54:00:18:7d:b0 in network mk-ha-694782
	I0416 00:02:14.584076   30548 main.go:141] libmachine: (ha-694782-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:7d:b0", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:58:43 +0000 UTC Type:0 Mac:52:54:00:18:7d:b0 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:ha-694782-m04 Clientid:01:52:54:00:18:7d:b0}
	I0416 00:02:14.584113   30548 main.go:141] libmachine: (ha-694782-m04) DBG | domain ha-694782-m04 has defined IP address 192.168.39.107 and MAC address 52:54:00:18:7d:b0 in network mk-ha-694782
	I0416 00:02:14.584250   30548 main.go:141] libmachine: (ha-694782-m04) Calling .GetSSHPort
	I0416 00:02:14.584396   30548 main.go:141] libmachine: (ha-694782-m04) Calling .GetSSHKeyPath
	I0416 00:02:14.584497   30548 main.go:141] libmachine: (ha-694782-m04) Calling .GetSSHUsername
	I0416 00:02:14.584628   30548 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m04/id_rsa Username:docker}
	I0416 00:02:14.664594   30548 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 00:02:14.678564   30548 status.go:257] ha-694782-m04 status: &{Name:ha-694782-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
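Each retry of the status command fails the same way on ha-694782-m02: the SSH dial to 192.168.39.42:22 returns "no route to host", is retried briefly, and the node is then marked Host:Error. Below is a minimal sketch (not minikube's sshutil) of that dial-and-retry pattern; the attempt count, timeout, and backoff values are assumptions chosen only to mirror the "dial failure (will retry)" lines above.

package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithRetry attempts a TCP connection to addr, retrying with a short
// backoff before giving up, as the status checker does before reporting
// a node as unreachable.
func dialWithRetry(addr string, attempts int, backoff time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		var conn net.Conn
		conn, err = net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		fmt.Printf("dial failure (will retry after %v): %v\n", backoff, err)
		time.Sleep(backoff)
	}
	return err
}

func main() {
	// 192.168.39.42:22 is the ha-694782-m02 SSH endpoint from the log above.
	if err := dialWithRetry("192.168.39.42:22", 3, 200*time.Millisecond); err != nil {
		fmt.Println("host unreachable, reporting status Error:", err)
	}
}

When the dial never succeeds, the checker has no session to run `df -h /var` or the kubelet check over, so the remaining fields fall back to Nonexistent, matching the m02 block in the stdout above.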
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 status -v=7 --alsologtostderr
E0416 00:02:20.169699   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/client.crt: no such file or directory
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-694782 status -v=7 --alsologtostderr: exit status 3 (3.73003459s)

                                                
                                                
-- stdout --
	ha-694782
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-694782-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-694782-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-694782-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0416 00:02:17.564776   30648 out.go:291] Setting OutFile to fd 1 ...
	I0416 00:02:17.565005   30648 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:02:17.565014   30648 out.go:304] Setting ErrFile to fd 2...
	I0416 00:02:17.565018   30648 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:02:17.565272   30648 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-7542/.minikube/bin
	I0416 00:02:17.565439   30648 out.go:298] Setting JSON to false
	I0416 00:02:17.565466   30648 mustload.go:65] Loading cluster: ha-694782
	I0416 00:02:17.565584   30648 notify.go:220] Checking for updates...
	I0416 00:02:17.565816   30648 config.go:182] Loaded profile config "ha-694782": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 00:02:17.565829   30648 status.go:255] checking status of ha-694782 ...
	I0416 00:02:17.566216   30648 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:17.566286   30648 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:17.583369   30648 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39405
	I0416 00:02:17.583852   30648 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:17.584497   30648 main.go:141] libmachine: Using API Version  1
	I0416 00:02:17.584517   30648 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:17.584906   30648 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:17.585091   30648 main.go:141] libmachine: (ha-694782) Calling .GetState
	I0416 00:02:17.586739   30648 status.go:330] ha-694782 host status = "Running" (err=<nil>)
	I0416 00:02:17.586758   30648 host.go:66] Checking if "ha-694782" exists ...
	I0416 00:02:17.587182   30648 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:17.587227   30648 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:17.604086   30648 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45743
	I0416 00:02:17.604436   30648 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:17.604828   30648 main.go:141] libmachine: Using API Version  1
	I0416 00:02:17.604847   30648 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:17.605146   30648 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:17.605360   30648 main.go:141] libmachine: (ha-694782) Calling .GetIP
	I0416 00:02:17.607996   30648 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:02:17.608393   30648 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0416 00:02:17.608432   30648 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:02:17.608590   30648 host.go:66] Checking if "ha-694782" exists ...
	I0416 00:02:17.608898   30648 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:17.608941   30648 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:17.622819   30648 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42205
	I0416 00:02:17.623176   30648 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:17.623575   30648 main.go:141] libmachine: Using API Version  1
	I0416 00:02:17.623597   30648 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:17.623901   30648 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:17.624060   30648 main.go:141] libmachine: (ha-694782) Calling .DriverName
	I0416 00:02:17.624226   30648 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 00:02:17.624258   30648 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0416 00:02:17.626892   30648 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:02:17.627284   30648 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0416 00:02:17.627310   30648 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:02:17.627430   30648 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0416 00:02:17.627616   30648 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0416 00:02:17.627742   30648 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0416 00:02:17.627885   30648 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782/id_rsa Username:docker}
	I0416 00:02:17.712613   30648 ssh_runner.go:195] Run: systemctl --version
	I0416 00:02:17.718822   30648 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 00:02:17.735346   30648 kubeconfig.go:125] found "ha-694782" server: "https://192.168.39.254:8443"
	I0416 00:02:17.735379   30648 api_server.go:166] Checking apiserver status ...
	I0416 00:02:17.735410   30648 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 00:02:17.750006   30648 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1182/cgroup
	W0416 00:02:17.761845   30648 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1182/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0416 00:02:17.761883   30648 ssh_runner.go:195] Run: ls
	I0416 00:02:17.766508   30648 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0416 00:02:17.773215   30648 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0416 00:02:17.773235   30648 status.go:422] ha-694782 apiserver status = Running (err=<nil>)
	I0416 00:02:17.773244   30648 status.go:257] ha-694782 status: &{Name:ha-694782 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 00:02:17.773265   30648 status.go:255] checking status of ha-694782-m02 ...
	I0416 00:02:17.773560   30648 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:17.773594   30648 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:17.788103   30648 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37643
	I0416 00:02:17.788501   30648 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:17.788927   30648 main.go:141] libmachine: Using API Version  1
	I0416 00:02:17.788948   30648 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:17.789328   30648 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:17.789515   30648 main.go:141] libmachine: (ha-694782-m02) Calling .GetState
	I0416 00:02:17.791046   30648 status.go:330] ha-694782-m02 host status = "Running" (err=<nil>)
	I0416 00:02:17.791061   30648 host.go:66] Checking if "ha-694782-m02" exists ...
	I0416 00:02:17.791334   30648 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:17.791364   30648 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:17.805506   30648 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35801
	I0416 00:02:17.805913   30648 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:17.806404   30648 main.go:141] libmachine: Using API Version  1
	I0416 00:02:17.806418   30648 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:17.806738   30648 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:17.806997   30648 main.go:141] libmachine: (ha-694782-m02) Calling .GetIP
	I0416 00:02:17.809820   30648 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0416 00:02:17.810241   30648 main.go:141] libmachine: (ha-694782-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:e2:c3", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:56:01 +0000 UTC Type:0 Mac:52:54:00:70:e2:c3 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ha-694782-m02 Clientid:01:52:54:00:70:e2:c3}
	I0416 00:02:17.810272   30648 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined IP address 192.168.39.42 and MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0416 00:02:17.810393   30648 host.go:66] Checking if "ha-694782-m02" exists ...
	I0416 00:02:17.810685   30648 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:17.810726   30648 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:17.828753   30648 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45893
	I0416 00:02:17.829153   30648 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:17.829630   30648 main.go:141] libmachine: Using API Version  1
	I0416 00:02:17.829650   30648 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:17.829972   30648 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:17.830154   30648 main.go:141] libmachine: (ha-694782-m02) Calling .DriverName
	I0416 00:02:17.830326   30648 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 00:02:17.830350   30648 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHHostname
	I0416 00:02:17.832825   30648 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0416 00:02:17.833227   30648 main.go:141] libmachine: (ha-694782-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:e2:c3", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:56:01 +0000 UTC Type:0 Mac:52:54:00:70:e2:c3 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ha-694782-m02 Clientid:01:52:54:00:70:e2:c3}
	I0416 00:02:17.833252   30648 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined IP address 192.168.39.42 and MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0416 00:02:17.833405   30648 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHPort
	I0416 00:02:17.833582   30648 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHKeyPath
	I0416 00:02:17.833699   30648 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHUsername
	I0416 00:02:17.833821   30648 sshutil.go:53] new ssh client: &{IP:192.168.39.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m02/id_rsa Username:docker}
	W0416 00:02:20.889399   30648 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.42:22: connect: no route to host
	W0416 00:02:20.889501   30648 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.42:22: connect: no route to host
	E0416 00:02:20.889516   30648 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.42:22: connect: no route to host
	I0416 00:02:20.889523   30648 status.go:257] ha-694782-m02 status: &{Name:ha-694782-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0416 00:02:20.889541   30648 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.42:22: connect: no route to host
	I0416 00:02:20.889549   30648 status.go:255] checking status of ha-694782-m03 ...
	I0416 00:02:20.889846   30648 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:20.889885   30648 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:20.904437   30648 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44875
	I0416 00:02:20.904886   30648 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:20.905371   30648 main.go:141] libmachine: Using API Version  1
	I0416 00:02:20.905394   30648 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:20.905779   30648 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:20.905950   30648 main.go:141] libmachine: (ha-694782-m03) Calling .GetState
	I0416 00:02:20.907399   30648 status.go:330] ha-694782-m03 host status = "Running" (err=<nil>)
	I0416 00:02:20.907415   30648 host.go:66] Checking if "ha-694782-m03" exists ...
	I0416 00:02:20.907789   30648 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:20.907829   30648 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:20.923166   30648 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32793
	I0416 00:02:20.923600   30648 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:20.924060   30648 main.go:141] libmachine: Using API Version  1
	I0416 00:02:20.924087   30648 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:20.924488   30648 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:20.924675   30648 main.go:141] libmachine: (ha-694782-m03) Calling .GetIP
	I0416 00:02:20.927142   30648 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0416 00:02:20.927529   30648 main.go:141] libmachine: (ha-694782-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a7:e5", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:57:21 +0000 UTC Type:0 Mac:52:54:00:fc:a7:e5 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-694782-m03 Clientid:01:52:54:00:fc:a7:e5}
	I0416 00:02:20.927557   30648 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0416 00:02:20.927677   30648 host.go:66] Checking if "ha-694782-m03" exists ...
	I0416 00:02:20.928011   30648 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:20.928045   30648 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:20.943089   30648 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37299
	I0416 00:02:20.943458   30648 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:20.943895   30648 main.go:141] libmachine: Using API Version  1
	I0416 00:02:20.943919   30648 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:20.944165   30648 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:20.944363   30648 main.go:141] libmachine: (ha-694782-m03) Calling .DriverName
	I0416 00:02:20.944534   30648 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 00:02:20.944555   30648 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHHostname
	I0416 00:02:20.947286   30648 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0416 00:02:20.947699   30648 main.go:141] libmachine: (ha-694782-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a7:e5", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:57:21 +0000 UTC Type:0 Mac:52:54:00:fc:a7:e5 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-694782-m03 Clientid:01:52:54:00:fc:a7:e5}
	I0416 00:02:20.947727   30648 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0416 00:02:20.947906   30648 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHPort
	I0416 00:02:20.948052   30648 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHKeyPath
	I0416 00:02:20.948193   30648 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHUsername
	I0416 00:02:20.948300   30648 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m03/id_rsa Username:docker}
	I0416 00:02:21.028947   30648 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 00:02:21.046697   30648 kubeconfig.go:125] found "ha-694782" server: "https://192.168.39.254:8443"
	I0416 00:02:21.046721   30648 api_server.go:166] Checking apiserver status ...
	I0416 00:02:21.046753   30648 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 00:02:21.061882   30648 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1540/cgroup
	W0416 00:02:21.073235   30648 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1540/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0416 00:02:21.073284   30648 ssh_runner.go:195] Run: ls
	I0416 00:02:21.078135   30648 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0416 00:02:21.084968   30648 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0416 00:02:21.084998   30648 status.go:422] ha-694782-m03 apiserver status = Running (err=<nil>)
	I0416 00:02:21.085006   30648 status.go:257] ha-694782-m03 status: &{Name:ha-694782-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 00:02:21.085024   30648 status.go:255] checking status of ha-694782-m04 ...
	I0416 00:02:21.085377   30648 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:21.085413   30648 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:21.100493   30648 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36747
	I0416 00:02:21.100933   30648 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:21.101405   30648 main.go:141] libmachine: Using API Version  1
	I0416 00:02:21.101427   30648 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:21.101849   30648 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:21.102069   30648 main.go:141] libmachine: (ha-694782-m04) Calling .GetState
	I0416 00:02:21.103883   30648 status.go:330] ha-694782-m04 host status = "Running" (err=<nil>)
	I0416 00:02:21.103898   30648 host.go:66] Checking if "ha-694782-m04" exists ...
	I0416 00:02:21.104165   30648 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:21.104200   30648 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:21.118438   30648 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46485
	I0416 00:02:21.118870   30648 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:21.119285   30648 main.go:141] libmachine: Using API Version  1
	I0416 00:02:21.119298   30648 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:21.119607   30648 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:21.119777   30648 main.go:141] libmachine: (ha-694782-m04) Calling .GetIP
	I0416 00:02:21.122533   30648 main.go:141] libmachine: (ha-694782-m04) DBG | domain ha-694782-m04 has defined MAC address 52:54:00:18:7d:b0 in network mk-ha-694782
	I0416 00:02:21.122944   30648 main.go:141] libmachine: (ha-694782-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:7d:b0", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:58:43 +0000 UTC Type:0 Mac:52:54:00:18:7d:b0 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:ha-694782-m04 Clientid:01:52:54:00:18:7d:b0}
	I0416 00:02:21.122967   30648 main.go:141] libmachine: (ha-694782-m04) DBG | domain ha-694782-m04 has defined IP address 192.168.39.107 and MAC address 52:54:00:18:7d:b0 in network mk-ha-694782
	I0416 00:02:21.123103   30648 host.go:66] Checking if "ha-694782-m04" exists ...
	I0416 00:02:21.123401   30648 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:21.123440   30648 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:21.137814   30648 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45403
	I0416 00:02:21.138242   30648 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:21.138707   30648 main.go:141] libmachine: Using API Version  1
	I0416 00:02:21.138728   30648 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:21.139074   30648 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:21.139285   30648 main.go:141] libmachine: (ha-694782-m04) Calling .DriverName
	I0416 00:02:21.139479   30648 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 00:02:21.139498   30648 main.go:141] libmachine: (ha-694782-m04) Calling .GetSSHHostname
	I0416 00:02:21.142016   30648 main.go:141] libmachine: (ha-694782-m04) DBG | domain ha-694782-m04 has defined MAC address 52:54:00:18:7d:b0 in network mk-ha-694782
	I0416 00:02:21.142498   30648 main.go:141] libmachine: (ha-694782-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:7d:b0", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:58:43 +0000 UTC Type:0 Mac:52:54:00:18:7d:b0 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:ha-694782-m04 Clientid:01:52:54:00:18:7d:b0}
	I0416 00:02:21.142533   30648 main.go:141] libmachine: (ha-694782-m04) DBG | domain ha-694782-m04 has defined IP address 192.168.39.107 and MAC address 52:54:00:18:7d:b0 in network mk-ha-694782
	I0416 00:02:21.142649   30648 main.go:141] libmachine: (ha-694782-m04) Calling .GetSSHPort
	I0416 00:02:21.142812   30648 main.go:141] libmachine: (ha-694782-m04) Calling .GetSSHKeyPath
	I0416 00:02:21.142968   30648 main.go:141] libmachine: (ha-694782-m04) Calling .GetSSHUsername
	I0416 00:02:21.143129   30648 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m04/id_rsa Username:docker}
	I0416 00:02:21.224622   30648 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 00:02:21.240455   30648 status.go:257] ha-694782-m04 status: &{Name:ha-694782-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
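The retries at ha_test.go:428 appear to re-invoke `minikube status` until the secondary control plane settles: in the capture above ha-694782-m02 still reports "Stopping", and in the next run at 00:02:38 it has reached "Stopped". A minimal sketch of that kind of wait loop follows, in Go using only the standard library; the binary path, profile name, and flags are taken from the log, while the function name, interval, and timeout are assumptions for illustration, not the actual ha_test.go helper.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForNodeState re-runs "minikube status" for the given profile until the
// combined output contains the wanted substring or the deadline passes.
// Illustrative only: it mirrors the polling visible in the log above.
func waitForNodeState(binary, profile, want string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// "status" exits non-zero (exit status 7 in the log) whenever a node
		// is not Running, so the error is ignored and only the text is read.
		out, _ := exec.Command(binary, "-p", profile, "status", "-v=7", "--alsologtostderr").CombinedOutput()
		if strings.Contains(string(out), want) {
			return nil
		}
		time.Sleep(10 * time.Second) // the log shows roughly 10s between runs
	}
	return fmt.Errorf("node never reported %q within %v", want, timeout)
}

func main() {
	err := waitForNodeState("out/minikube-linux-amd64", "ha-694782", "host: Stopped", 3*time.Minute)
	fmt.Println(err)
}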
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-694782 status -v=7 --alsologtostderr: exit status 7 (666.727589ms)

                                                
                                                
-- stdout --
	ha-694782
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-694782-m02
	type: Control Plane
	host: Stopping
	kubelet: Stopping
	apiserver: Stopping
	kubeconfig: Stopping
	
	ha-694782-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-694782-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0416 00:02:28.498178   30780 out.go:291] Setting OutFile to fd 1 ...
	I0416 00:02:28.498338   30780 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:02:28.498349   30780 out.go:304] Setting ErrFile to fd 2...
	I0416 00:02:28.498356   30780 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:02:28.498643   30780 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-7542/.minikube/bin
	I0416 00:02:28.498880   30780 out.go:298] Setting JSON to false
	I0416 00:02:28.498908   30780 mustload.go:65] Loading cluster: ha-694782
	I0416 00:02:28.499073   30780 notify.go:220] Checking for updates...
	I0416 00:02:28.499454   30780 config.go:182] Loaded profile config "ha-694782": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 00:02:28.499473   30780 status.go:255] checking status of ha-694782 ...
	I0416 00:02:28.500028   30780 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:28.500085   30780 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:28.524279   30780 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43203
	I0416 00:02:28.524747   30780 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:28.525426   30780 main.go:141] libmachine: Using API Version  1
	I0416 00:02:28.525448   30780 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:28.525767   30780 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:28.525950   30780 main.go:141] libmachine: (ha-694782) Calling .GetState
	I0416 00:02:28.527643   30780 status.go:330] ha-694782 host status = "Running" (err=<nil>)
	I0416 00:02:28.527657   30780 host.go:66] Checking if "ha-694782" exists ...
	I0416 00:02:28.527917   30780 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:28.527947   30780 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:28.543045   30780 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41229
	I0416 00:02:28.543395   30780 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:28.543889   30780 main.go:141] libmachine: Using API Version  1
	I0416 00:02:28.543930   30780 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:28.544248   30780 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:28.544445   30780 main.go:141] libmachine: (ha-694782) Calling .GetIP
	I0416 00:02:28.547356   30780 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:02:28.547757   30780 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0416 00:02:28.547789   30780 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:02:28.547936   30780 host.go:66] Checking if "ha-694782" exists ...
	I0416 00:02:28.548240   30780 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:28.548279   30780 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:28.562851   30780 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40621
	I0416 00:02:28.563225   30780 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:28.563689   30780 main.go:141] libmachine: Using API Version  1
	I0416 00:02:28.563709   30780 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:28.564015   30780 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:28.564196   30780 main.go:141] libmachine: (ha-694782) Calling .DriverName
	I0416 00:02:28.564381   30780 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 00:02:28.564414   30780 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0416 00:02:28.567070   30780 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:02:28.567498   30780 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0416 00:02:28.567528   30780 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:02:28.567612   30780 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0416 00:02:28.567764   30780 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0416 00:02:28.567875   30780 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0416 00:02:28.568002   30780 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782/id_rsa Username:docker}
	I0416 00:02:28.653470   30780 ssh_runner.go:195] Run: systemctl --version
	I0416 00:02:28.660638   30780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 00:02:28.678123   30780 kubeconfig.go:125] found "ha-694782" server: "https://192.168.39.254:8443"
	I0416 00:02:28.678154   30780 api_server.go:166] Checking apiserver status ...
	I0416 00:02:28.678185   30780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 00:02:28.695859   30780 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1182/cgroup
	W0416 00:02:28.710901   30780 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1182/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0416 00:02:28.710953   30780 ssh_runner.go:195] Run: ls
	I0416 00:02:28.716057   30780 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0416 00:02:28.720321   30780 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0416 00:02:28.720344   30780 status.go:422] ha-694782 apiserver status = Running (err=<nil>)
	I0416 00:02:28.720356   30780 status.go:257] ha-694782 status: &{Name:ha-694782 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 00:02:28.720387   30780 status.go:255] checking status of ha-694782-m02 ...
	I0416 00:02:28.720657   30780 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:28.720704   30780 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:28.736020   30780 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33513
	I0416 00:02:28.736427   30780 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:28.736882   30780 main.go:141] libmachine: Using API Version  1
	I0416 00:02:28.736903   30780 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:28.737219   30780 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:28.737459   30780 main.go:141] libmachine: (ha-694782-m02) Calling .GetState
	I0416 00:02:28.739172   30780 status.go:330] ha-694782-m02 host status = "Stopping" (err=<nil>)
	I0416 00:02:28.739186   30780 status.go:343] host is not running, skipping remaining checks
	I0416 00:02:28.739193   30780 status.go:257] ha-694782-m02 status: &{Name:ha-694782-m02 Host:Stopping Kubelet:Stopping APIServer:Stopping Kubeconfig:Stopping Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 00:02:28.739217   30780 status.go:255] checking status of ha-694782-m03 ...
	I0416 00:02:28.739490   30780 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:28.739522   30780 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:28.755136   30780 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45803
	I0416 00:02:28.755539   30780 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:28.756028   30780 main.go:141] libmachine: Using API Version  1
	I0416 00:02:28.756050   30780 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:28.756330   30780 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:28.756532   30780 main.go:141] libmachine: (ha-694782-m03) Calling .GetState
	I0416 00:02:28.758087   30780 status.go:330] ha-694782-m03 host status = "Running" (err=<nil>)
	I0416 00:02:28.758104   30780 host.go:66] Checking if "ha-694782-m03" exists ...
	I0416 00:02:28.758411   30780 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:28.758448   30780 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:28.773193   30780 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43053
	I0416 00:02:28.773764   30780 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:28.774325   30780 main.go:141] libmachine: Using API Version  1
	I0416 00:02:28.774354   30780 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:28.774731   30780 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:28.774977   30780 main.go:141] libmachine: (ha-694782-m03) Calling .GetIP
	I0416 00:02:28.777779   30780 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0416 00:02:28.778254   30780 main.go:141] libmachine: (ha-694782-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a7:e5", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:57:21 +0000 UTC Type:0 Mac:52:54:00:fc:a7:e5 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-694782-m03 Clientid:01:52:54:00:fc:a7:e5}
	I0416 00:02:28.778286   30780 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0416 00:02:28.778453   30780 host.go:66] Checking if "ha-694782-m03" exists ...
	I0416 00:02:28.778739   30780 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:28.778773   30780 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:28.803929   30780 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45823
	I0416 00:02:28.804362   30780 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:28.804843   30780 main.go:141] libmachine: Using API Version  1
	I0416 00:02:28.804869   30780 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:28.805180   30780 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:28.805365   30780 main.go:141] libmachine: (ha-694782-m03) Calling .DriverName
	I0416 00:02:28.805560   30780 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 00:02:28.805580   30780 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHHostname
	I0416 00:02:28.808696   30780 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0416 00:02:28.809116   30780 main.go:141] libmachine: (ha-694782-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a7:e5", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:57:21 +0000 UTC Type:0 Mac:52:54:00:fc:a7:e5 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-694782-m03 Clientid:01:52:54:00:fc:a7:e5}
	I0416 00:02:28.809146   30780 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0416 00:02:28.809276   30780 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHPort
	I0416 00:02:28.809431   30780 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHKeyPath
	I0416 00:02:28.809574   30780 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHUsername
	I0416 00:02:28.809762   30780 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m03/id_rsa Username:docker}
	I0416 00:02:28.892805   30780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 00:02:28.909246   30780 kubeconfig.go:125] found "ha-694782" server: "https://192.168.39.254:8443"
	I0416 00:02:28.909271   30780 api_server.go:166] Checking apiserver status ...
	I0416 00:02:28.909310   30780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 00:02:28.923897   30780 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1540/cgroup
	W0416 00:02:28.934113   30780 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1540/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0416 00:02:28.934177   30780 ssh_runner.go:195] Run: ls
	I0416 00:02:28.938950   30780 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0416 00:02:28.943644   30780 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0416 00:02:28.943672   30780 status.go:422] ha-694782-m03 apiserver status = Running (err=<nil>)
	I0416 00:02:28.943681   30780 status.go:257] ha-694782-m03 status: &{Name:ha-694782-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 00:02:28.943695   30780 status.go:255] checking status of ha-694782-m04 ...
	I0416 00:02:28.944038   30780 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:28.944073   30780 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:28.959719   30780 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45429
	I0416 00:02:28.960489   30780 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:28.961791   30780 main.go:141] libmachine: Using API Version  1
	I0416 00:02:28.961813   30780 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:28.962185   30780 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:28.962391   30780 main.go:141] libmachine: (ha-694782-m04) Calling .GetState
	I0416 00:02:28.964026   30780 status.go:330] ha-694782-m04 host status = "Running" (err=<nil>)
	I0416 00:02:28.964043   30780 host.go:66] Checking if "ha-694782-m04" exists ...
	I0416 00:02:28.964323   30780 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:28.964357   30780 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:28.978636   30780 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38927
	I0416 00:02:28.979126   30780 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:28.979614   30780 main.go:141] libmachine: Using API Version  1
	I0416 00:02:28.979640   30780 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:28.979945   30780 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:28.980108   30780 main.go:141] libmachine: (ha-694782-m04) Calling .GetIP
	I0416 00:02:28.982623   30780 main.go:141] libmachine: (ha-694782-m04) DBG | domain ha-694782-m04 has defined MAC address 52:54:00:18:7d:b0 in network mk-ha-694782
	I0416 00:02:28.983015   30780 main.go:141] libmachine: (ha-694782-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:7d:b0", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:58:43 +0000 UTC Type:0 Mac:52:54:00:18:7d:b0 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:ha-694782-m04 Clientid:01:52:54:00:18:7d:b0}
	I0416 00:02:28.983039   30780 main.go:141] libmachine: (ha-694782-m04) DBG | domain ha-694782-m04 has defined IP address 192.168.39.107 and MAC address 52:54:00:18:7d:b0 in network mk-ha-694782
	I0416 00:02:28.983130   30780 host.go:66] Checking if "ha-694782-m04" exists ...
	I0416 00:02:28.983395   30780 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:28.983431   30780 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:28.997803   30780 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41587
	I0416 00:02:28.998676   30780 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:29.000064   30780 main.go:141] libmachine: Using API Version  1
	I0416 00:02:29.000088   30780 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:29.000660   30780 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:29.000860   30780 main.go:141] libmachine: (ha-694782-m04) Calling .DriverName
	I0416 00:02:29.001071   30780 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 00:02:29.001090   30780 main.go:141] libmachine: (ha-694782-m04) Calling .GetSSHHostname
	I0416 00:02:29.004020   30780 main.go:141] libmachine: (ha-694782-m04) DBG | domain ha-694782-m04 has defined MAC address 52:54:00:18:7d:b0 in network mk-ha-694782
	I0416 00:02:29.004470   30780 main.go:141] libmachine: (ha-694782-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:7d:b0", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:58:43 +0000 UTC Type:0 Mac:52:54:00:18:7d:b0 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:ha-694782-m04 Clientid:01:52:54:00:18:7d:b0}
	I0416 00:02:29.004507   30780 main.go:141] libmachine: (ha-694782-m04) DBG | domain ha-694782-m04 has defined IP address 192.168.39.107 and MAC address 52:54:00:18:7d:b0 in network mk-ha-694782
	I0416 00:02:29.004656   30780 main.go:141] libmachine: (ha-694782-m04) Calling .GetSSHPort
	I0416 00:02:29.004801   30780 main.go:141] libmachine: (ha-694782-m04) Calling .GetSSHKeyPath
	I0416 00:02:29.004954   30780 main.go:141] libmachine: (ha-694782-m04) Calling .GetSSHUsername
	I0416 00:02:29.005133   30780 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m04/id_rsa Username:docker}
	I0416 00:02:29.084824   30780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 00:02:29.099984   30780 status.go:257] ha-694782-m04 status: &{Name:ha-694782-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
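The `status.go:257` lines above print one record per node, and worker nodes such as ha-694782-m04 report APIServer and Kubeconfig as "Irrelevant" because those checks only apply to control-plane nodes. The struct below is a reconstruction of the fields visible in those log lines for easier reading; the real type inside minikube may differ.

package main

import "fmt"

// Status mirrors the fields printed by the "status.go:257" entries in the
// capture above; field names are taken from the log, the type itself is a
// reconstruction, not the upstream definition.
type Status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string // "Irrelevant" on worker nodes
	Kubeconfig string // "Irrelevant" on worker nodes
	Worker     bool
	TimeToStop string
	DockerEnv  string
	PodManEnv  string
}

func main() {
	// The worker node from the log: apiserver and kubeconfig checks are
	// skipped, so they are reported as Irrelevant rather than Running.
	m04 := Status{Name: "ha-694782-m04", Host: "Running", Kubelet: "Running",
		APIServer: "Irrelevant", Kubeconfig: "Irrelevant", Worker: true}
	fmt.Printf("%+v\n", m04)
}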
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-694782 status -v=7 --alsologtostderr: exit status 7 (619.287247ms)

                                                
                                                
-- stdout --
	ha-694782
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-694782-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-694782-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-694782-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0416 00:02:38.701228   30904 out.go:291] Setting OutFile to fd 1 ...
	I0416 00:02:38.701491   30904 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:02:38.701501   30904 out.go:304] Setting ErrFile to fd 2...
	I0416 00:02:38.701506   30904 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:02:38.701692   30904 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-7542/.minikube/bin
	I0416 00:02:38.701880   30904 out.go:298] Setting JSON to false
	I0416 00:02:38.701907   30904 mustload.go:65] Loading cluster: ha-694782
	I0416 00:02:38.701964   30904 notify.go:220] Checking for updates...
	I0416 00:02:38.702276   30904 config.go:182] Loaded profile config "ha-694782": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 00:02:38.702289   30904 status.go:255] checking status of ha-694782 ...
	I0416 00:02:38.702662   30904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:38.702714   30904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:38.719581   30904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39977
	I0416 00:02:38.720033   30904 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:38.720552   30904 main.go:141] libmachine: Using API Version  1
	I0416 00:02:38.720572   30904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:38.720965   30904 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:38.721167   30904 main.go:141] libmachine: (ha-694782) Calling .GetState
	I0416 00:02:38.722869   30904 status.go:330] ha-694782 host status = "Running" (err=<nil>)
	I0416 00:02:38.722886   30904 host.go:66] Checking if "ha-694782" exists ...
	I0416 00:02:38.723162   30904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:38.723195   30904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:38.738049   30904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45877
	I0416 00:02:38.738434   30904 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:38.738881   30904 main.go:141] libmachine: Using API Version  1
	I0416 00:02:38.738899   30904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:38.739270   30904 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:38.739448   30904 main.go:141] libmachine: (ha-694782) Calling .GetIP
	I0416 00:02:38.742406   30904 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:02:38.742858   30904 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0416 00:02:38.742894   30904 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:02:38.742962   30904 host.go:66] Checking if "ha-694782" exists ...
	I0416 00:02:38.743261   30904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:38.743293   30904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:38.758383   30904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35249
	I0416 00:02:38.758739   30904 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:38.759190   30904 main.go:141] libmachine: Using API Version  1
	I0416 00:02:38.759205   30904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:38.759543   30904 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:38.759722   30904 main.go:141] libmachine: (ha-694782) Calling .DriverName
	I0416 00:02:38.759909   30904 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 00:02:38.759933   30904 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0416 00:02:38.762925   30904 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:02:38.763294   30904 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0416 00:02:38.763327   30904 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:02:38.763474   30904 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0416 00:02:38.763622   30904 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0416 00:02:38.763787   30904 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0416 00:02:38.763935   30904 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782/id_rsa Username:docker}
	I0416 00:02:38.848815   30904 ssh_runner.go:195] Run: systemctl --version
	I0416 00:02:38.854991   30904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 00:02:38.869898   30904 kubeconfig.go:125] found "ha-694782" server: "https://192.168.39.254:8443"
	I0416 00:02:38.869926   30904 api_server.go:166] Checking apiserver status ...
	I0416 00:02:38.869955   30904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 00:02:38.884380   30904 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1182/cgroup
	W0416 00:02:38.894874   30904 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1182/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0416 00:02:38.894918   30904 ssh_runner.go:195] Run: ls
	I0416 00:02:38.899169   30904 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0416 00:02:38.903507   30904 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0416 00:02:38.903527   30904 status.go:422] ha-694782 apiserver status = Running (err=<nil>)
	I0416 00:02:38.903537   30904 status.go:257] ha-694782 status: &{Name:ha-694782 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 00:02:38.903550   30904 status.go:255] checking status of ha-694782-m02 ...
	I0416 00:02:38.903817   30904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:38.903851   30904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:38.918198   30904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42647
	I0416 00:02:38.918576   30904 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:38.918981   30904 main.go:141] libmachine: Using API Version  1
	I0416 00:02:38.919006   30904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:38.919352   30904 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:38.919536   30904 main.go:141] libmachine: (ha-694782-m02) Calling .GetState
	I0416 00:02:38.920964   30904 status.go:330] ha-694782-m02 host status = "Stopped" (err=<nil>)
	I0416 00:02:38.920979   30904 status.go:343] host is not running, skipping remaining checks
	I0416 00:02:38.920987   30904 status.go:257] ha-694782-m02 status: &{Name:ha-694782-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 00:02:38.921006   30904 status.go:255] checking status of ha-694782-m03 ...
	I0416 00:02:38.921315   30904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:38.921346   30904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:38.935124   30904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46001
	I0416 00:02:38.935465   30904 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:38.935877   30904 main.go:141] libmachine: Using API Version  1
	I0416 00:02:38.935898   30904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:38.936195   30904 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:38.936389   30904 main.go:141] libmachine: (ha-694782-m03) Calling .GetState
	I0416 00:02:38.937774   30904 status.go:330] ha-694782-m03 host status = "Running" (err=<nil>)
	I0416 00:02:38.937797   30904 host.go:66] Checking if "ha-694782-m03" exists ...
	I0416 00:02:38.938109   30904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:38.938145   30904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:38.952492   30904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41779
	I0416 00:02:38.952892   30904 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:38.953408   30904 main.go:141] libmachine: Using API Version  1
	I0416 00:02:38.953428   30904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:38.953749   30904 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:38.953951   30904 main.go:141] libmachine: (ha-694782-m03) Calling .GetIP
	I0416 00:02:38.956481   30904 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0416 00:02:38.956857   30904 main.go:141] libmachine: (ha-694782-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a7:e5", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:57:21 +0000 UTC Type:0 Mac:52:54:00:fc:a7:e5 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-694782-m03 Clientid:01:52:54:00:fc:a7:e5}
	I0416 00:02:38.956879   30904 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0416 00:02:38.957019   30904 host.go:66] Checking if "ha-694782-m03" exists ...
	I0416 00:02:38.957376   30904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:38.957415   30904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:38.972709   30904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33635
	I0416 00:02:38.973091   30904 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:38.973563   30904 main.go:141] libmachine: Using API Version  1
	I0416 00:02:38.973581   30904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:38.973880   30904 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:38.974053   30904 main.go:141] libmachine: (ha-694782-m03) Calling .DriverName
	I0416 00:02:38.974238   30904 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 00:02:38.974272   30904 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHHostname
	I0416 00:02:38.977111   30904 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0416 00:02:38.977563   30904 main.go:141] libmachine: (ha-694782-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a7:e5", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:57:21 +0000 UTC Type:0 Mac:52:54:00:fc:a7:e5 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-694782-m03 Clientid:01:52:54:00:fc:a7:e5}
	I0416 00:02:38.977590   30904 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0416 00:02:38.977694   30904 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHPort
	I0416 00:02:38.977874   30904 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHKeyPath
	I0416 00:02:38.978047   30904 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHUsername
	I0416 00:02:38.978210   30904 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m03/id_rsa Username:docker}
	I0416 00:02:39.061370   30904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 00:02:39.080023   30904 kubeconfig.go:125] found "ha-694782" server: "https://192.168.39.254:8443"
	I0416 00:02:39.080050   30904 api_server.go:166] Checking apiserver status ...
	I0416 00:02:39.080081   30904 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 00:02:39.097185   30904 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1540/cgroup
	W0416 00:02:39.108023   30904 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1540/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0416 00:02:39.108071   30904 ssh_runner.go:195] Run: ls
	I0416 00:02:39.112798   30904 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0416 00:02:39.116876   30904 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0416 00:02:39.116908   30904 status.go:422] ha-694782-m03 apiserver status = Running (err=<nil>)
	I0416 00:02:39.116919   30904 status.go:257] ha-694782-m03 status: &{Name:ha-694782-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 00:02:39.116942   30904 status.go:255] checking status of ha-694782-m04 ...
	I0416 00:02:39.117334   30904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:39.117378   30904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:39.131730   30904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33413
	I0416 00:02:39.132128   30904 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:39.132530   30904 main.go:141] libmachine: Using API Version  1
	I0416 00:02:39.132552   30904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:39.132842   30904 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:39.133032   30904 main.go:141] libmachine: (ha-694782-m04) Calling .GetState
	I0416 00:02:39.134451   30904 status.go:330] ha-694782-m04 host status = "Running" (err=<nil>)
	I0416 00:02:39.134468   30904 host.go:66] Checking if "ha-694782-m04" exists ...
	I0416 00:02:39.134793   30904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:39.134826   30904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:39.148259   30904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33451
	I0416 00:02:39.148594   30904 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:39.149069   30904 main.go:141] libmachine: Using API Version  1
	I0416 00:02:39.149094   30904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:39.149437   30904 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:39.149615   30904 main.go:141] libmachine: (ha-694782-m04) Calling .GetIP
	I0416 00:02:39.152560   30904 main.go:141] libmachine: (ha-694782-m04) DBG | domain ha-694782-m04 has defined MAC address 52:54:00:18:7d:b0 in network mk-ha-694782
	I0416 00:02:39.152985   30904 main.go:141] libmachine: (ha-694782-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:7d:b0", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:58:43 +0000 UTC Type:0 Mac:52:54:00:18:7d:b0 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:ha-694782-m04 Clientid:01:52:54:00:18:7d:b0}
	I0416 00:02:39.153013   30904 main.go:141] libmachine: (ha-694782-m04) DBG | domain ha-694782-m04 has defined IP address 192.168.39.107 and MAC address 52:54:00:18:7d:b0 in network mk-ha-694782
	I0416 00:02:39.153148   30904 host.go:66] Checking if "ha-694782-m04" exists ...
	I0416 00:02:39.153531   30904 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:39.153571   30904 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:39.167234   30904 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33081
	I0416 00:02:39.167604   30904 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:39.168033   30904 main.go:141] libmachine: Using API Version  1
	I0416 00:02:39.168054   30904 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:39.168360   30904 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:39.168510   30904 main.go:141] libmachine: (ha-694782-m04) Calling .DriverName
	I0416 00:02:39.168653   30904 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 00:02:39.168674   30904 main.go:141] libmachine: (ha-694782-m04) Calling .GetSSHHostname
	I0416 00:02:39.171275   30904 main.go:141] libmachine: (ha-694782-m04) DBG | domain ha-694782-m04 has defined MAC address 52:54:00:18:7d:b0 in network mk-ha-694782
	I0416 00:02:39.171696   30904 main.go:141] libmachine: (ha-694782-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:7d:b0", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:58:43 +0000 UTC Type:0 Mac:52:54:00:18:7d:b0 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:ha-694782-m04 Clientid:01:52:54:00:18:7d:b0}
	I0416 00:02:39.171734   30904 main.go:141] libmachine: (ha-694782-m04) DBG | domain ha-694782-m04 has defined IP address 192.168.39.107 and MAC address 52:54:00:18:7d:b0 in network mk-ha-694782
	I0416 00:02:39.171859   30904 main.go:141] libmachine: (ha-694782-m04) Calling .GetSSHPort
	I0416 00:02:39.171998   30904 main.go:141] libmachine: (ha-694782-m04) Calling .GetSSHKeyPath
	I0416 00:02:39.172144   30904 main.go:141] libmachine: (ha-694782-m04) Calling .GetSSHUsername
	I0416 00:02:39.172294   30904 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m04/id_rsa Username:docker}
	I0416 00:02:39.252439   30904 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 00:02:39.266272   30904 status.go:257] ha-694782-m04 status: &{Name:ha-694782-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
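Each status pass above runs the same apiserver probe on the control-plane nodes: pgrep for the kube-apiserver process, an attempt to read its freezer cgroup entry (which exits with status 1, presumably because the guest uses a cgroup v2 unified hierarchy without a freezer controller), and finally a GET against https://192.168.39.254:8443/healthz where a 200 response with body "ok" is treated as Running. Below is a hedged sketch of that last healthz probe in Go with the standard library; the endpoint comes from the log, while the insecure TLS setting and timeout are assumptions made so the sketch is self-contained, not details confirmed by the minikube source.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeHealthz issues the kind of request logged as "Checking apiserver
// healthz at ..." and reports whether the endpoint answered 200 with the
// literal body "ok". Skipping certificate verification is an assumption of
// this sketch, since the apiserver cert is signed by the cluster CA.
func probeHealthz(url string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return false, err
	}
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	healthy, err := probeHealthz("https://192.168.39.254:8443/healthz")
	fmt.Println(healthy, err)
}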
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-694782 status -v=7 --alsologtostderr: exit status 7 (629.092495ms)

                                                
                                                
-- stdout --
	ha-694782
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-694782-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-694782-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-694782-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0416 00:02:49.503184   31009 out.go:291] Setting OutFile to fd 1 ...
	I0416 00:02:49.503281   31009 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:02:49.503285   31009 out.go:304] Setting ErrFile to fd 2...
	I0416 00:02:49.503290   31009 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:02:49.503461   31009 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-7542/.minikube/bin
	I0416 00:02:49.503615   31009 out.go:298] Setting JSON to false
	I0416 00:02:49.503638   31009 mustload.go:65] Loading cluster: ha-694782
	I0416 00:02:49.503691   31009 notify.go:220] Checking for updates...
	I0416 00:02:49.504047   31009 config.go:182] Loaded profile config "ha-694782": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 00:02:49.504063   31009 status.go:255] checking status of ha-694782 ...
	I0416 00:02:49.504499   31009 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:49.504564   31009 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:49.518928   31009 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42667
	I0416 00:02:49.519381   31009 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:49.519886   31009 main.go:141] libmachine: Using API Version  1
	I0416 00:02:49.519906   31009 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:49.520256   31009 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:49.520447   31009 main.go:141] libmachine: (ha-694782) Calling .GetState
	I0416 00:02:49.521871   31009 status.go:330] ha-694782 host status = "Running" (err=<nil>)
	I0416 00:02:49.521885   31009 host.go:66] Checking if "ha-694782" exists ...
	I0416 00:02:49.522150   31009 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:49.522179   31009 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:49.536186   31009 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41661
	I0416 00:02:49.536551   31009 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:49.536956   31009 main.go:141] libmachine: Using API Version  1
	I0416 00:02:49.536975   31009 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:49.537324   31009 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:49.537508   31009 main.go:141] libmachine: (ha-694782) Calling .GetIP
	I0416 00:02:49.540168   31009 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:02:49.540589   31009 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0416 00:02:49.540608   31009 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:02:49.540741   31009 host.go:66] Checking if "ha-694782" exists ...
	I0416 00:02:49.541010   31009 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:49.541070   31009 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:49.555125   31009 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45425
	I0416 00:02:49.555504   31009 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:49.556028   31009 main.go:141] libmachine: Using API Version  1
	I0416 00:02:49.556049   31009 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:49.556395   31009 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:49.556578   31009 main.go:141] libmachine: (ha-694782) Calling .DriverName
	I0416 00:02:49.556750   31009 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 00:02:49.556776   31009 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0416 00:02:49.559355   31009 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:02:49.559725   31009 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0416 00:02:49.559747   31009 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:02:49.559899   31009 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0416 00:02:49.560051   31009 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0416 00:02:49.560160   31009 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0416 00:02:49.560292   31009 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782/id_rsa Username:docker}
	I0416 00:02:49.658398   31009 ssh_runner.go:195] Run: systemctl --version
	I0416 00:02:49.667770   31009 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 00:02:49.682982   31009 kubeconfig.go:125] found "ha-694782" server: "https://192.168.39.254:8443"
	I0416 00:02:49.683020   31009 api_server.go:166] Checking apiserver status ...
	I0416 00:02:49.683049   31009 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 00:02:49.697669   31009 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1182/cgroup
	W0416 00:02:49.707547   31009 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1182/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0416 00:02:49.707602   31009 ssh_runner.go:195] Run: ls
	I0416 00:02:49.712245   31009 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0416 00:02:49.716320   31009 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0416 00:02:49.716342   31009 status.go:422] ha-694782 apiserver status = Running (err=<nil>)
	I0416 00:02:49.716352   31009 status.go:257] ha-694782 status: &{Name:ha-694782 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 00:02:49.716381   31009 status.go:255] checking status of ha-694782-m02 ...
	I0416 00:02:49.716663   31009 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:49.716720   31009 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:49.731626   31009 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36539
	I0416 00:02:49.731999   31009 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:49.732428   31009 main.go:141] libmachine: Using API Version  1
	I0416 00:02:49.732446   31009 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:49.732723   31009 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:49.732882   31009 main.go:141] libmachine: (ha-694782-m02) Calling .GetState
	I0416 00:02:49.734357   31009 status.go:330] ha-694782-m02 host status = "Stopped" (err=<nil>)
	I0416 00:02:49.734373   31009 status.go:343] host is not running, skipping remaining checks
	I0416 00:02:49.734380   31009 status.go:257] ha-694782-m02 status: &{Name:ha-694782-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 00:02:49.734394   31009 status.go:255] checking status of ha-694782-m03 ...
	I0416 00:02:49.734691   31009 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:49.734737   31009 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:49.748709   31009 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38003
	I0416 00:02:49.749054   31009 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:49.749539   31009 main.go:141] libmachine: Using API Version  1
	I0416 00:02:49.749571   31009 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:49.749813   31009 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:49.749983   31009 main.go:141] libmachine: (ha-694782-m03) Calling .GetState
	I0416 00:02:49.751510   31009 status.go:330] ha-694782-m03 host status = "Running" (err=<nil>)
	I0416 00:02:49.751541   31009 host.go:66] Checking if "ha-694782-m03" exists ...
	I0416 00:02:49.751940   31009 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:49.751983   31009 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:49.766599   31009 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35571
	I0416 00:02:49.766989   31009 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:49.767412   31009 main.go:141] libmachine: Using API Version  1
	I0416 00:02:49.767433   31009 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:49.767752   31009 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:49.767964   31009 main.go:141] libmachine: (ha-694782-m03) Calling .GetIP
	I0416 00:02:49.771403   31009 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0416 00:02:49.771625   31009 main.go:141] libmachine: (ha-694782-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a7:e5", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:57:21 +0000 UTC Type:0 Mac:52:54:00:fc:a7:e5 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-694782-m03 Clientid:01:52:54:00:fc:a7:e5}
	I0416 00:02:49.771649   31009 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0416 00:02:49.771772   31009 host.go:66] Checking if "ha-694782-m03" exists ...
	I0416 00:02:49.772110   31009 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:49.772144   31009 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:49.785976   31009 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34237
	I0416 00:02:49.786336   31009 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:49.786766   31009 main.go:141] libmachine: Using API Version  1
	I0416 00:02:49.786780   31009 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:49.787089   31009 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:49.787248   31009 main.go:141] libmachine: (ha-694782-m03) Calling .DriverName
	I0416 00:02:49.787437   31009 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 00:02:49.787462   31009 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHHostname
	I0416 00:02:49.790044   31009 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0416 00:02:49.790461   31009 main.go:141] libmachine: (ha-694782-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a7:e5", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:57:21 +0000 UTC Type:0 Mac:52:54:00:fc:a7:e5 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-694782-m03 Clientid:01:52:54:00:fc:a7:e5}
	I0416 00:02:49.790496   31009 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0416 00:02:49.790664   31009 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHPort
	I0416 00:02:49.790818   31009 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHKeyPath
	I0416 00:02:49.790954   31009 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHUsername
	I0416 00:02:49.791097   31009 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m03/id_rsa Username:docker}
	I0416 00:02:49.876708   31009 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 00:02:49.891617   31009 kubeconfig.go:125] found "ha-694782" server: "https://192.168.39.254:8443"
	I0416 00:02:49.891641   31009 api_server.go:166] Checking apiserver status ...
	I0416 00:02:49.891670   31009 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 00:02:49.905531   31009 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1540/cgroup
	W0416 00:02:49.915468   31009 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1540/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0416 00:02:49.915525   31009 ssh_runner.go:195] Run: ls
	I0416 00:02:49.920701   31009 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0416 00:02:49.925283   31009 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0416 00:02:49.925302   31009 status.go:422] ha-694782-m03 apiserver status = Running (err=<nil>)
	I0416 00:02:49.925310   31009 status.go:257] ha-694782-m03 status: &{Name:ha-694782-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 00:02:49.925328   31009 status.go:255] checking status of ha-694782-m04 ...
	I0416 00:02:49.925681   31009 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:49.925720   31009 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:49.939942   31009 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40657
	I0416 00:02:49.940336   31009 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:49.940719   31009 main.go:141] libmachine: Using API Version  1
	I0416 00:02:49.940740   31009 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:49.941093   31009 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:49.941309   31009 main.go:141] libmachine: (ha-694782-m04) Calling .GetState
	I0416 00:02:49.943015   31009 status.go:330] ha-694782-m04 host status = "Running" (err=<nil>)
	I0416 00:02:49.943034   31009 host.go:66] Checking if "ha-694782-m04" exists ...
	I0416 00:02:49.943329   31009 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:49.943367   31009 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:49.957316   31009 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35639
	I0416 00:02:49.957693   31009 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:49.958113   31009 main.go:141] libmachine: Using API Version  1
	I0416 00:02:49.958136   31009 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:49.958455   31009 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:49.958620   31009 main.go:141] libmachine: (ha-694782-m04) Calling .GetIP
	I0416 00:02:49.961339   31009 main.go:141] libmachine: (ha-694782-m04) DBG | domain ha-694782-m04 has defined MAC address 52:54:00:18:7d:b0 in network mk-ha-694782
	I0416 00:02:49.961825   31009 main.go:141] libmachine: (ha-694782-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:7d:b0", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:58:43 +0000 UTC Type:0 Mac:52:54:00:18:7d:b0 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:ha-694782-m04 Clientid:01:52:54:00:18:7d:b0}
	I0416 00:02:49.961866   31009 main.go:141] libmachine: (ha-694782-m04) DBG | domain ha-694782-m04 has defined IP address 192.168.39.107 and MAC address 52:54:00:18:7d:b0 in network mk-ha-694782
	I0416 00:02:49.961984   31009 host.go:66] Checking if "ha-694782-m04" exists ...
	I0416 00:02:49.962263   31009 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:49.962294   31009 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:49.976288   31009 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42323
	I0416 00:02:49.976653   31009 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:49.977076   31009 main.go:141] libmachine: Using API Version  1
	I0416 00:02:49.977096   31009 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:49.977438   31009 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:49.977630   31009 main.go:141] libmachine: (ha-694782-m04) Calling .DriverName
	I0416 00:02:49.977793   31009 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 00:02:49.977809   31009 main.go:141] libmachine: (ha-694782-m04) Calling .GetSSHHostname
	I0416 00:02:49.980221   31009 main.go:141] libmachine: (ha-694782-m04) DBG | domain ha-694782-m04 has defined MAC address 52:54:00:18:7d:b0 in network mk-ha-694782
	I0416 00:02:49.980649   31009 main.go:141] libmachine: (ha-694782-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:7d:b0", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:58:43 +0000 UTC Type:0 Mac:52:54:00:18:7d:b0 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:ha-694782-m04 Clientid:01:52:54:00:18:7d:b0}
	I0416 00:02:49.980684   31009 main.go:141] libmachine: (ha-694782-m04) DBG | domain ha-694782-m04 has defined IP address 192.168.39.107 and MAC address 52:54:00:18:7d:b0 in network mk-ha-694782
	I0416 00:02:49.980804   31009 main.go:141] libmachine: (ha-694782-m04) Calling .GetSSHPort
	I0416 00:02:49.981000   31009 main.go:141] libmachine: (ha-694782-m04) Calling .GetSSHKeyPath
	I0416 00:02:49.981135   31009 main.go:141] libmachine: (ha-694782-m04) Calling .GetSSHUsername
	I0416 00:02:49.981300   31009 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m04/id_rsa Username:docker}
	I0416 00:02:50.060261   31009 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 00:02:50.074634   31009 status.go:257] ha-694782-m04 status: &{Name:ha-694782-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-694782 status -v=7 --alsologtostderr" : exit status 7
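The status output above shows ha-694782-m02 with Host, Kubelet and APIServer all Stopped after the node start command, which is why the status invocation exits non-zero (exit status 7 in this run). The short Go sketch below shows one way to re-run the same check outside the test harness and surface the exit code; the binary path and profile name are copied from the log above, the comment about the exit code is an assumption based on the Stopped node shown here, and this is not the actual ha_test.go helper.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Re-run the same command the test invoked (binary path, profile and flags taken from the log above).
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-694782", "status", "-v=7", "--alsologtostderr")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if exitErr, ok := err.(*exec.ExitError); ok {
		// With ha-694782-m02 stopped, minikube status exits non-zero; this run reported 7.
		fmt.Println("minikube status exit code:", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("could not run minikube status:", err)
	}
}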
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-694782 -n ha-694782
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-694782 logs -n 25: (1.377010043s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   |    Version     |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| ssh     | ha-694782 ssh -n                                                                 | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | ha-694782-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-694782 cp ha-694782-m03:/home/docker/cp-test.txt                              | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | ha-694782:/home/docker/cp-test_ha-694782-m03_ha-694782.txt                       |           |         |                |                     |                     |
	| ssh     | ha-694782 ssh -n                                                                 | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | ha-694782-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-694782 ssh -n ha-694782 sudo cat                                              | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | /home/docker/cp-test_ha-694782-m03_ha-694782.txt                                 |           |         |                |                     |                     |
	| cp      | ha-694782 cp ha-694782-m03:/home/docker/cp-test.txt                              | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | ha-694782-m02:/home/docker/cp-test_ha-694782-m03_ha-694782-m02.txt               |           |         |                |                     |                     |
	| ssh     | ha-694782 ssh -n                                                                 | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | ha-694782-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-694782 ssh -n ha-694782-m02 sudo cat                                          | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | /home/docker/cp-test_ha-694782-m03_ha-694782-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-694782 cp ha-694782-m03:/home/docker/cp-test.txt                              | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | ha-694782-m04:/home/docker/cp-test_ha-694782-m03_ha-694782-m04.txt               |           |         |                |                     |                     |
	| ssh     | ha-694782 ssh -n                                                                 | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | ha-694782-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-694782 ssh -n ha-694782-m04 sudo cat                                          | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | /home/docker/cp-test_ha-694782-m03_ha-694782-m04.txt                             |           |         |                |                     |                     |
	| cp      | ha-694782 cp testdata/cp-test.txt                                                | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | ha-694782-m04:/home/docker/cp-test.txt                                           |           |         |                |                     |                     |
	| ssh     | ha-694782 ssh -n                                                                 | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | ha-694782-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-694782 cp ha-694782-m04:/home/docker/cp-test.txt                              | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4178900617/001/cp-test_ha-694782-m04.txt |           |         |                |                     |                     |
	| ssh     | ha-694782 ssh -n                                                                 | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | ha-694782-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-694782 cp ha-694782-m04:/home/docker/cp-test.txt                              | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | ha-694782:/home/docker/cp-test_ha-694782-m04_ha-694782.txt                       |           |         |                |                     |                     |
	| ssh     | ha-694782 ssh -n                                                                 | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | ha-694782-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-694782 ssh -n ha-694782 sudo cat                                              | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | /home/docker/cp-test_ha-694782-m04_ha-694782.txt                                 |           |         |                |                     |                     |
	| cp      | ha-694782 cp ha-694782-m04:/home/docker/cp-test.txt                              | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | ha-694782-m02:/home/docker/cp-test_ha-694782-m04_ha-694782-m02.txt               |           |         |                |                     |                     |
	| ssh     | ha-694782 ssh -n                                                                 | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | ha-694782-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-694782 ssh -n ha-694782-m02 sudo cat                                          | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | /home/docker/cp-test_ha-694782-m04_ha-694782-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-694782 cp ha-694782-m04:/home/docker/cp-test.txt                              | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | ha-694782-m03:/home/docker/cp-test_ha-694782-m04_ha-694782-m03.txt               |           |         |                |                     |                     |
	| ssh     | ha-694782 ssh -n                                                                 | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | ha-694782-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-694782 ssh -n ha-694782-m03 sudo cat                                          | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | /home/docker/cp-test_ha-694782-m04_ha-694782-m03.txt                             |           |         |                |                     |                     |
	| node    | ha-694782 node stop m02 -v=7                                                     | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| node    | ha-694782 node start m02 -v=7                                                    | ha-694782 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:01 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/15 23:54:50
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0415 23:54:50.606130   25488 out.go:291] Setting OutFile to fd 1 ...
	I0415 23:54:50.606240   25488 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 23:54:50.606248   25488 out.go:304] Setting ErrFile to fd 2...
	I0415 23:54:50.606252   25488 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 23:54:50.606460   25488 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-7542/.minikube/bin
	I0415 23:54:50.607004   25488 out.go:298] Setting JSON to false
	I0415 23:54:50.607793   25488 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2235,"bootTime":1713223056,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0415 23:54:50.607851   25488 start.go:139] virtualization: kvm guest
	I0415 23:54:50.610026   25488 out.go:177] * [ha-694782] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0415 23:54:50.611788   25488 notify.go:220] Checking for updates...
	I0415 23:54:50.611805   25488 out.go:177]   - MINIKUBE_LOCATION=18647
	I0415 23:54:50.613178   25488 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 23:54:50.614591   25488 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0415 23:54:50.615907   25488 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18647-7542/.minikube
	I0415 23:54:50.617172   25488 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0415 23:54:50.618341   25488 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 23:54:50.619658   25488 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 23:54:50.652307   25488 out.go:177] * Using the kvm2 driver based on user configuration
	I0415 23:54:50.653739   25488 start.go:297] selected driver: kvm2
	I0415 23:54:50.653767   25488 start.go:901] validating driver "kvm2" against <nil>
	I0415 23:54:50.653785   25488 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 23:54:50.654543   25488 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 23:54:50.654633   25488 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18647-7542/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0415 23:54:50.668711   25488 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0415 23:54:50.668755   25488 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 23:54:50.669017   25488 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 23:54:50.669103   25488 cni.go:84] Creating CNI manager for ""
	I0415 23:54:50.669120   25488 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0415 23:54:50.669126   25488 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0415 23:54:50.669204   25488 start.go:340] cluster config:
	{Name:ha-694782 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-694782 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 23:54:50.669347   25488 iso.go:125] acquiring lock: {Name:mk848ef90fbc2a1876645fc8fc16af382c3bcaa9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 23:54:50.671062   25488 out.go:177] * Starting "ha-694782" primary control-plane node in "ha-694782" cluster
	I0415 23:54:50.672327   25488 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0415 23:54:50.672366   25488 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0415 23:54:50.672378   25488 cache.go:56] Caching tarball of preloaded images
	I0415 23:54:50.672455   25488 preload.go:173] Found /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0415 23:54:50.672467   25488 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0415 23:54:50.672859   25488 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/config.json ...
	I0415 23:54:50.672882   25488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/config.json: {Name:mkfb3d47f0b66cecdcf38640e2fb461a34cd00df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:54:50.673031   25488 start.go:360] acquireMachinesLock for ha-694782: {Name:mk92bff49461487f8cebf2747ccf61ccb9c772a2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 23:54:50.673061   25488 start.go:364] duration metric: took 16.312µs to acquireMachinesLock for "ha-694782"
	I0415 23:54:50.673077   25488 start.go:93] Provisioning new machine with config: &{Name:ha-694782 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-694782 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0415 23:54:50.673135   25488 start.go:125] createHost starting for "" (driver="kvm2")
	I0415 23:54:50.674828   25488 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0415 23:54:50.674949   25488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:54:50.674981   25488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:54:50.688574   25488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36705
	I0415 23:54:50.688970   25488 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:54:50.689477   25488 main.go:141] libmachine: Using API Version  1
	I0415 23:54:50.689501   25488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:54:50.689786   25488 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:54:50.689950   25488 main.go:141] libmachine: (ha-694782) Calling .GetMachineName
	I0415 23:54:50.690098   25488 main.go:141] libmachine: (ha-694782) Calling .DriverName
	I0415 23:54:50.690208   25488 start.go:159] libmachine.API.Create for "ha-694782" (driver="kvm2")
	I0415 23:54:50.690237   25488 client.go:168] LocalClient.Create starting
	I0415 23:54:50.690266   25488 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem
	I0415 23:54:50.690295   25488 main.go:141] libmachine: Decoding PEM data...
	I0415 23:54:50.690308   25488 main.go:141] libmachine: Parsing certificate...
	I0415 23:54:50.690361   25488 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem
	I0415 23:54:50.690379   25488 main.go:141] libmachine: Decoding PEM data...
	I0415 23:54:50.690389   25488 main.go:141] libmachine: Parsing certificate...
	I0415 23:54:50.690411   25488 main.go:141] libmachine: Running pre-create checks...
	I0415 23:54:50.690420   25488 main.go:141] libmachine: (ha-694782) Calling .PreCreateCheck
	I0415 23:54:50.690761   25488 main.go:141] libmachine: (ha-694782) Calling .GetConfigRaw
	I0415 23:54:50.691108   25488 main.go:141] libmachine: Creating machine...
	I0415 23:54:50.691121   25488 main.go:141] libmachine: (ha-694782) Calling .Create
	I0415 23:54:50.691235   25488 main.go:141] libmachine: (ha-694782) Creating KVM machine...
	I0415 23:54:50.692164   25488 main.go:141] libmachine: (ha-694782) DBG | found existing default KVM network
	I0415 23:54:50.692735   25488 main.go:141] libmachine: (ha-694782) DBG | I0415 23:54:50.692624   25511 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ac0}
	I0415 23:54:50.692765   25488 main.go:141] libmachine: (ha-694782) DBG | created network xml: 
	I0415 23:54:50.692781   25488 main.go:141] libmachine: (ha-694782) DBG | <network>
	I0415 23:54:50.692787   25488 main.go:141] libmachine: (ha-694782) DBG |   <name>mk-ha-694782</name>
	I0415 23:54:50.692792   25488 main.go:141] libmachine: (ha-694782) DBG |   <dns enable='no'/>
	I0415 23:54:50.692796   25488 main.go:141] libmachine: (ha-694782) DBG |   
	I0415 23:54:50.692805   25488 main.go:141] libmachine: (ha-694782) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0415 23:54:50.692831   25488 main.go:141] libmachine: (ha-694782) DBG |     <dhcp>
	I0415 23:54:50.692852   25488 main.go:141] libmachine: (ha-694782) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0415 23:54:50.692860   25488 main.go:141] libmachine: (ha-694782) DBG |     </dhcp>
	I0415 23:54:50.692871   25488 main.go:141] libmachine: (ha-694782) DBG |   </ip>
	I0415 23:54:50.692880   25488 main.go:141] libmachine: (ha-694782) DBG |   
	I0415 23:54:50.692898   25488 main.go:141] libmachine: (ha-694782) DBG | </network>
	I0415 23:54:50.692937   25488 main.go:141] libmachine: (ha-694782) DBG | 
	I0415 23:54:50.697386   25488 main.go:141] libmachine: (ha-694782) DBG | trying to create private KVM network mk-ha-694782 192.168.39.0/24...
	I0415 23:54:50.759459   25488 main.go:141] libmachine: (ha-694782) DBG | private KVM network mk-ha-694782 192.168.39.0/24 created
	I0415 23:54:50.759488   25488 main.go:141] libmachine: (ha-694782) DBG | I0415 23:54:50.759414   25511 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18647-7542/.minikube
	I0415 23:54:50.759622   25488 main.go:141] libmachine: (ha-694782) Setting up store path in /home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782 ...
	I0415 23:54:50.759652   25488 main.go:141] libmachine: (ha-694782) Building disk image from file:///home/jenkins/minikube-integration/18647-7542/.minikube/cache/iso/amd64/minikube-v1.33.0-1713175573-18634-amd64.iso
	I0415 23:54:50.759686   25488 main.go:141] libmachine: (ha-694782) Downloading /home/jenkins/minikube-integration/18647-7542/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18647-7542/.minikube/cache/iso/amd64/minikube-v1.33.0-1713175573-18634-amd64.iso...
	I0415 23:54:50.983326   25488 main.go:141] libmachine: (ha-694782) DBG | I0415 23:54:50.983177   25511 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782/id_rsa...
	I0415 23:54:51.195175   25488 main.go:141] libmachine: (ha-694782) DBG | I0415 23:54:51.195055   25511 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782/ha-694782.rawdisk...
	I0415 23:54:51.195206   25488 main.go:141] libmachine: (ha-694782) DBG | Writing magic tar header
	I0415 23:54:51.195217   25488 main.go:141] libmachine: (ha-694782) DBG | Writing SSH key tar header
	I0415 23:54:51.195228   25488 main.go:141] libmachine: (ha-694782) DBG | I0415 23:54:51.195162   25511 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782 ...
	I0415 23:54:51.195241   25488 main.go:141] libmachine: (ha-694782) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782
	I0415 23:54:51.195349   25488 main.go:141] libmachine: (ha-694782) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18647-7542/.minikube/machines
	I0415 23:54:51.195372   25488 main.go:141] libmachine: (ha-694782) Setting executable bit set on /home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782 (perms=drwx------)
	I0415 23:54:51.195379   25488 main.go:141] libmachine: (ha-694782) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18647-7542/.minikube
	I0415 23:54:51.195389   25488 main.go:141] libmachine: (ha-694782) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18647-7542
	I0415 23:54:51.195395   25488 main.go:141] libmachine: (ha-694782) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0415 23:54:51.195406   25488 main.go:141] libmachine: (ha-694782) DBG | Checking permissions on dir: /home/jenkins
	I0415 23:54:51.195411   25488 main.go:141] libmachine: (ha-694782) DBG | Checking permissions on dir: /home
	I0415 23:54:51.195421   25488 main.go:141] libmachine: (ha-694782) DBG | Skipping /home - not owner
	I0415 23:54:51.195431   25488 main.go:141] libmachine: (ha-694782) Setting executable bit set on /home/jenkins/minikube-integration/18647-7542/.minikube/machines (perms=drwxr-xr-x)
	I0415 23:54:51.195444   25488 main.go:141] libmachine: (ha-694782) Setting executable bit set on /home/jenkins/minikube-integration/18647-7542/.minikube (perms=drwxr-xr-x)
	I0415 23:54:51.195454   25488 main.go:141] libmachine: (ha-694782) Setting executable bit set on /home/jenkins/minikube-integration/18647-7542 (perms=drwxrwxr-x)
	I0415 23:54:51.195466   25488 main.go:141] libmachine: (ha-694782) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0415 23:54:51.195475   25488 main.go:141] libmachine: (ha-694782) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0415 23:54:51.195486   25488 main.go:141] libmachine: (ha-694782) Creating domain...
	I0415 23:54:51.196527   25488 main.go:141] libmachine: (ha-694782) define libvirt domain using xml: 
	I0415 23:54:51.196566   25488 main.go:141] libmachine: (ha-694782) <domain type='kvm'>
	I0415 23:54:51.196577   25488 main.go:141] libmachine: (ha-694782)   <name>ha-694782</name>
	I0415 23:54:51.196589   25488 main.go:141] libmachine: (ha-694782)   <memory unit='MiB'>2200</memory>
	I0415 23:54:51.196600   25488 main.go:141] libmachine: (ha-694782)   <vcpu>2</vcpu>
	I0415 23:54:51.196611   25488 main.go:141] libmachine: (ha-694782)   <features>
	I0415 23:54:51.196623   25488 main.go:141] libmachine: (ha-694782)     <acpi/>
	I0415 23:54:51.196633   25488 main.go:141] libmachine: (ha-694782)     <apic/>
	I0415 23:54:51.196645   25488 main.go:141] libmachine: (ha-694782)     <pae/>
	I0415 23:54:51.196662   25488 main.go:141] libmachine: (ha-694782)     
	I0415 23:54:51.196696   25488 main.go:141] libmachine: (ha-694782)   </features>
	I0415 23:54:51.196719   25488 main.go:141] libmachine: (ha-694782)   <cpu mode='host-passthrough'>
	I0415 23:54:51.196733   25488 main.go:141] libmachine: (ha-694782)   
	I0415 23:54:51.196743   25488 main.go:141] libmachine: (ha-694782)   </cpu>
	I0415 23:54:51.196751   25488 main.go:141] libmachine: (ha-694782)   <os>
	I0415 23:54:51.196763   25488 main.go:141] libmachine: (ha-694782)     <type>hvm</type>
	I0415 23:54:51.196773   25488 main.go:141] libmachine: (ha-694782)     <boot dev='cdrom'/>
	I0415 23:54:51.196785   25488 main.go:141] libmachine: (ha-694782)     <boot dev='hd'/>
	I0415 23:54:51.196799   25488 main.go:141] libmachine: (ha-694782)     <bootmenu enable='no'/>
	I0415 23:54:51.196814   25488 main.go:141] libmachine: (ha-694782)   </os>
	I0415 23:54:51.196826   25488 main.go:141] libmachine: (ha-694782)   <devices>
	I0415 23:54:51.196839   25488 main.go:141] libmachine: (ha-694782)     <disk type='file' device='cdrom'>
	I0415 23:54:51.196856   25488 main.go:141] libmachine: (ha-694782)       <source file='/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782/boot2docker.iso'/>
	I0415 23:54:51.196869   25488 main.go:141] libmachine: (ha-694782)       <target dev='hdc' bus='scsi'/>
	I0415 23:54:51.196882   25488 main.go:141] libmachine: (ha-694782)       <readonly/>
	I0415 23:54:51.196901   25488 main.go:141] libmachine: (ha-694782)     </disk>
	I0415 23:54:51.196915   25488 main.go:141] libmachine: (ha-694782)     <disk type='file' device='disk'>
	I0415 23:54:51.196927   25488 main.go:141] libmachine: (ha-694782)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0415 23:54:51.196941   25488 main.go:141] libmachine: (ha-694782)       <source file='/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782/ha-694782.rawdisk'/>
	I0415 23:54:51.196968   25488 main.go:141] libmachine: (ha-694782)       <target dev='hda' bus='virtio'/>
	I0415 23:54:51.196989   25488 main.go:141] libmachine: (ha-694782)     </disk>
	I0415 23:54:51.197010   25488 main.go:141] libmachine: (ha-694782)     <interface type='network'>
	I0415 23:54:51.197023   25488 main.go:141] libmachine: (ha-694782)       <source network='mk-ha-694782'/>
	I0415 23:54:51.197031   25488 main.go:141] libmachine: (ha-694782)       <model type='virtio'/>
	I0415 23:54:51.197044   25488 main.go:141] libmachine: (ha-694782)     </interface>
	I0415 23:54:51.197055   25488 main.go:141] libmachine: (ha-694782)     <interface type='network'>
	I0415 23:54:51.197093   25488 main.go:141] libmachine: (ha-694782)       <source network='default'/>
	I0415 23:54:51.197113   25488 main.go:141] libmachine: (ha-694782)       <model type='virtio'/>
	I0415 23:54:51.197123   25488 main.go:141] libmachine: (ha-694782)     </interface>
	I0415 23:54:51.197134   25488 main.go:141] libmachine: (ha-694782)     <serial type='pty'>
	I0415 23:54:51.197146   25488 main.go:141] libmachine: (ha-694782)       <target port='0'/>
	I0415 23:54:51.197171   25488 main.go:141] libmachine: (ha-694782)     </serial>
	I0415 23:54:51.197184   25488 main.go:141] libmachine: (ha-694782)     <console type='pty'>
	I0415 23:54:51.197199   25488 main.go:141] libmachine: (ha-694782)       <target type='serial' port='0'/>
	I0415 23:54:51.197222   25488 main.go:141] libmachine: (ha-694782)     </console>
	I0415 23:54:51.197232   25488 main.go:141] libmachine: (ha-694782)     <rng model='virtio'>
	I0415 23:54:51.197246   25488 main.go:141] libmachine: (ha-694782)       <backend model='random'>/dev/random</backend>
	I0415 23:54:51.197256   25488 main.go:141] libmachine: (ha-694782)     </rng>
	I0415 23:54:51.197267   25488 main.go:141] libmachine: (ha-694782)     
	I0415 23:54:51.197277   25488 main.go:141] libmachine: (ha-694782)     
	I0415 23:54:51.197295   25488 main.go:141] libmachine: (ha-694782)   </devices>
	I0415 23:54:51.197313   25488 main.go:141] libmachine: (ha-694782) </domain>
	I0415 23:54:51.197328   25488 main.go:141] libmachine: (ha-694782) 
	I0415 23:54:51.201777   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:35:5b:51 in network default
	I0415 23:54:51.202454   25488 main.go:141] libmachine: (ha-694782) Ensuring networks are active...
	I0415 23:54:51.202474   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:54:51.203123   25488 main.go:141] libmachine: (ha-694782) Ensuring network default is active
	I0415 23:54:51.203409   25488 main.go:141] libmachine: (ha-694782) Ensuring network mk-ha-694782 is active
	I0415 23:54:51.203979   25488 main.go:141] libmachine: (ha-694782) Getting domain xml...
	I0415 23:54:51.204605   25488 main.go:141] libmachine: (ha-694782) Creating domain...
	I0415 23:54:52.375923   25488 main.go:141] libmachine: (ha-694782) Waiting to get IP...
	I0415 23:54:52.376780   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:54:52.377171   25488 main.go:141] libmachine: (ha-694782) DBG | unable to find current IP address of domain ha-694782 in network mk-ha-694782
	I0415 23:54:52.377193   25488 main.go:141] libmachine: (ha-694782) DBG | I0415 23:54:52.377133   25511 retry.go:31] will retry after 224.827585ms: waiting for machine to come up
	I0415 23:54:52.603557   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:54:52.603998   25488 main.go:141] libmachine: (ha-694782) DBG | unable to find current IP address of domain ha-694782 in network mk-ha-694782
	I0415 23:54:52.604028   25488 main.go:141] libmachine: (ha-694782) DBG | I0415 23:54:52.603944   25511 retry.go:31] will retry after 374.072733ms: waiting for machine to come up
	I0415 23:54:52.979256   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:54:52.979640   25488 main.go:141] libmachine: (ha-694782) DBG | unable to find current IP address of domain ha-694782 in network mk-ha-694782
	I0415 23:54:52.979666   25488 main.go:141] libmachine: (ha-694782) DBG | I0415 23:54:52.979605   25511 retry.go:31] will retry after 418.209312ms: waiting for machine to come up
	I0415 23:54:53.399075   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:54:53.399504   25488 main.go:141] libmachine: (ha-694782) DBG | unable to find current IP address of domain ha-694782 in network mk-ha-694782
	I0415 23:54:53.399530   25488 main.go:141] libmachine: (ha-694782) DBG | I0415 23:54:53.399477   25511 retry.go:31] will retry after 586.006563ms: waiting for machine to come up
	I0415 23:54:53.987292   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:54:53.987709   25488 main.go:141] libmachine: (ha-694782) DBG | unable to find current IP address of domain ha-694782 in network mk-ha-694782
	I0415 23:54:53.987737   25488 main.go:141] libmachine: (ha-694782) DBG | I0415 23:54:53.987682   25511 retry.go:31] will retry after 585.019145ms: waiting for machine to come up
	I0415 23:54:54.574356   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:54:54.574841   25488 main.go:141] libmachine: (ha-694782) DBG | unable to find current IP address of domain ha-694782 in network mk-ha-694782
	I0415 23:54:54.574881   25488 main.go:141] libmachine: (ha-694782) DBG | I0415 23:54:54.574744   25511 retry.go:31] will retry after 693.591633ms: waiting for machine to come up
	I0415 23:54:55.269527   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:54:55.269989   25488 main.go:141] libmachine: (ha-694782) DBG | unable to find current IP address of domain ha-694782 in network mk-ha-694782
	I0415 23:54:55.270019   25488 main.go:141] libmachine: (ha-694782) DBG | I0415 23:54:55.269932   25511 retry.go:31] will retry after 952.212929ms: waiting for machine to come up
	I0415 23:54:56.223471   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:54:56.223979   25488 main.go:141] libmachine: (ha-694782) DBG | unable to find current IP address of domain ha-694782 in network mk-ha-694782
	I0415 23:54:56.224024   25488 main.go:141] libmachine: (ha-694782) DBG | I0415 23:54:56.223944   25511 retry.go:31] will retry after 1.09753914s: waiting for machine to come up
	I0415 23:54:57.323068   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:54:57.323533   25488 main.go:141] libmachine: (ha-694782) DBG | unable to find current IP address of domain ha-694782 in network mk-ha-694782
	I0415 23:54:57.323562   25488 main.go:141] libmachine: (ha-694782) DBG | I0415 23:54:57.323486   25511 retry.go:31] will retry after 1.219162056s: waiting for machine to come up
	I0415 23:54:58.544818   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:54:58.545234   25488 main.go:141] libmachine: (ha-694782) DBG | unable to find current IP address of domain ha-694782 in network mk-ha-694782
	I0415 23:54:58.545264   25488 main.go:141] libmachine: (ha-694782) DBG | I0415 23:54:58.545190   25511 retry.go:31] will retry after 1.688054549s: waiting for machine to come up
	I0415 23:55:00.234436   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:00.234954   25488 main.go:141] libmachine: (ha-694782) DBG | unable to find current IP address of domain ha-694782 in network mk-ha-694782
	I0415 23:55:00.234978   25488 main.go:141] libmachine: (ha-694782) DBG | I0415 23:55:00.234918   25511 retry.go:31] will retry after 2.111494169s: waiting for machine to come up
	I0415 23:55:02.349084   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:02.349555   25488 main.go:141] libmachine: (ha-694782) DBG | unable to find current IP address of domain ha-694782 in network mk-ha-694782
	I0415 23:55:02.349582   25488 main.go:141] libmachine: (ha-694782) DBG | I0415 23:55:02.349515   25511 retry.go:31] will retry after 2.352035476s: waiting for machine to come up
	I0415 23:55:04.704991   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:04.705417   25488 main.go:141] libmachine: (ha-694782) DBG | unable to find current IP address of domain ha-694782 in network mk-ha-694782
	I0415 23:55:04.705465   25488 main.go:141] libmachine: (ha-694782) DBG | I0415 23:55:04.705380   25511 retry.go:31] will retry after 4.46217908s: waiting for machine to come up
	I0415 23:55:09.171025   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:09.171427   25488 main.go:141] libmachine: (ha-694782) DBG | unable to find current IP address of domain ha-694782 in network mk-ha-694782
	I0415 23:55:09.171457   25488 main.go:141] libmachine: (ha-694782) DBG | I0415 23:55:09.171373   25511 retry.go:31] will retry after 5.185782553s: waiting for machine to come up
	I0415 23:55:14.361556   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:14.362012   25488 main.go:141] libmachine: (ha-694782) Found IP for machine: 192.168.39.41
	I0415 23:55:14.362044   25488 main.go:141] libmachine: (ha-694782) Reserving static IP address...
	I0415 23:55:14.362062   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has current primary IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:14.362420   25488 main.go:141] libmachine: (ha-694782) DBG | unable to find host DHCP lease matching {name: "ha-694782", mac: "52:54:00:b4:cb:f8", ip: "192.168.39.41"} in network mk-ha-694782
	I0415 23:55:14.430861   25488 main.go:141] libmachine: (ha-694782) Reserved static IP address: 192.168.39.41
	I0415 23:55:14.430886   25488 main.go:141] libmachine: (ha-694782) Waiting for SSH to be available...
	I0415 23:55:14.430895   25488 main.go:141] libmachine: (ha-694782) DBG | Getting to WaitForSSH function...
	I0415 23:55:14.433318   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:14.433645   25488 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b4:cb:f8}
	I0415 23:55:14.433674   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:14.433816   25488 main.go:141] libmachine: (ha-694782) DBG | Using SSH client type: external
	I0415 23:55:14.433837   25488 main.go:141] libmachine: (ha-694782) DBG | Using SSH private key: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782/id_rsa (-rw-------)
	I0415 23:55:14.433899   25488 main.go:141] libmachine: (ha-694782) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.41 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0415 23:55:14.433921   25488 main.go:141] libmachine: (ha-694782) DBG | About to run SSH command:
	I0415 23:55:14.433935   25488 main.go:141] libmachine: (ha-694782) DBG | exit 0
	I0415 23:55:14.565504   25488 main.go:141] libmachine: (ha-694782) DBG | SSH cmd err, output: <nil>: 
	I0415 23:55:14.565793   25488 main.go:141] libmachine: (ha-694782) KVM machine creation complete!
	I0415 23:55:14.566100   25488 main.go:141] libmachine: (ha-694782) Calling .GetConfigRaw
	I0415 23:55:14.566610   25488 main.go:141] libmachine: (ha-694782) Calling .DriverName
	I0415 23:55:14.566767   25488 main.go:141] libmachine: (ha-694782) Calling .DriverName
	I0415 23:55:14.566968   25488 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0415 23:55:14.566985   25488 main.go:141] libmachine: (ha-694782) Calling .GetState
	I0415 23:55:14.568071   25488 main.go:141] libmachine: Detecting operating system of created instance...
	I0415 23:55:14.568085   25488 main.go:141] libmachine: Waiting for SSH to be available...
	I0415 23:55:14.568090   25488 main.go:141] libmachine: Getting to WaitForSSH function...
	I0415 23:55:14.568096   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0415 23:55:14.570429   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:14.570739   25488 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0415 23:55:14.570789   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:14.570842   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0415 23:55:14.571017   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0415 23:55:14.571161   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0415 23:55:14.571312   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0415 23:55:14.571497   25488 main.go:141] libmachine: Using SSH client type: native
	I0415 23:55:14.571722   25488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I0415 23:55:14.571735   25488 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0415 23:55:14.684565   25488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0415 23:55:14.684586   25488 main.go:141] libmachine: Detecting the provisioner...
	I0415 23:55:14.684593   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0415 23:55:14.687560   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:14.687976   25488 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0415 23:55:14.688024   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:14.688124   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0415 23:55:14.688336   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0415 23:55:14.688496   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0415 23:55:14.688650   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0415 23:55:14.688772   25488 main.go:141] libmachine: Using SSH client type: native
	I0415 23:55:14.688924   25488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I0415 23:55:14.688933   25488 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0415 23:55:14.802013   25488 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0415 23:55:14.802112   25488 main.go:141] libmachine: found compatible host: buildroot
	I0415 23:55:14.802126   25488 main.go:141] libmachine: Provisioning with buildroot...
	I0415 23:55:14.802142   25488 main.go:141] libmachine: (ha-694782) Calling .GetMachineName
	I0415 23:55:14.802388   25488 buildroot.go:166] provisioning hostname "ha-694782"
	I0415 23:55:14.802411   25488 main.go:141] libmachine: (ha-694782) Calling .GetMachineName
	I0415 23:55:14.802594   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0415 23:55:14.804880   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:14.805196   25488 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0415 23:55:14.805226   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:14.805346   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0415 23:55:14.805531   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0415 23:55:14.805686   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0415 23:55:14.805808   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0415 23:55:14.805939   25488 main.go:141] libmachine: Using SSH client type: native
	I0415 23:55:14.806093   25488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I0415 23:55:14.806105   25488 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-694782 && echo "ha-694782" | sudo tee /etc/hostname
	I0415 23:55:14.939097   25488 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-694782
	
	I0415 23:55:14.939122   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0415 23:55:14.941608   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:14.941926   25488 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0415 23:55:14.941961   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:14.942109   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0415 23:55:14.942272   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0415 23:55:14.942416   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0415 23:55:14.942559   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0415 23:55:14.942714   25488 main.go:141] libmachine: Using SSH client type: native
	I0415 23:55:14.942857   25488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I0415 23:55:14.942872   25488 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-694782' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-694782/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-694782' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0415 23:55:15.063677   25488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
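Editor's note: the SSH snippet above keeps /etc/hosts in sync with the new hostname idempotently: it only touches the file when no line already ends in ha-694782, and then either rewrites an existing 127.0.1.1 entry or appends one. A minimal Go sketch of the same edit, operating on the file contents as a string (the hostname and sample input are illustrative, not taken from this run):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostsEntry mirrors the logged shell logic: if no line already maps
// the hostname, rewrite an existing "127.0.1.1 ..." line or append a new one.
func ensureHostsEntry(hosts, hostname string) string {
	hasName := regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`)
	if hasName.MatchString(hosts) {
		return hosts // already present, nothing to do
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+hostname)
	}
	if hosts != "" && !strings.HasSuffix(hosts, "\n") {
		hosts += "\n"
	}
	return hosts + "127.0.1.1 " + hostname + "\n"
}

func main() {
	in := "127.0.0.1 localhost\n127.0.1.1 minikube\n"
	fmt.Print(ensureHostsEntry(in, "ha-694782"))
}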
	I0415 23:55:15.063703   25488 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18647-7542/.minikube CaCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18647-7542/.minikube}
	I0415 23:55:15.063742   25488 buildroot.go:174] setting up certificates
	I0415 23:55:15.063750   25488 provision.go:84] configureAuth start
	I0415 23:55:15.063759   25488 main.go:141] libmachine: (ha-694782) Calling .GetMachineName
	I0415 23:55:15.064013   25488 main.go:141] libmachine: (ha-694782) Calling .GetIP
	I0415 23:55:15.066530   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:15.066860   25488 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0415 23:55:15.066889   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:15.066993   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0415 23:55:15.069088   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:15.069416   25488 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0415 23:55:15.069445   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:15.069606   25488 provision.go:143] copyHostCerts
	I0415 23:55:15.069633   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem
	I0415 23:55:15.069663   25488 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem, removing ...
	I0415 23:55:15.069671   25488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem
	I0415 23:55:15.069735   25488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem (1082 bytes)
	I0415 23:55:15.069840   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem
	I0415 23:55:15.069859   25488 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem, removing ...
	I0415 23:55:15.069864   25488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem
	I0415 23:55:15.069890   25488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem (1123 bytes)
	I0415 23:55:15.069983   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem
	I0415 23:55:15.070002   25488 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem, removing ...
	I0415 23:55:15.070008   25488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem
	I0415 23:55:15.070030   25488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem (1675 bytes)
	I0415 23:55:15.070090   25488 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem org=jenkins.ha-694782 san=[127.0.0.1 192.168.39.41 ha-694782 localhost minikube]
	I0415 23:55:15.187615   25488 provision.go:177] copyRemoteCerts
	I0415 23:55:15.187670   25488 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0415 23:55:15.187690   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0415 23:55:15.190182   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:15.190508   25488 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0415 23:55:15.190534   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:15.190765   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0415 23:55:15.190934   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0415 23:55:15.191081   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0415 23:55:15.191241   25488 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782/id_rsa Username:docker}
	I0415 23:55:15.275500   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0415 23:55:15.275557   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0415 23:55:15.300618   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0415 23:55:15.300671   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0415 23:55:15.324501   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0415 23:55:15.324558   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0415 23:55:15.353326   25488 provision.go:87] duration metric: took 289.565249ms to configureAuth
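Editor's note: configureAuth above generates the machine's server certificate with SANs covering 127.0.0.1, the machine IP, the machine name, localhost and minikube. A hedged crypto/x509 sketch of building such a certificate; the SAN values are copied from the log, while the key size, validity window and self-signing are illustrative assumptions (the real flow signs with the minikube CA):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// SANs as reported by provision.go for this machine.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-694782"}},
		DNSNames:     []string{"ha-694782", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.41")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour), // illustrative validity only
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// Self-signed here for brevity.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("issued %d-byte DER cert with SANs %v %v\n", len(der), tmpl.DNSNames, tmpl.IPAddresses)
}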
	I0415 23:55:15.353354   25488 buildroot.go:189] setting minikube options for container-runtime
	I0415 23:55:15.353553   25488 config.go:182] Loaded profile config "ha-694782": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0415 23:55:15.353632   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0415 23:55:15.356289   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:15.356631   25488 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0415 23:55:15.356654   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:15.356822   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0415 23:55:15.356967   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0415 23:55:15.357093   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0415 23:55:15.357254   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0415 23:55:15.357413   25488 main.go:141] libmachine: Using SSH client type: native
	I0415 23:55:15.357562   25488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I0415 23:55:15.357577   25488 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0415 23:55:15.644902   25488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0415 23:55:15.644936   25488 main.go:141] libmachine: Checking connection to Docker...
	I0415 23:55:15.644946   25488 main.go:141] libmachine: (ha-694782) Calling .GetURL
	I0415 23:55:15.646292   25488 main.go:141] libmachine: (ha-694782) DBG | Using libvirt version 6000000
	I0415 23:55:15.648691   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:15.648986   25488 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0415 23:55:15.649016   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:15.649207   25488 main.go:141] libmachine: Docker is up and running!
	I0415 23:55:15.649225   25488 main.go:141] libmachine: Reticulating splines...
	I0415 23:55:15.649233   25488 client.go:171] duration metric: took 24.958985907s to LocalClient.Create
	I0415 23:55:15.649255   25488 start.go:167] duration metric: took 24.959056749s to libmachine.API.Create "ha-694782"
	I0415 23:55:15.649267   25488 start.go:293] postStartSetup for "ha-694782" (driver="kvm2")
	I0415 23:55:15.649283   25488 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0415 23:55:15.649303   25488 main.go:141] libmachine: (ha-694782) Calling .DriverName
	I0415 23:55:15.649576   25488 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0415 23:55:15.649615   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0415 23:55:15.651796   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:15.652094   25488 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0415 23:55:15.652124   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:15.652232   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0415 23:55:15.652375   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0415 23:55:15.652489   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0415 23:55:15.652562   25488 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782/id_rsa Username:docker}
	I0415 23:55:15.739914   25488 ssh_runner.go:195] Run: cat /etc/os-release
	I0415 23:55:15.743991   25488 info.go:137] Remote host: Buildroot 2023.02.9
	I0415 23:55:15.744009   25488 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/addons for local assets ...
	I0415 23:55:15.744093   25488 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/files for local assets ...
	I0415 23:55:15.744167   25488 filesync.go:149] local asset: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem -> 148972.pem in /etc/ssl/certs
	I0415 23:55:15.744176   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem -> /etc/ssl/certs/148972.pem
	I0415 23:55:15.744265   25488 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0415 23:55:15.754262   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /etc/ssl/certs/148972.pem (1708 bytes)
	I0415 23:55:15.779399   25488 start.go:296] duration metric: took 130.115766ms for postStartSetup
	I0415 23:55:15.779460   25488 main.go:141] libmachine: (ha-694782) Calling .GetConfigRaw
	I0415 23:55:15.780056   25488 main.go:141] libmachine: (ha-694782) Calling .GetIP
	I0415 23:55:15.782419   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:15.782804   25488 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0415 23:55:15.782825   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:15.783057   25488 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/config.json ...
	I0415 23:55:15.783234   25488 start.go:128] duration metric: took 25.110089598s to createHost
	I0415 23:55:15.783255   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0415 23:55:15.785447   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:15.785727   25488 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0415 23:55:15.785747   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:15.785876   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0415 23:55:15.786056   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0415 23:55:15.786216   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0415 23:55:15.786372   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0415 23:55:15.786532   25488 main.go:141] libmachine: Using SSH client type: native
	I0415 23:55:15.786679   25488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I0415 23:55:15.786693   25488 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0415 23:55:15.897712   25488 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713225315.873046844
	
	I0415 23:55:15.897732   25488 fix.go:216] guest clock: 1713225315.873046844
	I0415 23:55:15.897738   25488 fix.go:229] Guest: 2024-04-15 23:55:15.873046844 +0000 UTC Remote: 2024-04-15 23:55:15.78324668 +0000 UTC m=+25.222880995 (delta=89.800164ms)
	I0415 23:55:15.897755   25488 fix.go:200] guest clock delta is within tolerance: 89.800164ms
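Editor's note: fix.go compares the guest's `date +%s.%N` output against the host clock and accepts the machine when the drift is within tolerance (89.800164ms in this run). A small Go sketch of that comparison, using the exact values from the log; the one-second tolerance is an assumption for illustration, not the threshold minikube actually applies:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseEpoch turns "1713225315.873046844" (date +%s.%N) into a time.Time.
func parseEpoch(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Right-pad/truncate the fraction to 9 digits of nanoseconds.
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseEpoch("1713225315.873046844") // guest clock from the log
	if err != nil {
		panic(err)
	}
	host := time.Date(2024, 4, 15, 23, 55, 15, 783246680, time.UTC) // "Remote" timestamp
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // assumed tolerance, for illustration
	fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, delta <= tolerance)
}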
	I0415 23:55:15.897760   25488 start.go:83] releasing machines lock for "ha-694782", held for 25.224690951s
	I0415 23:55:15.897776   25488 main.go:141] libmachine: (ha-694782) Calling .DriverName
	I0415 23:55:15.898024   25488 main.go:141] libmachine: (ha-694782) Calling .GetIP
	I0415 23:55:15.900296   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:15.900562   25488 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0415 23:55:15.900584   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:15.900703   25488 main.go:141] libmachine: (ha-694782) Calling .DriverName
	I0415 23:55:15.901150   25488 main.go:141] libmachine: (ha-694782) Calling .DriverName
	I0415 23:55:15.901336   25488 main.go:141] libmachine: (ha-694782) Calling .DriverName
	I0415 23:55:15.901390   25488 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0415 23:55:15.901416   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0415 23:55:15.901532   25488 ssh_runner.go:195] Run: cat /version.json
	I0415 23:55:15.901552   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0415 23:55:15.904140   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:15.904239   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:15.904474   25488 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0415 23:55:15.904498   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:15.904520   25488 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0415 23:55:15.904576   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:15.904628   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0415 23:55:15.904810   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0415 23:55:15.904827   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0415 23:55:15.904972   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0415 23:55:15.904979   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0415 23:55:15.905139   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0415 23:55:15.905150   25488 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782/id_rsa Username:docker}
	I0415 23:55:15.905287   25488 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782/id_rsa Username:docker}
	I0415 23:55:15.986278   25488 ssh_runner.go:195] Run: systemctl --version
	I0415 23:55:16.016341   25488 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0415 23:55:16.177768   25488 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0415 23:55:16.184471   25488 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0415 23:55:16.184546   25488 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0415 23:55:16.200414   25488 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
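Editor's note: before choosing a CNI, the bridge/podman configs under /etc/cni/net.d are renamed with a .mk_disabled suffix so they stop taking precedence, as the disable message above records. A filepath-based Go sketch of the same rename, assuming the directory layout from the log (the real code drives find/mv over SSH instead):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableConflictingCNI renames bridge/podman CNI configs so the runtime
// ignores them, mirroring the "*.mk_disabled" convention from the log.
func disableConflictingCNI(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join(dir, name)
		if err := os.Rename(src, src+".mk_disabled"); err != nil {
			return disabled, err
		}
		disabled = append(disabled, src)
	}
	return disabled, nil
}

func main() {
	disabled, err := disableConflictingCNI("/etc/cni/net.d")
	if err != nil {
		fmt.Println("error:", err)
	}
	fmt.Println("disabled:", disabled)
}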
	I0415 23:55:16.200437   25488 start.go:494] detecting cgroup driver to use...
	I0415 23:55:16.200486   25488 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0415 23:55:16.216228   25488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 23:55:16.230211   25488 docker.go:217] disabling cri-docker service (if available) ...
	I0415 23:55:16.230270   25488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0415 23:55:16.243548   25488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0415 23:55:16.256840   25488 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0415 23:55:16.378336   25488 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0415 23:55:16.521593   25488 docker.go:233] disabling docker service ...
	I0415 23:55:16.521678   25488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0415 23:55:16.536397   25488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0415 23:55:16.549035   25488 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0415 23:55:16.681131   25488 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0415 23:55:16.806474   25488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0415 23:55:16.820636   25488 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 23:55:16.839039   25488 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0415 23:55:16.839089   25488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0415 23:55:16.848913   25488 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0415 23:55:16.848969   25488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0415 23:55:16.859109   25488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0415 23:55:16.869053   25488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0415 23:55:16.879029   25488 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0415 23:55:16.889245   25488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0415 23:55:16.899207   25488 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0415 23:55:16.916484   25488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0415 23:55:16.926771   25488 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0415 23:55:16.936287   25488 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0415 23:55:16.936361   25488 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0415 23:55:16.950617   25488 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
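Editor's note: the netfilter check above is a try-then-fallback pattern: probe the sysctl first, and only load br_netfilter when the key is missing, before enabling ip_forward. A Go sketch of that flow using os/exec (command names taken from the log; error handling simplified, and it obviously needs sudo to succeed):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter probes the bridge-nf sysctl and falls back to
// loading the br_netfilter module when the key does not exist yet.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err == nil {
		return nil // key exists, module already loaded
	}
	if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
		return fmt.Errorf("modprobe br_netfilter: %w", err)
	}
	return nil
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Enable IPv4 forwarding the same way the log does, via a root shell.
	if err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("bridge netfilter and ip_forward configured")
}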
	I0415 23:55:16.962389   25488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 23:55:17.097679   25488 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0415 23:55:17.232809   25488 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0415 23:55:17.232871   25488 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0415 23:55:17.237690   25488 start.go:562] Will wait 60s for crictl version
	I0415 23:55:17.237789   25488 ssh_runner.go:195] Run: which crictl
	I0415 23:55:17.241636   25488 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0415 23:55:17.280999   25488 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0415 23:55:17.281065   25488 ssh_runner.go:195] Run: crio --version
	I0415 23:55:17.309439   25488 ssh_runner.go:195] Run: crio --version
	I0415 23:55:17.339564   25488 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0415 23:55:17.340837   25488 main.go:141] libmachine: (ha-694782) Calling .GetIP
	I0415 23:55:17.343246   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:17.343523   25488 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0415 23:55:17.343549   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:17.343708   25488 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0415 23:55:17.348000   25488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0415 23:55:17.363205   25488 kubeadm.go:877] updating cluster {Name:ha-694782 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Cl
usterName:ha-694782 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0415 23:55:17.363323   25488 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0415 23:55:17.363376   25488 ssh_runner.go:195] Run: sudo crictl images --output json
	I0415 23:55:17.402360   25488 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0415 23:55:17.402437   25488 ssh_runner.go:195] Run: which lz4
	I0415 23:55:17.406895   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0415 23:55:17.406975   25488 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0415 23:55:17.411512   25488 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0415 23:55:17.411539   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0415 23:55:18.881825   25488 crio.go:462] duration metric: took 1.474872286s to copy over tarball
	I0415 23:55:18.881923   25488 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0415 23:55:21.112954   25488 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.230999706s)
	I0415 23:55:21.112993   25488 crio.go:469] duration metric: took 2.231139178s to extract the tarball
	I0415 23:55:21.113002   25488 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0415 23:55:21.150983   25488 ssh_runner.go:195] Run: sudo crictl images --output json
	I0415 23:55:21.197262   25488 crio.go:514] all images are preloaded for cri-o runtime.
	I0415 23:55:21.197287   25488 cache_images.go:84] Images are preloaded, skipping loading
	I0415 23:55:21.197294   25488 kubeadm.go:928] updating node { 192.168.39.41 8443 v1.29.3 crio true true} ...
	I0415 23:55:21.197411   25488 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-694782 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.41
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-694782 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0415 23:55:21.197502   25488 ssh_runner.go:195] Run: crio config
	I0415 23:55:21.248535   25488 cni.go:84] Creating CNI manager for ""
	I0415 23:55:21.248557   25488 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0415 23:55:21.248567   25488 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0415 23:55:21.248591   25488 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.41 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-694782 NodeName:ha-694782 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.41"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.41 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0415 23:55:21.248721   25488 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.41
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-694782"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.41
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.41"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
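Editor's note: the kubeadm config printed above is rendered from the options struct logged at kubeadm.go:181 (AdvertiseAddress, ClusterName, NodeName, pod and service CIDRs, and so on). A small text/template sketch of how such a manifest can be rendered from a struct; the template below is an illustrative fragment populated with values from this run, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// opts carries only the fields this fragment needs; the real kubeadm
// options struct in minikube is much larger.
type opts struct {
	AdvertiseAddress  string
	APIServerPort     int
	NodeName          string
	KubernetesVersion string
	PodSubnet         string
	ServiceCIDR       string
}

const fragment = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	tmpl := template.Must(template.New("kubeadm").Parse(fragment))
	err := tmpl.Execute(os.Stdout, opts{
		AdvertiseAddress:  "192.168.39.41",
		APIServerPort:     8443,
		NodeName:          "ha-694782",
		KubernetesVersion: "v1.29.3",
		PodSubnet:         "10.244.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
	})
	if err != nil {
		panic(err)
	}
}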
	
	I0415 23:55:21.248752   25488 kube-vip.go:111] generating kube-vip config ...
	I0415 23:55:21.248795   25488 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0415 23:55:21.264955   25488 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0415 23:55:21.265054   25488 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
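Editor's note: kube-vip.go:163 above turns on control-plane load-balancing (the lb_enable/lb_port entries in the manifest) only after `modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack` succeeds. A Go sketch of that gate; the modprobe command and the 192.168.39.254 virtual IP come from the log, everything else is illustrative:

package main

import (
	"fmt"
	"os/exec"
)

// controlPlaneLBSupported reports whether the IPVS modules kube-vip's
// load-balancer mode needs could be loaded on this host.
func controlPlaneLBSupported() bool {
	cmd := exec.Command("sudo", "sh", "-c",
		"modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack")
	return cmd.Run() == nil
}

func main() {
	env := map[string]string{
		"cp_enable": "true",
		"address":   "192.168.39.254", // HA virtual IP from the log
	}
	if controlPlaneLBSupported() {
		// Mirrors "auto-enabling control-plane load-balancing in kube-vip".
		env["lb_enable"] = "true"
		env["lb_port"] = "8443"
	}
	fmt.Println(env)
}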
	I0415 23:55:21.265099   25488 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0415 23:55:21.275626   25488 binaries.go:44] Found k8s binaries, skipping transfer
	I0415 23:55:21.275683   25488 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0415 23:55:21.285586   25488 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0415 23:55:21.302311   25488 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0415 23:55:21.318800   25488 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0415 23:55:21.335730   25488 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0415 23:55:21.353231   25488 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0415 23:55:21.357247   25488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0415 23:55:21.369999   25488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 23:55:21.481623   25488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0415 23:55:21.499102   25488 certs.go:68] Setting up /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782 for IP: 192.168.39.41
	I0415 23:55:21.499128   25488 certs.go:194] generating shared ca certs ...
	I0415 23:55:21.499170   25488 certs.go:226] acquiring lock for ca certs: {Name:mkcfa1570e683d94647c63485e1bbb8cf0788316 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:55:21.499354   25488 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key
	I0415 23:55:21.499419   25488 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key
	I0415 23:55:21.499432   25488 certs.go:256] generating profile certs ...
	I0415 23:55:21.499496   25488 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/client.key
	I0415 23:55:21.499515   25488 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/client.crt with IP's: []
	I0415 23:55:21.625470   25488 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/client.crt ...
	I0415 23:55:21.625500   25488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/client.crt: {Name:mk07a742d69663069eab99b3131081c62709ce45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:55:21.625669   25488 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/client.key ...
	I0415 23:55:21.625681   25488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/client.key: {Name:mk3ccbb0986e351adb4bf32ff85ba606547db2f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:55:21.625754   25488 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.key.76980cb2
	I0415 23:55:21.625782   25488 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.crt.76980cb2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.41 192.168.39.254]
	I0415 23:55:21.818979   25488 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.crt.76980cb2 ...
	I0415 23:55:21.819005   25488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.crt.76980cb2: {Name:mk6dcc18833f5ae29fe38a46dbdc51cffe578362 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:55:21.819161   25488 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.key.76980cb2 ...
	I0415 23:55:21.819176   25488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.key.76980cb2: {Name:mkc0beca4f2d7056d3c179d658e3ee6f22c7efc2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:55:21.819241   25488 certs.go:381] copying /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.crt.76980cb2 -> /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.crt
	I0415 23:55:21.819337   25488 certs.go:385] copying /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.key.76980cb2 -> /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.key
	I0415 23:55:21.819397   25488 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/proxy-client.key
	I0415 23:55:21.819413   25488 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/proxy-client.crt with IP's: []
	I0415 23:55:21.908762   25488 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/proxy-client.crt ...
	I0415 23:55:21.908790   25488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/proxy-client.crt: {Name:mkb0c783b3e2cc7bed15cd5d531f54fc8713aa8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:55:21.908929   25488 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/proxy-client.key ...
	I0415 23:55:21.908941   25488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/proxy-client.key: {Name:mke1b4ddd8ce41af36e1be15fd39f5382986b8b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:55:21.909003   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0415 23:55:21.909019   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0415 23:55:21.909029   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0415 23:55:21.909045   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0415 23:55:21.909057   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0415 23:55:21.909076   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0415 23:55:21.909088   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0415 23:55:21.909099   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0415 23:55:21.909142   25488 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem (1338 bytes)
	W0415 23:55:21.909202   25488 certs.go:480] ignoring /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897_empty.pem, impossibly tiny 0 bytes
	I0415 23:55:21.909220   25488 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem (1679 bytes)
	I0415 23:55:21.909245   25488 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem (1082 bytes)
	I0415 23:55:21.909267   25488 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem (1123 bytes)
	I0415 23:55:21.909295   25488 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem (1675 bytes)
	I0415 23:55:21.909333   25488 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem (1708 bytes)
	I0415 23:55:21.909364   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0415 23:55:21.909377   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem -> /usr/share/ca-certificates/14897.pem
	I0415 23:55:21.909389   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem -> /usr/share/ca-certificates/148972.pem
	I0415 23:55:21.910036   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0415 23:55:21.937119   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0415 23:55:21.963049   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0415 23:55:21.988496   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0415 23:55:22.013553   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0415 23:55:22.038484   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0415 23:55:22.064058   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0415 23:55:22.090954   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0415 23:55:22.115549   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0415 23:55:22.139971   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem --> /usr/share/ca-certificates/14897.pem (1338 bytes)
	I0415 23:55:22.164018   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /usr/share/ca-certificates/148972.pem (1708 bytes)
	I0415 23:55:22.189031   25488 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0415 23:55:22.207248   25488 ssh_runner.go:195] Run: openssl version
	I0415 23:55:22.214233   25488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0415 23:55:22.226285   25488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0415 23:55:22.230711   25488 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0415 23:55:22.230772   25488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0415 23:55:22.236792   25488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0415 23:55:22.247943   25488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14897.pem && ln -fs /usr/share/ca-certificates/14897.pem /etc/ssl/certs/14897.pem"
	I0415 23:55:22.261256   25488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14897.pem
	I0415 23:55:22.265792   25488 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 23:49 /usr/share/ca-certificates/14897.pem
	I0415 23:55:22.265863   25488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14897.pem
	I0415 23:55:22.276630   25488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14897.pem /etc/ssl/certs/51391683.0"
	I0415 23:55:22.290556   25488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148972.pem && ln -fs /usr/share/ca-certificates/148972.pem /etc/ssl/certs/148972.pem"
	I0415 23:55:22.302350   25488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148972.pem
	I0415 23:55:22.307320   25488 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 23:49 /usr/share/ca-certificates/148972.pem
	I0415 23:55:22.307397   25488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148972.pem
	I0415 23:55:22.313425   25488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148972.pem /etc/ssl/certs/3ec20f2e.0"
	I0415 23:55:22.324867   25488 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0415 23:55:22.329473   25488 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0415 23:55:22.329522   25488 kubeadm.go:391] StartCluster: {Name:ha-694782 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-694782 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 23:55:22.329588   25488 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0415 23:55:22.329635   25488 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0415 23:55:22.367303   25488 cri.go:89] found id: ""
	I0415 23:55:22.367362   25488 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0415 23:55:22.378138   25488 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0415 23:55:22.388429   25488 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0415 23:55:22.398727   25488 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0415 23:55:22.398750   25488 kubeadm.go:156] found existing configuration files:
	
	I0415 23:55:22.398780   25488 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0415 23:55:22.409087   25488 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0415 23:55:22.409141   25488 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0415 23:55:22.419943   25488 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0415 23:55:22.430269   25488 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0415 23:55:22.430316   25488 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0415 23:55:22.440881   25488 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0415 23:55:22.451037   25488 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0415 23:55:22.451088   25488 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0415 23:55:22.461551   25488 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0415 23:55:22.471515   25488 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0415 23:55:22.471571   25488 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0415 23:55:22.482055   25488 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0415 23:55:22.580974   25488 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0415 23:55:22.581134   25488 kubeadm.go:309] [preflight] Running pre-flight checks
	I0415 23:55:22.704124   25488 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0415 23:55:22.704210   25488 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0415 23:55:22.704285   25488 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0415 23:55:22.916668   25488 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0415 23:55:23.065315   25488 out.go:204]   - Generating certificates and keys ...
	I0415 23:55:23.065451   25488 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0415 23:55:23.065581   25488 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0415 23:55:23.109151   25488 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0415 23:55:23.236029   25488 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0415 23:55:23.645284   25488 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0415 23:55:23.764926   25488 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0415 23:55:23.891122   25488 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0415 23:55:23.891393   25488 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-694782 localhost] and IPs [192.168.39.41 127.0.0.1 ::1]
	I0415 23:55:23.983169   25488 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0415 23:55:23.983446   25488 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-694782 localhost] and IPs [192.168.39.41 127.0.0.1 ::1]
	I0415 23:55:24.307111   25488 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0415 23:55:24.400815   25488 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0415 23:55:24.576282   25488 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0415 23:55:24.576563   25488 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0415 23:55:24.700406   25488 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0415 23:55:24.804967   25488 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0415 23:55:24.985122   25488 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0415 23:55:25.159528   25488 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0415 23:55:25.264556   25488 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0415 23:55:25.265189   25488 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0415 23:55:25.267976   25488 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0415 23:55:25.269979   25488 out.go:204]   - Booting up control plane ...
	I0415 23:55:25.270080   25488 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0415 23:55:25.270205   25488 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0415 23:55:25.272050   25488 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0415 23:55:25.288032   25488 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0415 23:55:25.289004   25488 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0415 23:55:25.289052   25488 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0415 23:55:25.417306   25488 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0415 23:55:32.015872   25488 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.601888 seconds
	I0415 23:55:32.028464   25488 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0415 23:55:32.041239   25488 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0415 23:55:32.574913   25488 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0415 23:55:32.575096   25488 kubeadm.go:309] [mark-control-plane] Marking the node ha-694782 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0415 23:55:33.088760   25488 kubeadm.go:309] [bootstrap-token] Using token: yi105q.89mspfuqu9h3wwqy
	I0415 23:55:33.090154   25488 out.go:204]   - Configuring RBAC rules ...
	I0415 23:55:33.090267   25488 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0415 23:55:33.095046   25488 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0415 23:55:33.107365   25488 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0415 23:55:33.112085   25488 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0415 23:55:33.116134   25488 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0415 23:55:33.119289   25488 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0415 23:55:33.133705   25488 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0415 23:55:33.409805   25488 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0415 23:55:33.500389   25488 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0415 23:55:33.506955   25488 kubeadm.go:309] 
	I0415 23:55:33.507021   25488 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0415 23:55:33.507050   25488 kubeadm.go:309] 
	I0415 23:55:33.507137   25488 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0415 23:55:33.507147   25488 kubeadm.go:309] 
	I0415 23:55:33.507168   25488 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0415 23:55:33.507345   25488 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0415 23:55:33.507423   25488 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0415 23:55:33.507436   25488 kubeadm.go:309] 
	I0415 23:55:33.507537   25488 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0415 23:55:33.507575   25488 kubeadm.go:309] 
	I0415 23:55:33.507650   25488 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0415 23:55:33.507661   25488 kubeadm.go:309] 
	I0415 23:55:33.507742   25488 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0415 23:55:33.507861   25488 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0415 23:55:33.507960   25488 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0415 23:55:33.507976   25488 kubeadm.go:309] 
	I0415 23:55:33.508090   25488 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0415 23:55:33.508210   25488 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0415 23:55:33.508226   25488 kubeadm.go:309] 
	I0415 23:55:33.508348   25488 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token yi105q.89mspfuqu9h3wwqy \
	I0415 23:55:33.508520   25488 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b25013486716312e69a135916990864bd549ec06035b8322dd39250241b50fde \
	I0415 23:55:33.508556   25488 kubeadm.go:309] 	--control-plane 
	I0415 23:55:33.508569   25488 kubeadm.go:309] 
	I0415 23:55:33.508681   25488 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0415 23:55:33.508694   25488 kubeadm.go:309] 
	I0415 23:55:33.509634   25488 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token yi105q.89mspfuqu9h3wwqy \
	I0415 23:55:33.509736   25488 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b25013486716312e69a135916990864bd549ec06035b8322dd39250241b50fde 
	I0415 23:55:33.512618   25488 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0415 23:55:33.512783   25488 cni.go:84] Creating CNI manager for ""
	I0415 23:55:33.512798   25488 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0415 23:55:33.514405   25488 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0415 23:55:33.515657   25488 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0415 23:55:33.525529   25488 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.29.3/kubectl ...
	I0415 23:55:33.525551   25488 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0415 23:55:33.571052   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0415 23:55:34.013192   25488 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0415 23:55:34.013271   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:55:34.013284   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-694782 minikube.k8s.io/updated_at=2024_04_15T23_55_34_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=e98a202b00bb48daab8bbee2f0c7bcf1556f8388 minikube.k8s.io/name=ha-694782 minikube.k8s.io/primary=true
	I0415 23:55:34.137526   25488 ops.go:34] apiserver oom_adj: -16
	I0415 23:55:34.150039   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:55:34.650474   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:55:35.150065   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:55:35.651024   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:55:36.150487   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:55:36.650186   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:55:37.151071   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:55:37.650142   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:55:38.150779   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:55:38.651084   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:55:39.150734   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:55:39.650932   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:55:40.150234   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:55:40.650954   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:55:41.150314   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:55:41.650413   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:55:42.150322   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:55:42.650903   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:55:43.150869   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:55:43.650234   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:55:44.150706   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:55:44.650232   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:55:45.150496   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:55:45.650166   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0415 23:55:45.769665   25488 kubeadm.go:1107] duration metric: took 11.756461432s to wait for elevateKubeSystemPrivileges
	W0415 23:55:45.769708   25488 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0415 23:55:45.769716   25488 kubeadm.go:393] duration metric: took 23.440196109s to StartCluster
	I0415 23:55:45.769735   25488 settings.go:142] acquiring lock: {Name:mk6e42a297b4f7bfb79727f203ae36d752cbb6a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:55:45.769832   25488 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0415 23:55:45.770777   25488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/kubeconfig: {Name:mkbb3b028de7d57df8335e83f6dfa1b0eacb2fb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:55:45.770999   25488 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0415 23:55:45.771009   25488 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0415 23:55:45.771030   25488 start.go:240] waiting for startup goroutines ...
	I0415 23:55:45.771050   25488 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0415 23:55:45.771126   25488 addons.go:69] Setting storage-provisioner=true in profile "ha-694782"
	I0415 23:55:45.771136   25488 addons.go:69] Setting default-storageclass=true in profile "ha-694782"
	I0415 23:55:45.771171   25488 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-694782"
	I0415 23:55:45.771172   25488 addons.go:234] Setting addon storage-provisioner=true in "ha-694782"
	I0415 23:55:45.771282   25488 config.go:182] Loaded profile config "ha-694782": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0415 23:55:45.771319   25488 host.go:66] Checking if "ha-694782" exists ...
	I0415 23:55:45.771665   25488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:55:45.771671   25488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:55:45.771691   25488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:55:45.771708   25488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:55:45.791793   25488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40765
	I0415 23:55:45.791916   25488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35107
	I0415 23:55:45.792241   25488 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:55:45.792278   25488 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:55:45.792783   25488 main.go:141] libmachine: Using API Version  1
	I0415 23:55:45.792802   25488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:55:45.792906   25488 main.go:141] libmachine: Using API Version  1
	I0415 23:55:45.792940   25488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:55:45.793151   25488 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:55:45.793244   25488 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:55:45.793435   25488 main.go:141] libmachine: (ha-694782) Calling .GetState
	I0415 23:55:45.793818   25488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:55:45.793852   25488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:55:45.795550   25488 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0415 23:55:45.795808   25488 kapi.go:59] client config for ha-694782: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/client.crt", KeyFile:"/home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/client.key", CAFile:"/home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5e000), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0415 23:55:45.796247   25488 cert_rotation.go:137] Starting client certificate rotation controller
	I0415 23:55:45.796415   25488 addons.go:234] Setting addon default-storageclass=true in "ha-694782"
	I0415 23:55:45.796466   25488 host.go:66] Checking if "ha-694782" exists ...
	I0415 23:55:45.796748   25488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:55:45.796777   25488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:55:45.808920   25488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43323
	I0415 23:55:45.809423   25488 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:55:45.809893   25488 main.go:141] libmachine: Using API Version  1
	I0415 23:55:45.809916   25488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:55:45.810234   25488 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:55:45.810459   25488 main.go:141] libmachine: (ha-694782) Calling .GetState
	I0415 23:55:45.811133   25488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44435
	I0415 23:55:45.811497   25488 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:55:45.811979   25488 main.go:141] libmachine: Using API Version  1
	I0415 23:55:45.811997   25488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:55:45.812277   25488 main.go:141] libmachine: (ha-694782) Calling .DriverName
	I0415 23:55:45.812342   25488 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:55:45.813905   25488 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0415 23:55:45.812842   25488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:55:45.815625   25488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:55:45.815744   25488 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0415 23:55:45.815765   25488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0415 23:55:45.815788   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0415 23:55:45.819266   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:45.819694   25488 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0415 23:55:45.819724   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:45.819847   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0415 23:55:45.820014   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0415 23:55:45.820165   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0415 23:55:45.820329   25488 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782/id_rsa Username:docker}
	I0415 23:55:45.830753   25488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41909
	I0415 23:55:45.831158   25488 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:55:45.831586   25488 main.go:141] libmachine: Using API Version  1
	I0415 23:55:45.831608   25488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:55:45.831948   25488 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:55:45.832119   25488 main.go:141] libmachine: (ha-694782) Calling .GetState
	I0415 23:55:45.833777   25488 main.go:141] libmachine: (ha-694782) Calling .DriverName
	I0415 23:55:45.834049   25488 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0415 23:55:45.834068   25488 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0415 23:55:45.834086   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0415 23:55:45.836607   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:45.837022   25488 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0415 23:55:45.837047   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:55:45.837227   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0415 23:55:45.837425   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0415 23:55:45.837576   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0415 23:55:45.837708   25488 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782/id_rsa Username:docker}
	I0415 23:55:45.943610   25488 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0415 23:55:46.052142   25488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0415 23:55:46.077445   25488 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0415 23:55:46.606986   25488 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0415 23:55:46.915986   25488 main.go:141] libmachine: Making call to close driver server
	I0415 23:55:46.916012   25488 main.go:141] libmachine: (ha-694782) Calling .Close
	I0415 23:55:46.915995   25488 main.go:141] libmachine: Making call to close driver server
	I0415 23:55:46.916065   25488 main.go:141] libmachine: (ha-694782) Calling .Close
	I0415 23:55:46.916297   25488 main.go:141] libmachine: Successfully made call to close driver server
	I0415 23:55:46.916318   25488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 23:55:46.916328   25488 main.go:141] libmachine: Making call to close driver server
	I0415 23:55:46.916336   25488 main.go:141] libmachine: (ha-694782) Calling .Close
	I0415 23:55:46.916303   25488 main.go:141] libmachine: Successfully made call to close driver server
	I0415 23:55:46.916364   25488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 23:55:46.916339   25488 main.go:141] libmachine: (ha-694782) DBG | Closing plugin on server side
	I0415 23:55:46.916376   25488 main.go:141] libmachine: Making call to close driver server
	I0415 23:55:46.916385   25488 main.go:141] libmachine: (ha-694782) Calling .Close
	I0415 23:55:46.916544   25488 main.go:141] libmachine: Successfully made call to close driver server
	I0415 23:55:46.916559   25488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 23:55:46.917639   25488 main.go:141] libmachine: (ha-694782) DBG | Closing plugin on server side
	I0415 23:55:46.917658   25488 main.go:141] libmachine: Successfully made call to close driver server
	I0415 23:55:46.917671   25488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 23:55:46.917772   25488 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0415 23:55:46.917785   25488 round_trippers.go:469] Request Headers:
	I0415 23:55:46.917796   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:55:46.917803   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:55:46.927026   25488 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0415 23:55:46.927833   25488 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0415 23:55:46.927852   25488 round_trippers.go:469] Request Headers:
	I0415 23:55:46.927863   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:55:46.927869   25488 round_trippers.go:473]     Content-Type: application/json
	I0415 23:55:46.927874   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:55:46.935690   25488 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0415 23:55:46.935822   25488 main.go:141] libmachine: Making call to close driver server
	I0415 23:55:46.935856   25488 main.go:141] libmachine: (ha-694782) Calling .Close
	I0415 23:55:46.936139   25488 main.go:141] libmachine: Successfully made call to close driver server
	I0415 23:55:46.936158   25488 main.go:141] libmachine: Making call to close connection to plugin binary
	I0415 23:55:46.936153   25488 main.go:141] libmachine: (ha-694782) DBG | Closing plugin on server side
	I0415 23:55:46.937979   25488 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0415 23:55:46.939310   25488 addons.go:505] duration metric: took 1.168263927s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0415 23:55:46.939350   25488 start.go:245] waiting for cluster config update ...
	I0415 23:55:46.939369   25488 start.go:254] writing updated cluster config ...
	I0415 23:55:46.941139   25488 out.go:177] 
	I0415 23:55:46.942803   25488 config.go:182] Loaded profile config "ha-694782": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0415 23:55:46.942906   25488 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/config.json ...
	I0415 23:55:46.944817   25488 out.go:177] * Starting "ha-694782-m02" control-plane node in "ha-694782" cluster
	I0415 23:55:46.946354   25488 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0415 23:55:46.946380   25488 cache.go:56] Caching tarball of preloaded images
	I0415 23:55:46.946465   25488 preload.go:173] Found /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0415 23:55:46.946480   25488 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0415 23:55:46.946572   25488 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/config.json ...
	I0415 23:55:46.946766   25488 start.go:360] acquireMachinesLock for ha-694782-m02: {Name:mk92bff49461487f8cebf2747ccf61ccb9c772a2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 23:55:46.946825   25488 start.go:364] duration metric: took 34.719µs to acquireMachinesLock for "ha-694782-m02"
	I0415 23:55:46.946852   25488 start.go:93] Provisioning new machine with config: &{Name:ha-694782 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-694782 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0415 23:55:46.946951   25488 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0415 23:55:46.948709   25488 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0415 23:55:46.948795   25488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:55:46.948830   25488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:55:46.963384   25488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37699
	I0415 23:55:46.963856   25488 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:55:46.964274   25488 main.go:141] libmachine: Using API Version  1
	I0415 23:55:46.964298   25488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:55:46.964655   25488 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:55:46.964834   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetMachineName
	I0415 23:55:46.964984   25488 main.go:141] libmachine: (ha-694782-m02) Calling .DriverName
	I0415 23:55:46.965207   25488 start.go:159] libmachine.API.Create for "ha-694782" (driver="kvm2")
	I0415 23:55:46.965231   25488 client.go:168] LocalClient.Create starting
	I0415 23:55:46.965266   25488 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem
	I0415 23:55:46.965309   25488 main.go:141] libmachine: Decoding PEM data...
	I0415 23:55:46.965328   25488 main.go:141] libmachine: Parsing certificate...
	I0415 23:55:46.965412   25488 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem
	I0415 23:55:46.965438   25488 main.go:141] libmachine: Decoding PEM data...
	I0415 23:55:46.965455   25488 main.go:141] libmachine: Parsing certificate...
	I0415 23:55:46.965481   25488 main.go:141] libmachine: Running pre-create checks...
	I0415 23:55:46.965492   25488 main.go:141] libmachine: (ha-694782-m02) Calling .PreCreateCheck
	I0415 23:55:46.965652   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetConfigRaw
	I0415 23:55:46.966051   25488 main.go:141] libmachine: Creating machine...
	I0415 23:55:46.966067   25488 main.go:141] libmachine: (ha-694782-m02) Calling .Create
	I0415 23:55:46.966197   25488 main.go:141] libmachine: (ha-694782-m02) Creating KVM machine...
	I0415 23:55:46.967580   25488 main.go:141] libmachine: (ha-694782-m02) DBG | found existing default KVM network
	I0415 23:55:46.967731   25488 main.go:141] libmachine: (ha-694782-m02) DBG | found existing private KVM network mk-ha-694782
	I0415 23:55:46.967895   25488 main.go:141] libmachine: (ha-694782-m02) Setting up store path in /home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m02 ...
	I0415 23:55:46.967930   25488 main.go:141] libmachine: (ha-694782-m02) Building disk image from file:///home/jenkins/minikube-integration/18647-7542/.minikube/cache/iso/amd64/minikube-v1.33.0-1713175573-18634-amd64.iso
	I0415 23:55:46.968003   25488 main.go:141] libmachine: (ha-694782-m02) DBG | I0415 23:55:46.967897   25867 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18647-7542/.minikube
	I0415 23:55:46.968119   25488 main.go:141] libmachine: (ha-694782-m02) Downloading /home/jenkins/minikube-integration/18647-7542/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18647-7542/.minikube/cache/iso/amd64/minikube-v1.33.0-1713175573-18634-amd64.iso...
	I0415 23:55:47.182385   25488 main.go:141] libmachine: (ha-694782-m02) DBG | I0415 23:55:47.182278   25867 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m02/id_rsa...
	I0415 23:55:47.311844   25488 main.go:141] libmachine: (ha-694782-m02) DBG | I0415 23:55:47.311702   25867 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m02/ha-694782-m02.rawdisk...
	I0415 23:55:47.311880   25488 main.go:141] libmachine: (ha-694782-m02) DBG | Writing magic tar header
	I0415 23:55:47.311896   25488 main.go:141] libmachine: (ha-694782-m02) DBG | Writing SSH key tar header
	I0415 23:55:47.311909   25488 main.go:141] libmachine: (ha-694782-m02) DBG | I0415 23:55:47.311847   25867 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m02 ...
	I0415 23:55:47.312081   25488 main.go:141] libmachine: (ha-694782-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m02
	I0415 23:55:47.312102   25488 main.go:141] libmachine: (ha-694782-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18647-7542/.minikube/machines
	I0415 23:55:47.312116   25488 main.go:141] libmachine: (ha-694782-m02) Setting executable bit set on /home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m02 (perms=drwx------)
	I0415 23:55:47.312140   25488 main.go:141] libmachine: (ha-694782-m02) Setting executable bit set on /home/jenkins/minikube-integration/18647-7542/.minikube/machines (perms=drwxr-xr-x)
	I0415 23:55:47.312156   25488 main.go:141] libmachine: (ha-694782-m02) Setting executable bit set on /home/jenkins/minikube-integration/18647-7542/.minikube (perms=drwxr-xr-x)
	I0415 23:55:47.312174   25488 main.go:141] libmachine: (ha-694782-m02) Setting executable bit set on /home/jenkins/minikube-integration/18647-7542 (perms=drwxrwxr-x)
	I0415 23:55:47.312193   25488 main.go:141] libmachine: (ha-694782-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18647-7542/.minikube
	I0415 23:55:47.312207   25488 main.go:141] libmachine: (ha-694782-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0415 23:55:47.312223   25488 main.go:141] libmachine: (ha-694782-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0415 23:55:47.312235   25488 main.go:141] libmachine: (ha-694782-m02) Creating domain...
	I0415 23:55:47.312253   25488 main.go:141] libmachine: (ha-694782-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18647-7542
	I0415 23:55:47.312272   25488 main.go:141] libmachine: (ha-694782-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0415 23:55:47.312298   25488 main.go:141] libmachine: (ha-694782-m02) DBG | Checking permissions on dir: /home/jenkins
	I0415 23:55:47.312335   25488 main.go:141] libmachine: (ha-694782-m02) DBG | Checking permissions on dir: /home
	I0415 23:55:47.312351   25488 main.go:141] libmachine: (ha-694782-m02) DBG | Skipping /home - not owner
	I0415 23:55:47.313126   25488 main.go:141] libmachine: (ha-694782-m02) define libvirt domain using xml: 
	I0415 23:55:47.313153   25488 main.go:141] libmachine: (ha-694782-m02) <domain type='kvm'>
	I0415 23:55:47.313180   25488 main.go:141] libmachine: (ha-694782-m02)   <name>ha-694782-m02</name>
	I0415 23:55:47.313192   25488 main.go:141] libmachine: (ha-694782-m02)   <memory unit='MiB'>2200</memory>
	I0415 23:55:47.313204   25488 main.go:141] libmachine: (ha-694782-m02)   <vcpu>2</vcpu>
	I0415 23:55:47.313210   25488 main.go:141] libmachine: (ha-694782-m02)   <features>
	I0415 23:55:47.313221   25488 main.go:141] libmachine: (ha-694782-m02)     <acpi/>
	I0415 23:55:47.313231   25488 main.go:141] libmachine: (ha-694782-m02)     <apic/>
	I0415 23:55:47.313243   25488 main.go:141] libmachine: (ha-694782-m02)     <pae/>
	I0415 23:55:47.313252   25488 main.go:141] libmachine: (ha-694782-m02)     
	I0415 23:55:47.313265   25488 main.go:141] libmachine: (ha-694782-m02)   </features>
	I0415 23:55:47.313280   25488 main.go:141] libmachine: (ha-694782-m02)   <cpu mode='host-passthrough'>
	I0415 23:55:47.313305   25488 main.go:141] libmachine: (ha-694782-m02)   
	I0415 23:55:47.313317   25488 main.go:141] libmachine: (ha-694782-m02)   </cpu>
	I0415 23:55:47.313325   25488 main.go:141] libmachine: (ha-694782-m02)   <os>
	I0415 23:55:47.313336   25488 main.go:141] libmachine: (ha-694782-m02)     <type>hvm</type>
	I0415 23:55:47.313347   25488 main.go:141] libmachine: (ha-694782-m02)     <boot dev='cdrom'/>
	I0415 23:55:47.313360   25488 main.go:141] libmachine: (ha-694782-m02)     <boot dev='hd'/>
	I0415 23:55:47.313369   25488 main.go:141] libmachine: (ha-694782-m02)     <bootmenu enable='no'/>
	I0415 23:55:47.313378   25488 main.go:141] libmachine: (ha-694782-m02)   </os>
	I0415 23:55:47.313386   25488 main.go:141] libmachine: (ha-694782-m02)   <devices>
	I0415 23:55:47.313397   25488 main.go:141] libmachine: (ha-694782-m02)     <disk type='file' device='cdrom'>
	I0415 23:55:47.313414   25488 main.go:141] libmachine: (ha-694782-m02)       <source file='/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m02/boot2docker.iso'/>
	I0415 23:55:47.313425   25488 main.go:141] libmachine: (ha-694782-m02)       <target dev='hdc' bus='scsi'/>
	I0415 23:55:47.313449   25488 main.go:141] libmachine: (ha-694782-m02)       <readonly/>
	I0415 23:55:47.313467   25488 main.go:141] libmachine: (ha-694782-m02)     </disk>
	I0415 23:55:47.313496   25488 main.go:141] libmachine: (ha-694782-m02)     <disk type='file' device='disk'>
	I0415 23:55:47.313521   25488 main.go:141] libmachine: (ha-694782-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0415 23:55:47.313540   25488 main.go:141] libmachine: (ha-694782-m02)       <source file='/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m02/ha-694782-m02.rawdisk'/>
	I0415 23:55:47.313552   25488 main.go:141] libmachine: (ha-694782-m02)       <target dev='hda' bus='virtio'/>
	I0415 23:55:47.313565   25488 main.go:141] libmachine: (ha-694782-m02)     </disk>
	I0415 23:55:47.313577   25488 main.go:141] libmachine: (ha-694782-m02)     <interface type='network'>
	I0415 23:55:47.313591   25488 main.go:141] libmachine: (ha-694782-m02)       <source network='mk-ha-694782'/>
	I0415 23:55:47.313606   25488 main.go:141] libmachine: (ha-694782-m02)       <model type='virtio'/>
	I0415 23:55:47.313619   25488 main.go:141] libmachine: (ha-694782-m02)     </interface>
	I0415 23:55:47.313630   25488 main.go:141] libmachine: (ha-694782-m02)     <interface type='network'>
	I0415 23:55:47.313644   25488 main.go:141] libmachine: (ha-694782-m02)       <source network='default'/>
	I0415 23:55:47.313655   25488 main.go:141] libmachine: (ha-694782-m02)       <model type='virtio'/>
	I0415 23:55:47.313669   25488 main.go:141] libmachine: (ha-694782-m02)     </interface>
	I0415 23:55:47.313684   25488 main.go:141] libmachine: (ha-694782-m02)     <serial type='pty'>
	I0415 23:55:47.313698   25488 main.go:141] libmachine: (ha-694782-m02)       <target port='0'/>
	I0415 23:55:47.313708   25488 main.go:141] libmachine: (ha-694782-m02)     </serial>
	I0415 23:55:47.313725   25488 main.go:141] libmachine: (ha-694782-m02)     <console type='pty'>
	I0415 23:55:47.313737   25488 main.go:141] libmachine: (ha-694782-m02)       <target type='serial' port='0'/>
	I0415 23:55:47.313749   25488 main.go:141] libmachine: (ha-694782-m02)     </console>
	I0415 23:55:47.313764   25488 main.go:141] libmachine: (ha-694782-m02)     <rng model='virtio'>
	I0415 23:55:47.313778   25488 main.go:141] libmachine: (ha-694782-m02)       <backend model='random'>/dev/random</backend>
	I0415 23:55:47.313787   25488 main.go:141] libmachine: (ha-694782-m02)     </rng>
	I0415 23:55:47.313797   25488 main.go:141] libmachine: (ha-694782-m02)     
	I0415 23:55:47.313807   25488 main.go:141] libmachine: (ha-694782-m02)     
	I0415 23:55:47.313816   25488 main.go:141] libmachine: (ha-694782-m02)   </devices>
	I0415 23:55:47.313827   25488 main.go:141] libmachine: (ha-694782-m02) </domain>
	I0415 23:55:47.313837   25488 main.go:141] libmachine: (ha-694782-m02) 
	I0415 23:55:47.320532   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:46:5c:22 in network default
	I0415 23:55:47.321104   25488 main.go:141] libmachine: (ha-694782-m02) Ensuring networks are active...
	I0415 23:55:47.321126   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:55:47.321879   25488 main.go:141] libmachine: (ha-694782-m02) Ensuring network default is active
	I0415 23:55:47.322191   25488 main.go:141] libmachine: (ha-694782-m02) Ensuring network mk-ha-694782 is active
	I0415 23:55:47.322531   25488 main.go:141] libmachine: (ha-694782-m02) Getting domain xml...
	I0415 23:55:47.323224   25488 main.go:141] libmachine: (ha-694782-m02) Creating domain...
	I0415 23:55:48.527079   25488 main.go:141] libmachine: (ha-694782-m02) Waiting to get IP...
	I0415 23:55:48.527975   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:55:48.528406   25488 main.go:141] libmachine: (ha-694782-m02) DBG | unable to find current IP address of domain ha-694782-m02 in network mk-ha-694782
	I0415 23:55:48.528454   25488 main.go:141] libmachine: (ha-694782-m02) DBG | I0415 23:55:48.528385   25867 retry.go:31] will retry after 193.593289ms: waiting for machine to come up
	I0415 23:55:48.723860   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:55:48.724293   25488 main.go:141] libmachine: (ha-694782-m02) DBG | unable to find current IP address of domain ha-694782-m02 in network mk-ha-694782
	I0415 23:55:48.724322   25488 main.go:141] libmachine: (ha-694782-m02) DBG | I0415 23:55:48.724246   25867 retry.go:31] will retry after 318.142991ms: waiting for machine to come up
	I0415 23:55:49.043718   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:55:49.044212   25488 main.go:141] libmachine: (ha-694782-m02) DBG | unable to find current IP address of domain ha-694782-m02 in network mk-ha-694782
	I0415 23:55:49.044246   25488 main.go:141] libmachine: (ha-694782-m02) DBG | I0415 23:55:49.044160   25867 retry.go:31] will retry after 317.519425ms: waiting for machine to come up
	I0415 23:55:49.363740   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:55:49.364162   25488 main.go:141] libmachine: (ha-694782-m02) DBG | unable to find current IP address of domain ha-694782-m02 in network mk-ha-694782
	I0415 23:55:49.364190   25488 main.go:141] libmachine: (ha-694782-m02) DBG | I0415 23:55:49.364128   25867 retry.go:31] will retry after 499.917098ms: waiting for machine to come up
	I0415 23:55:49.865951   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:55:49.866421   25488 main.go:141] libmachine: (ha-694782-m02) DBG | unable to find current IP address of domain ha-694782-m02 in network mk-ha-694782
	I0415 23:55:49.866457   25488 main.go:141] libmachine: (ha-694782-m02) DBG | I0415 23:55:49.866376   25867 retry.go:31] will retry after 528.145662ms: waiting for machine to come up
	I0415 23:55:50.397290   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:55:50.397725   25488 main.go:141] libmachine: (ha-694782-m02) DBG | unable to find current IP address of domain ha-694782-m02 in network mk-ha-694782
	I0415 23:55:50.397748   25488 main.go:141] libmachine: (ha-694782-m02) DBG | I0415 23:55:50.397678   25867 retry.go:31] will retry after 814.440825ms: waiting for machine to come up
	I0415 23:55:51.213197   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:55:51.213666   25488 main.go:141] libmachine: (ha-694782-m02) DBG | unable to find current IP address of domain ha-694782-m02 in network mk-ha-694782
	I0415 23:55:51.213699   25488 main.go:141] libmachine: (ha-694782-m02) DBG | I0415 23:55:51.213609   25867 retry.go:31] will retry after 1.179244943s: waiting for machine to come up
	I0415 23:55:52.394177   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:55:52.394631   25488 main.go:141] libmachine: (ha-694782-m02) DBG | unable to find current IP address of domain ha-694782-m02 in network mk-ha-694782
	I0415 23:55:52.394659   25488 main.go:141] libmachine: (ha-694782-m02) DBG | I0415 23:55:52.394599   25867 retry.go:31] will retry after 898.22342ms: waiting for machine to come up
	I0415 23:55:53.294395   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:55:53.294869   25488 main.go:141] libmachine: (ha-694782-m02) DBG | unable to find current IP address of domain ha-694782-m02 in network mk-ha-694782
	I0415 23:55:53.294886   25488 main.go:141] libmachine: (ha-694782-m02) DBG | I0415 23:55:53.294828   25867 retry.go:31] will retry after 1.437791451s: waiting for machine to come up
	I0415 23:55:54.734352   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:55:54.734808   25488 main.go:141] libmachine: (ha-694782-m02) DBG | unable to find current IP address of domain ha-694782-m02 in network mk-ha-694782
	I0415 23:55:54.734836   25488 main.go:141] libmachine: (ha-694782-m02) DBG | I0415 23:55:54.734768   25867 retry.go:31] will retry after 1.739624525s: waiting for machine to come up
	I0415 23:55:56.475588   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:55:56.475989   25488 main.go:141] libmachine: (ha-694782-m02) DBG | unable to find current IP address of domain ha-694782-m02 in network mk-ha-694782
	I0415 23:55:56.476012   25488 main.go:141] libmachine: (ha-694782-m02) DBG | I0415 23:55:56.475949   25867 retry.go:31] will retry after 2.659330494s: waiting for machine to come up
	I0415 23:55:59.137388   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:55:59.137822   25488 main.go:141] libmachine: (ha-694782-m02) DBG | unable to find current IP address of domain ha-694782-m02 in network mk-ha-694782
	I0415 23:55:59.137850   25488 main.go:141] libmachine: (ha-694782-m02) DBG | I0415 23:55:59.137783   25867 retry.go:31] will retry after 3.160909712s: waiting for machine to come up
	I0415 23:56:02.299883   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:02.300261   25488 main.go:141] libmachine: (ha-694782-m02) DBG | unable to find current IP address of domain ha-694782-m02 in network mk-ha-694782
	I0415 23:56:02.300290   25488 main.go:141] libmachine: (ha-694782-m02) DBG | I0415 23:56:02.300217   25867 retry.go:31] will retry after 4.421664688s: waiting for machine to come up
	I0415 23:56:06.726660   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:06.727082   25488 main.go:141] libmachine: (ha-694782-m02) DBG | unable to find current IP address of domain ha-694782-m02 in network mk-ha-694782
	I0415 23:56:06.727103   25488 main.go:141] libmachine: (ha-694782-m02) DBG | I0415 23:56:06.727039   25867 retry.go:31] will retry after 3.674569121s: waiting for machine to come up
	I0415 23:56:10.405303   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:10.405819   25488 main.go:141] libmachine: (ha-694782-m02) Found IP for machine: 192.168.39.42
	I0415 23:56:10.405840   25488 main.go:141] libmachine: (ha-694782-m02) Reserving static IP address...
	I0415 23:56:10.405852   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has current primary IP address 192.168.39.42 and MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:10.406216   25488 main.go:141] libmachine: (ha-694782-m02) DBG | unable to find host DHCP lease matching {name: "ha-694782-m02", mac: "52:54:00:70:e2:c3", ip: "192.168.39.42"} in network mk-ha-694782
	I0415 23:56:10.475372   25488 main.go:141] libmachine: (ha-694782-m02) DBG | Getting to WaitForSSH function...
	I0415 23:56:10.475398   25488 main.go:141] libmachine: (ha-694782-m02) Reserved static IP address: 192.168.39.42
	I0415 23:56:10.475417   25488 main.go:141] libmachine: (ha-694782-m02) Waiting for SSH to be available...
	I0415 23:56:10.477891   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:10.478298   25488 main.go:141] libmachine: (ha-694782-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:e2:c3", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:56:01 +0000 UTC Type:0 Mac:52:54:00:70:e2:c3 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:minikube Clientid:01:52:54:00:70:e2:c3}
	I0415 23:56:10.478336   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined IP address 192.168.39.42 and MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:10.478415   25488 main.go:141] libmachine: (ha-694782-m02) DBG | Using SSH client type: external
	I0415 23:56:10.478446   25488 main.go:141] libmachine: (ha-694782-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m02/id_rsa (-rw-------)
	I0415 23:56:10.478488   25488 main.go:141] libmachine: (ha-694782-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.42 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0415 23:56:10.478499   25488 main.go:141] libmachine: (ha-694782-m02) DBG | About to run SSH command:
	I0415 23:56:10.478509   25488 main.go:141] libmachine: (ha-694782-m02) DBG | exit 0
	I0415 23:56:10.609257   25488 main.go:141] libmachine: (ha-694782-m02) DBG | SSH cmd err, output: <nil>: 
	I0415 23:56:10.609562   25488 main.go:141] libmachine: (ha-694782-m02) KVM machine creation complete!
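The creation sequence above waits twice with a grow-the-wait retry: first for the domain to pick up a DHCP lease in mk-ha-694782 (intervals stretching from roughly 194ms to several seconds), then for SSH to answer a plain "exit 0" through the external ssh client. A minimal sketch of that pattern, not minikube's retry.go, with lookup standing in for whichever probe is being retried:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitFor retries a probe with a lengthening delay until it succeeds or the
// timeout expires, roughly like the "will retry after ..." lines logged above.
func waitFor(lookup func() (string, bool), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	wait := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if v, ok := lookup(); ok {
			return v, nil
		}
		time.Sleep(wait)
		if wait < 5*time.Second {
			wait = wait * 3 / 2 // stretch each retry, comparable to the logged intervals
		}
	}
	return "", errors.New("timed out waiting for machine to come up")
}

func main() {
	ip, err := waitFor(func() (string, bool) { return "", false }, 2*time.Second)
	fmt.Println(ip, err)
}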
	I0415 23:56:10.609872   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetConfigRaw
	I0415 23:56:10.610356   25488 main.go:141] libmachine: (ha-694782-m02) Calling .DriverName
	I0415 23:56:10.610558   25488 main.go:141] libmachine: (ha-694782-m02) Calling .DriverName
	I0415 23:56:10.610818   25488 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0415 23:56:10.610838   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetState
	I0415 23:56:10.612022   25488 main.go:141] libmachine: Detecting operating system of created instance...
	I0415 23:56:10.612036   25488 main.go:141] libmachine: Waiting for SSH to be available...
	I0415 23:56:10.612041   25488 main.go:141] libmachine: Getting to WaitForSSH function...
	I0415 23:56:10.612046   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHHostname
	I0415 23:56:10.614589   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:10.614941   25488 main.go:141] libmachine: (ha-694782-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:e2:c3", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:56:01 +0000 UTC Type:0 Mac:52:54:00:70:e2:c3 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ha-694782-m02 Clientid:01:52:54:00:70:e2:c3}
	I0415 23:56:10.614962   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined IP address 192.168.39.42 and MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:10.615077   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHPort
	I0415 23:56:10.615263   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHKeyPath
	I0415 23:56:10.615431   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHKeyPath
	I0415 23:56:10.615643   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHUsername
	I0415 23:56:10.615800   25488 main.go:141] libmachine: Using SSH client type: native
	I0415 23:56:10.616034   25488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.42 22 <nil> <nil>}
	I0415 23:56:10.616046   25488 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0415 23:56:10.728532   25488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0415 23:56:10.728560   25488 main.go:141] libmachine: Detecting the provisioner...
	I0415 23:56:10.728571   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHHostname
	I0415 23:56:10.731209   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:10.731545   25488 main.go:141] libmachine: (ha-694782-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:e2:c3", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:56:01 +0000 UTC Type:0 Mac:52:54:00:70:e2:c3 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ha-694782-m02 Clientid:01:52:54:00:70:e2:c3}
	I0415 23:56:10.731572   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined IP address 192.168.39.42 and MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:10.731749   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHPort
	I0415 23:56:10.731917   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHKeyPath
	I0415 23:56:10.732090   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHKeyPath
	I0415 23:56:10.732218   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHUsername
	I0415 23:56:10.732394   25488 main.go:141] libmachine: Using SSH client type: native
	I0415 23:56:10.732556   25488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.42 22 <nil> <nil>}
	I0415 23:56:10.732567   25488 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0415 23:56:10.845527   25488 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0415 23:56:10.845619   25488 main.go:141] libmachine: found compatible host: buildroot
	I0415 23:56:10.845630   25488 main.go:141] libmachine: Provisioning with buildroot...
	I0415 23:56:10.845639   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetMachineName
	I0415 23:56:10.845864   25488 buildroot.go:166] provisioning hostname "ha-694782-m02"
	I0415 23:56:10.845889   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetMachineName
	I0415 23:56:10.846065   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHHostname
	I0415 23:56:10.848602   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:10.848973   25488 main.go:141] libmachine: (ha-694782-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:e2:c3", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:56:01 +0000 UTC Type:0 Mac:52:54:00:70:e2:c3 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ha-694782-m02 Clientid:01:52:54:00:70:e2:c3}
	I0415 23:56:10.848997   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined IP address 192.168.39.42 and MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:10.849171   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHPort
	I0415 23:56:10.849337   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHKeyPath
	I0415 23:56:10.849524   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHKeyPath
	I0415 23:56:10.849661   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHUsername
	I0415 23:56:10.849812   25488 main.go:141] libmachine: Using SSH client type: native
	I0415 23:56:10.849998   25488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.42 22 <nil> <nil>}
	I0415 23:56:10.850014   25488 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-694782-m02 && echo "ha-694782-m02" | sudo tee /etc/hostname
	I0415 23:56:10.975678   25488 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-694782-m02
	
	I0415 23:56:10.975708   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHHostname
	I0415 23:56:10.978348   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:10.978637   25488 main.go:141] libmachine: (ha-694782-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:e2:c3", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:56:01 +0000 UTC Type:0 Mac:52:54:00:70:e2:c3 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ha-694782-m02 Clientid:01:52:54:00:70:e2:c3}
	I0415 23:56:10.978659   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined IP address 192.168.39.42 and MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:10.978867   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHPort
	I0415 23:56:10.979058   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHKeyPath
	I0415 23:56:10.979231   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHKeyPath
	I0415 23:56:10.979356   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHUsername
	I0415 23:56:10.979495   25488 main.go:141] libmachine: Using SSH client type: native
	I0415 23:56:10.979652   25488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.42 22 <nil> <nil>}
	I0415 23:56:10.979668   25488 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-694782-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-694782-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-694782-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0415 23:56:11.102055   25488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0415 23:56:11.102095   25488 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18647-7542/.minikube CaCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18647-7542/.minikube}
	I0415 23:56:11.102122   25488 buildroot.go:174] setting up certificates
	I0415 23:56:11.102134   25488 provision.go:84] configureAuth start
	I0415 23:56:11.102154   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetMachineName
	I0415 23:56:11.102408   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetIP
	I0415 23:56:11.104527   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:11.104897   25488 main.go:141] libmachine: (ha-694782-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:e2:c3", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:56:01 +0000 UTC Type:0 Mac:52:54:00:70:e2:c3 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ha-694782-m02 Clientid:01:52:54:00:70:e2:c3}
	I0415 23:56:11.104926   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined IP address 192.168.39.42 and MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:11.105051   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHHostname
	I0415 23:56:11.107090   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:11.107380   25488 main.go:141] libmachine: (ha-694782-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:e2:c3", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:56:01 +0000 UTC Type:0 Mac:52:54:00:70:e2:c3 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ha-694782-m02 Clientid:01:52:54:00:70:e2:c3}
	I0415 23:56:11.107410   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined IP address 192.168.39.42 and MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:11.107559   25488 provision.go:143] copyHostCerts
	I0415 23:56:11.107583   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem
	I0415 23:56:11.107620   25488 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem, removing ...
	I0415 23:56:11.107632   25488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem
	I0415 23:56:11.107720   25488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem (1082 bytes)
	I0415 23:56:11.107800   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem
	I0415 23:56:11.107828   25488 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem, removing ...
	I0415 23:56:11.107842   25488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem
	I0415 23:56:11.107871   25488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem (1123 bytes)
	I0415 23:56:11.107916   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem
	I0415 23:56:11.107932   25488 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem, removing ...
	I0415 23:56:11.107938   25488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem
	I0415 23:56:11.107958   25488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem (1675 bytes)
	I0415 23:56:11.108003   25488 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem org=jenkins.ha-694782-m02 san=[127.0.0.1 192.168.39.42 ha-694782-m02 localhost minikube]
	I0415 23:56:11.232790   25488 provision.go:177] copyRemoteCerts
	I0415 23:56:11.232852   25488 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0415 23:56:11.232878   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHHostname
	I0415 23:56:11.235484   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:11.235814   25488 main.go:141] libmachine: (ha-694782-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:e2:c3", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:56:01 +0000 UTC Type:0 Mac:52:54:00:70:e2:c3 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ha-694782-m02 Clientid:01:52:54:00:70:e2:c3}
	I0415 23:56:11.235845   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined IP address 192.168.39.42 and MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:11.236089   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHPort
	I0415 23:56:11.236280   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHKeyPath
	I0415 23:56:11.236442   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHUsername
	I0415 23:56:11.236566   25488 sshutil.go:53] new ssh client: &{IP:192.168.39.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m02/id_rsa Username:docker}
	I0415 23:56:11.323731   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0415 23:56:11.323786   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0415 23:56:11.352534   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0415 23:56:11.352600   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0415 23:56:11.378051   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0415 23:56:11.378103   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0415 23:56:11.402829   25488 provision.go:87] duration metric: took 300.678289ms to configureAuth
	I0415 23:56:11.402859   25488 buildroot.go:189] setting minikube options for container-runtime
	I0415 23:56:11.403049   25488 config.go:182] Loaded profile config "ha-694782": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0415 23:56:11.403116   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHHostname
	I0415 23:56:11.405743   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:11.406136   25488 main.go:141] libmachine: (ha-694782-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:e2:c3", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:56:01 +0000 UTC Type:0 Mac:52:54:00:70:e2:c3 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ha-694782-m02 Clientid:01:52:54:00:70:e2:c3}
	I0415 23:56:11.406155   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined IP address 192.168.39.42 and MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:11.406414   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHPort
	I0415 23:56:11.406588   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHKeyPath
	I0415 23:56:11.406756   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHKeyPath
	I0415 23:56:11.406891   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHUsername
	I0415 23:56:11.407043   25488 main.go:141] libmachine: Using SSH client type: native
	I0415 23:56:11.407236   25488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.42 22 <nil> <nil>}
	I0415 23:56:11.407257   25488 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0415 23:56:11.677645   25488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0415 23:56:11.677674   25488 main.go:141] libmachine: Checking connection to Docker...
	I0415 23:56:11.677684   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetURL
	I0415 23:56:11.678899   25488 main.go:141] libmachine: (ha-694782-m02) DBG | Using libvirt version 6000000
	I0415 23:56:11.681174   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:11.681528   25488 main.go:141] libmachine: (ha-694782-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:e2:c3", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:56:01 +0000 UTC Type:0 Mac:52:54:00:70:e2:c3 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ha-694782-m02 Clientid:01:52:54:00:70:e2:c3}
	I0415 23:56:11.681561   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined IP address 192.168.39.42 and MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:11.681718   25488 main.go:141] libmachine: Docker is up and running!
	I0415 23:56:11.681731   25488 main.go:141] libmachine: Reticulating splines...
	I0415 23:56:11.681738   25488 client.go:171] duration metric: took 24.716500263s to LocalClient.Create
	I0415 23:56:11.681758   25488 start.go:167] duration metric: took 24.716551938s to libmachine.API.Create "ha-694782"
	I0415 23:56:11.681770   25488 start.go:293] postStartSetup for "ha-694782-m02" (driver="kvm2")
	I0415 23:56:11.681783   25488 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0415 23:56:11.681817   25488 main.go:141] libmachine: (ha-694782-m02) Calling .DriverName
	I0415 23:56:11.682041   25488 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0415 23:56:11.682063   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHHostname
	I0415 23:56:11.684101   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:11.684399   25488 main.go:141] libmachine: (ha-694782-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:e2:c3", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:56:01 +0000 UTC Type:0 Mac:52:54:00:70:e2:c3 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ha-694782-m02 Clientid:01:52:54:00:70:e2:c3}
	I0415 23:56:11.684429   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined IP address 192.168.39.42 and MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:11.684525   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHPort
	I0415 23:56:11.684707   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHKeyPath
	I0415 23:56:11.684885   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHUsername
	I0415 23:56:11.685039   25488 sshutil.go:53] new ssh client: &{IP:192.168.39.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m02/id_rsa Username:docker}
	I0415 23:56:11.771783   25488 ssh_runner.go:195] Run: cat /etc/os-release
	I0415 23:56:11.776091   25488 info.go:137] Remote host: Buildroot 2023.02.9
	I0415 23:56:11.776115   25488 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/addons for local assets ...
	I0415 23:56:11.776185   25488 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/files for local assets ...
	I0415 23:56:11.776252   25488 filesync.go:149] local asset: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem -> 148972.pem in /etc/ssl/certs
	I0415 23:56:11.776262   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem -> /etc/ssl/certs/148972.pem
	I0415 23:56:11.776340   25488 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0415 23:56:11.785585   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /etc/ssl/certs/148972.pem (1708 bytes)
	I0415 23:56:11.809617   25488 start.go:296] duration metric: took 127.83471ms for postStartSetup
	I0415 23:56:11.809670   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetConfigRaw
	I0415 23:56:11.810165   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetIP
	I0415 23:56:11.812618   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:11.813005   25488 main.go:141] libmachine: (ha-694782-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:e2:c3", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:56:01 +0000 UTC Type:0 Mac:52:54:00:70:e2:c3 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ha-694782-m02 Clientid:01:52:54:00:70:e2:c3}
	I0415 23:56:11.813033   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined IP address 192.168.39.42 and MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:11.813279   25488 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/config.json ...
	I0415 23:56:11.813453   25488 start.go:128] duration metric: took 24.866488081s to createHost
	I0415 23:56:11.813475   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHHostname
	I0415 23:56:11.815844   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:11.816169   25488 main.go:141] libmachine: (ha-694782-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:e2:c3", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:56:01 +0000 UTC Type:0 Mac:52:54:00:70:e2:c3 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ha-694782-m02 Clientid:01:52:54:00:70:e2:c3}
	I0415 23:56:11.816189   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined IP address 192.168.39.42 and MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:11.816311   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHPort
	I0415 23:56:11.816472   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHKeyPath
	I0415 23:56:11.816606   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHKeyPath
	I0415 23:56:11.816743   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHUsername
	I0415 23:56:11.816901   25488 main.go:141] libmachine: Using SSH client type: native
	I0415 23:56:11.817051   25488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.42 22 <nil> <nil>}
	I0415 23:56:11.817061   25488 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0415 23:56:11.929852   25488 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713225371.905156578
	
	I0415 23:56:11.929877   25488 fix.go:216] guest clock: 1713225371.905156578
	I0415 23:56:11.929884   25488 fix.go:229] Guest: 2024-04-15 23:56:11.905156578 +0000 UTC Remote: 2024-04-15 23:56:11.813463577 +0000 UTC m=+81.253097902 (delta=91.693001ms)
	I0415 23:56:11.929898   25488 fix.go:200] guest clock delta is within tolerance: 91.693001ms
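The delta reported here is simply guest wall clock minus host wall clock at the moment of the probe: 23:56:11.905156578 minus 23:56:11.813463577 is about 0.091693 s, i.e. the 91.693001ms logged, which falls inside the driver's tolerance, so no clock adjustment is attempted.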
	I0415 23:56:11.929904   25488 start.go:83] releasing machines lock for "ha-694782-m02", held for 24.983068056s
	I0415 23:56:11.929922   25488 main.go:141] libmachine: (ha-694782-m02) Calling .DriverName
	I0415 23:56:11.930199   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetIP
	I0415 23:56:11.932528   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:11.932893   25488 main.go:141] libmachine: (ha-694782-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:e2:c3", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:56:01 +0000 UTC Type:0 Mac:52:54:00:70:e2:c3 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ha-694782-m02 Clientid:01:52:54:00:70:e2:c3}
	I0415 23:56:11.932923   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined IP address 192.168.39.42 and MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:11.935194   25488 out.go:177] * Found network options:
	I0415 23:56:11.936606   25488 out.go:177]   - NO_PROXY=192.168.39.41
	W0415 23:56:11.938073   25488 proxy.go:119] fail to check proxy env: Error ip not in block
	I0415 23:56:11.938113   25488 main.go:141] libmachine: (ha-694782-m02) Calling .DriverName
	I0415 23:56:11.938600   25488 main.go:141] libmachine: (ha-694782-m02) Calling .DriverName
	I0415 23:56:11.938786   25488 main.go:141] libmachine: (ha-694782-m02) Calling .DriverName
	I0415 23:56:11.938874   25488 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0415 23:56:11.938913   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHHostname
	W0415 23:56:11.938979   25488 proxy.go:119] fail to check proxy env: Error ip not in block
	I0415 23:56:11.939050   25488 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0415 23:56:11.939070   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHHostname
	I0415 23:56:11.941656   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:11.941894   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:11.942069   25488 main.go:141] libmachine: (ha-694782-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:e2:c3", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:56:01 +0000 UTC Type:0 Mac:52:54:00:70:e2:c3 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ha-694782-m02 Clientid:01:52:54:00:70:e2:c3}
	I0415 23:56:11.942095   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined IP address 192.168.39.42 and MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:11.942208   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHPort
	I0415 23:56:11.942270   25488 main.go:141] libmachine: (ha-694782-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:e2:c3", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:56:01 +0000 UTC Type:0 Mac:52:54:00:70:e2:c3 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ha-694782-m02 Clientid:01:52:54:00:70:e2:c3}
	I0415 23:56:11.942308   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined IP address 192.168.39.42 and MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:11.942338   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHKeyPath
	I0415 23:56:11.942401   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHPort
	I0415 23:56:11.942501   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHUsername
	I0415 23:56:11.942527   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHKeyPath
	I0415 23:56:11.942636   25488 sshutil.go:53] new ssh client: &{IP:192.168.39.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m02/id_rsa Username:docker}
	I0415 23:56:11.942653   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHUsername
	I0415 23:56:11.942776   25488 sshutil.go:53] new ssh client: &{IP:192.168.39.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m02/id_rsa Username:docker}
	I0415 23:56:12.180601   25488 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0415 23:56:12.186658   25488 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0415 23:56:12.186723   25488 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0415 23:56:12.202688   25488 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0415 23:56:12.202712   25488 start.go:494] detecting cgroup driver to use...
	I0415 23:56:12.202777   25488 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0415 23:56:12.218887   25488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 23:56:12.231989   25488 docker.go:217] disabling cri-docker service (if available) ...
	I0415 23:56:12.232046   25488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0415 23:56:12.244782   25488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0415 23:56:12.257890   25488 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0415 23:56:12.369621   25488 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0415 23:56:12.507488   25488 docker.go:233] disabling docker service ...
	I0415 23:56:12.507550   25488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0415 23:56:12.522595   25488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0415 23:56:12.535067   25488 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0415 23:56:12.676201   25488 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0415 23:56:12.791814   25488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0415 23:56:12.805759   25488 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 23:56:12.823846   25488 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0415 23:56:12.823906   25488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0415 23:56:12.833736   25488 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0415 23:56:12.833789   25488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0415 23:56:12.843597   25488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0415 23:56:12.853281   25488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0415 23:56:12.863034   25488 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0415 23:56:12.873083   25488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0415 23:56:12.883237   25488 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0415 23:56:12.902220   25488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
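Taken together, the sed edits above leave the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf pinning pause_image to "registry.k8s.io/pause:3.9", setting cgroup_manager = "cgroupfs" with conmon_cgroup = "pod", and adding "net.ipv4.ip_unprivileged_port_start=0" to default_sysctls; crio is restarted a few steps further below so the changes take effect.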
	I0415 23:56:12.912388   25488 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0415 23:56:12.921104   25488 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0415 23:56:12.921140   25488 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0415 23:56:12.933837   25488 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0415 23:56:12.942715   25488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 23:56:13.056576   25488 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0415 23:56:13.200204   25488 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0415 23:56:13.200283   25488 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0415 23:56:13.205172   25488 start.go:562] Will wait 60s for crictl version
	I0415 23:56:13.205245   25488 ssh_runner.go:195] Run: which crictl
	I0415 23:56:13.208916   25488 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0415 23:56:13.244868   25488 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0415 23:56:13.244951   25488 ssh_runner.go:195] Run: crio --version
	I0415 23:56:13.273244   25488 ssh_runner.go:195] Run: crio --version
	I0415 23:56:13.303556   25488 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0415 23:56:13.304864   25488 out.go:177]   - env NO_PROXY=192.168.39.41
	I0415 23:56:13.305992   25488 main.go:141] libmachine: (ha-694782-m02) Calling .GetIP
	I0415 23:56:13.308329   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:13.308655   25488 main.go:141] libmachine: (ha-694782-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:e2:c3", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:56:01 +0000 UTC Type:0 Mac:52:54:00:70:e2:c3 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ha-694782-m02 Clientid:01:52:54:00:70:e2:c3}
	I0415 23:56:13.308683   25488 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined IP address 192.168.39.42 and MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0415 23:56:13.308917   25488 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0415 23:56:13.312854   25488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0415 23:56:13.325299   25488 mustload.go:65] Loading cluster: ha-694782
	I0415 23:56:13.325510   25488 config.go:182] Loaded profile config "ha-694782": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0415 23:56:13.325778   25488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:56:13.325811   25488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:56:13.339936   25488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32957
	I0415 23:56:13.340293   25488 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:56:13.340727   25488 main.go:141] libmachine: Using API Version  1
	I0415 23:56:13.340750   25488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:56:13.341110   25488 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:56:13.341336   25488 main.go:141] libmachine: (ha-694782) Calling .GetState
	I0415 23:56:13.342734   25488 host.go:66] Checking if "ha-694782" exists ...
	I0415 23:56:13.342992   25488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:56:13.343012   25488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:56:13.357709   25488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35187
	I0415 23:56:13.358056   25488 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:56:13.358465   25488 main.go:141] libmachine: Using API Version  1
	I0415 23:56:13.358489   25488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:56:13.358747   25488 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:56:13.358941   25488 main.go:141] libmachine: (ha-694782) Calling .DriverName
	I0415 23:56:13.359083   25488 certs.go:68] Setting up /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782 for IP: 192.168.39.42
	I0415 23:56:13.359096   25488 certs.go:194] generating shared ca certs ...
	I0415 23:56:13.359113   25488 certs.go:226] acquiring lock for ca certs: {Name:mkcfa1570e683d94647c63485e1bbb8cf0788316 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:56:13.359349   25488 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key
	I0415 23:56:13.359407   25488 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key
	I0415 23:56:13.359424   25488 certs.go:256] generating profile certs ...
	I0415 23:56:13.359515   25488 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/client.key
	I0415 23:56:13.359547   25488 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.key.eac2dea4
	I0415 23:56:13.359567   25488 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.crt.eac2dea4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.41 192.168.39.42 192.168.39.254]
	I0415 23:56:13.671903   25488 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.crt.eac2dea4 ...
	I0415 23:56:13.671935   25488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.crt.eac2dea4: {Name:mkb8f3772d37649eb83259789cddf0c58e9658b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:56:13.672147   25488 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.key.eac2dea4 ...
	I0415 23:56:13.672165   25488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.key.eac2dea4: {Name:mk7c945bc98ba7b6cb8f65afcf41b8988e1e2ddd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:56:13.672269   25488 certs.go:381] copying /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.crt.eac2dea4 -> /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.crt
	I0415 23:56:13.672433   25488 certs.go:385] copying /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.key.eac2dea4 -> /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.key
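The apiserver certificate generated above carries only IP SANs (10.96.0.1, 127.0.0.1, 10.0.0.1, both node IPs 192.168.39.41 and 192.168.39.42, and 192.168.39.254). As a sketch only, not minikube's certs.go and with placeholder subject fields, a certificate carrying those IP SANs can be produced with crypto/x509:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

func main() {
	// Subject fields are placeholders; only the IP SANs mirror the logged request.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.41"), net.ParseIP("192.168.39.42"), net.ParseIP("192.168.39.254"),
		},
	}
	// Self-signed here for brevity; the real certificate is signed by the cluster CA key.
	if _, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key); err != nil {
		panic(err)
	}
}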
	I0415 23:56:13.672601   25488 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/proxy-client.key
	I0415 23:56:13.672621   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0415 23:56:13.672638   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0415 23:56:13.672657   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0415 23:56:13.672675   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0415 23:56:13.672692   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0415 23:56:13.672706   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0415 23:56:13.672722   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0415 23:56:13.672741   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0415 23:56:13.672802   25488 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem (1338 bytes)
	W0415 23:56:13.672838   25488 certs.go:480] ignoring /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897_empty.pem, impossibly tiny 0 bytes
	I0415 23:56:13.672851   25488 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem (1679 bytes)
	I0415 23:56:13.672882   25488 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem (1082 bytes)
	I0415 23:56:13.672911   25488 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem (1123 bytes)
	I0415 23:56:13.672947   25488 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem (1675 bytes)
	I0415 23:56:13.673000   25488 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem (1708 bytes)
	I0415 23:56:13.673042   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0415 23:56:13.673065   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem -> /usr/share/ca-certificates/14897.pem
	I0415 23:56:13.673083   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem -> /usr/share/ca-certificates/148972.pem
	I0415 23:56:13.673119   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0415 23:56:13.675829   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:56:13.676188   25488 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0415 23:56:13.676204   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:56:13.676351   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0415 23:56:13.676547   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0415 23:56:13.676726   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0415 23:56:13.676855   25488 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782/id_rsa Username:docker}
	I0415 23:56:13.753542   25488 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0415 23:56:13.758063   25488 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0415 23:56:13.770208   25488 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0415 23:56:13.775034   25488 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0415 23:56:13.786570   25488 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0415 23:56:13.790842   25488 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0415 23:56:13.803690   25488 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0415 23:56:13.813487   25488 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0415 23:56:13.826958   25488 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0415 23:56:13.831593   25488 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0415 23:56:13.844171   25488 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0415 23:56:13.848526   25488 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0415 23:56:13.859260   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0415 23:56:13.885184   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0415 23:56:13.910149   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0415 23:56:13.935698   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0415 23:56:13.960891   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0415 23:56:13.986032   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0415 23:56:14.010622   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0415 23:56:14.035423   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0415 23:56:14.059988   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0415 23:56:14.084544   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem --> /usr/share/ca-certificates/14897.pem (1338 bytes)
	I0415 23:56:14.109094   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /usr/share/ca-certificates/148972.pem (1708 bytes)
	I0415 23:56:14.134203   25488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0415 23:56:14.150413   25488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0415 23:56:14.166922   25488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0415 23:56:14.183022   25488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0415 23:56:14.199437   25488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0415 23:56:14.215963   25488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0415 23:56:14.232699   25488 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (758 bytes)
	I0415 23:56:14.249924   25488 ssh_runner.go:195] Run: openssl version
	I0415 23:56:14.255645   25488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0415 23:56:14.266461   25488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0415 23:56:14.270932   25488 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0415 23:56:14.270975   25488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0415 23:56:14.276546   25488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0415 23:56:14.287148   25488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14897.pem && ln -fs /usr/share/ca-certificates/14897.pem /etc/ssl/certs/14897.pem"
	I0415 23:56:14.297863   25488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14897.pem
	I0415 23:56:14.302551   25488 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 23:49 /usr/share/ca-certificates/14897.pem
	I0415 23:56:14.302605   25488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14897.pem
	I0415 23:56:14.308677   25488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14897.pem /etc/ssl/certs/51391683.0"
	I0415 23:56:14.319390   25488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148972.pem && ln -fs /usr/share/ca-certificates/148972.pem /etc/ssl/certs/148972.pem"
	I0415 23:56:14.330840   25488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148972.pem
	I0415 23:56:14.335266   25488 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 23:49 /usr/share/ca-certificates/148972.pem
	I0415 23:56:14.335315   25488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148972.pem
	I0415 23:56:14.340870   25488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148972.pem /etc/ssl/certs/3ec20f2e.0"
	I0415 23:56:14.351201   25488 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0415 23:56:14.355232   25488 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0415 23:56:14.355283   25488 kubeadm.go:928] updating node {m02 192.168.39.42 8443 v1.29.3 crio true true} ...
	I0415 23:56:14.355373   25488 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-694782-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.42
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-694782 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0415 23:56:14.355397   25488 kube-vip.go:111] generating kube-vip config ...
	I0415 23:56:14.355424   25488 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0415 23:56:14.371160   25488 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0415 23:56:14.371222   25488 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
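	(Illustrative aside, not part of the test log: the kube-vip static-pod manifest generated above is copied to /etc/kubernetes/manifests/kube-vip.yaml later in this run. A minimal Go sketch, assuming the sigs.k8s.io/yaml and k8s.io/api modules rather than anything in minikube itself, that parses such a manifest and reports the configured control-plane VIP could look like this.)

	package main

	import (
		"fmt"
		"os"

		corev1 "k8s.io/api/core/v1"
		"sigs.k8s.io/yaml"
	)

	func main() {
		// Path matches where the log below copies the generated manifest.
		data, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
		if err != nil {
			panic(err)
		}
		var pod corev1.Pod
		if err := yaml.Unmarshal(data, &pod); err != nil {
			panic(err)
		}
		// kube-vip takes its settings from env vars; "address" holds the VIP.
		for _, c := range pod.Spec.Containers {
			for _, e := range c.Env {
				if e.Name == "address" {
					fmt.Printf("control-plane VIP in container %q: %s\n", c.Name, e.Value)
				}
			}
		}
	}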
	I0415 23:56:14.371312   25488 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0415 23:56:14.381129   25488 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.29.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.29.3': No such file or directory
	
	Initiating transfer...
	I0415 23:56:14.381198   25488 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.29.3
	I0415 23:56:14.390899   25488 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl.sha256
	I0415 23:56:14.390923   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/linux/amd64/v1.29.3/kubectl -> /var/lib/minikube/binaries/v1.29.3/kubectl
	I0415 23:56:14.390977   25488 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18647-7542/.minikube/cache/linux/amd64/v1.29.3/kubelet
	I0415 23:56:14.390997   25488 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl
	I0415 23:56:14.390977   25488 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18647-7542/.minikube/cache/linux/amd64/v1.29.3/kubeadm
	I0415 23:56:14.395687   25488 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubectl': No such file or directory
	I0415 23:56:14.395712   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/cache/linux/amd64/v1.29.3/kubectl --> /var/lib/minikube/binaries/v1.29.3/kubectl (49799168 bytes)
	I0415 23:56:15.646651   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/linux/amd64/v1.29.3/kubeadm -> /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0415 23:56:15.646748   25488 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0415 23:56:15.651802   25488 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubeadm': No such file or directory
	I0415 23:56:15.651829   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/cache/linux/amd64/v1.29.3/kubeadm --> /var/lib/minikube/binaries/v1.29.3/kubeadm (48340992 bytes)
	I0415 23:56:24.666254   25488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 23:56:24.680857   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/linux/amd64/v1.29.3/kubelet -> /var/lib/minikube/binaries/v1.29.3/kubelet
	I0415 23:56:24.680952   25488 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet
	I0415 23:56:24.685370   25488 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubelet': No such file or directory
	I0415 23:56:24.685394   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/cache/linux/amd64/v1.29.3/kubelet --> /var/lib/minikube/binaries/v1.29.3/kubelet (111919104 bytes)
	I0415 23:56:25.131433   25488 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0415 23:56:25.142455   25488 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0415 23:56:25.160036   25488 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0415 23:56:25.176926   25488 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0415 23:56:25.193554   25488 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0415 23:56:25.197376   25488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0415 23:56:25.209283   25488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 23:56:25.323085   25488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0415 23:56:25.340836   25488 host.go:66] Checking if "ha-694782" exists ...
	I0415 23:56:25.341298   25488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:56:25.341336   25488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:56:25.355603   25488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42315
	I0415 23:56:25.356125   25488 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:56:25.356572   25488 main.go:141] libmachine: Using API Version  1
	I0415 23:56:25.356596   25488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:56:25.356889   25488 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:56:25.357063   25488 main.go:141] libmachine: (ha-694782) Calling .DriverName
	I0415 23:56:25.357198   25488 start.go:316] joinCluster: &{Name:ha-694782 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-694782 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.42 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 23:56:25.357329   25488 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0415 23:56:25.357348   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0415 23:56:25.360205   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:56:25.360569   25488 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0415 23:56:25.360595   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:56:25.360771   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0415 23:56:25.360943   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0415 23:56:25.361112   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0415 23:56:25.361286   25488 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782/id_rsa Username:docker}
	I0415 23:56:25.510038   25488 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.42 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0415 23:56:25.510089   25488 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token gw4duo.gci5l7kerx1vz1u3 --discovery-token-ca-cert-hash sha256:b25013486716312e69a135916990864bd549ec06035b8322dd39250241b50fde --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-694782-m02 --control-plane --apiserver-advertise-address=192.168.39.42 --apiserver-bind-port=8443"
	I0415 23:56:49.334916   25488 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token gw4duo.gci5l7kerx1vz1u3 --discovery-token-ca-cert-hash sha256:b25013486716312e69a135916990864bd549ec06035b8322dd39250241b50fde --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-694782-m02 --control-plane --apiserver-advertise-address=192.168.39.42 --apiserver-bind-port=8443": (23.824798819s)
	I0415 23:56:49.334953   25488 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0415 23:56:49.773817   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-694782-m02 minikube.k8s.io/updated_at=2024_04_15T23_56_49_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=e98a202b00bb48daab8bbee2f0c7bcf1556f8388 minikube.k8s.io/name=ha-694782 minikube.k8s.io/primary=false
	I0415 23:56:49.900348   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-694782-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0415 23:56:50.025054   25488 start.go:318] duration metric: took 24.667851652s to joinCluster
	I0415 23:56:50.025180   25488 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.42 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0415 23:56:50.027041   25488 out.go:177] * Verifying Kubernetes components...
	I0415 23:56:50.025434   25488 config.go:182] Loaded profile config "ha-694782": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0415 23:56:50.028608   25488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 23:56:50.240439   25488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0415 23:56:50.265586   25488 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0415 23:56:50.265924   25488 kapi.go:59] client config for ha-694782: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/client.crt", KeyFile:"/home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/client.key", CAFile:"/home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5e000), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0415 23:56:50.266006   25488 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.41:8443
	I0415 23:56:50.266237   25488 node_ready.go:35] waiting up to 6m0s for node "ha-694782-m02" to be "Ready" ...
	I0415 23:56:50.266313   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:56:50.266321   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:50.266336   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:50.266342   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:50.275188   25488 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0415 23:56:50.766607   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:56:50.766627   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:50.766638   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:50.766646   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:50.769889   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:56:51.266758   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:56:51.266785   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:51.266798   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:51.266804   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:51.270007   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:56:51.767172   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:56:51.767193   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:51.767199   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:51.767203   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:51.770728   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:56:52.267388   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:56:52.267407   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:52.267414   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:52.267419   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:52.404598   25488 round_trippers.go:574] Response Status: 200 OK in 137 milliseconds
	I0415 23:56:52.405480   25488 node_ready.go:53] node "ha-694782-m02" has status "Ready":"False"
	I0415 23:56:52.766528   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:56:52.766547   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:52.766556   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:52.766561   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:52.770145   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:56:53.266434   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:56:53.266455   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:53.266462   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:53.266466   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:53.269830   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:56:53.766617   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:56:53.766643   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:53.766653   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:53.766660   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:53.770833   25488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 23:56:54.267159   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:56:54.267184   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:54.267196   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:54.267202   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:54.270814   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:56:54.766701   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:56:54.766723   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:54.766731   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:54.766734   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:54.770173   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:56:54.771010   25488 node_ready.go:53] node "ha-694782-m02" has status "Ready":"False"
	I0415 23:56:55.267469   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:56:55.267494   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:55.267503   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:55.267508   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:55.274024   25488 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0415 23:56:55.767460   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:56:55.767481   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:55.767489   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:55.767494   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:55.771175   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:56:55.772261   25488 node_ready.go:49] node "ha-694782-m02" has status "Ready":"True"
	I0415 23:56:55.772284   25488 node_ready.go:38] duration metric: took 5.506020993s for node "ha-694782-m02" to be "Ready" ...
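	(Illustrative aside, not part of the test log: the node_ready wait above repeatedly GETs /api/v1/nodes/ha-694782-m02 until the node reports the Ready condition. A minimal sketch of that polling pattern, assuming k8s.io/client-go and k8s.io/apimachinery rather than minikube's own round-tripper code, is shown below.)

	package readiness

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// WaitNodeReady polls the API server until the named node reports the
	// Ready condition with status True, or the timeout expires.
	func WaitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // tolerate transient API errors and keep polling
				}
				for _, cond := range node.Status.Conditions {
					if cond.Type == corev1.NodeReady {
						return cond.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}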
	I0415 23:56:55.772295   25488 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0415 23:56:55.772371   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods
	I0415 23:56:55.772379   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:55.772392   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:55.772398   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:55.776979   25488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 23:56:55.784234   25488 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-4sgv4" in "kube-system" namespace to be "Ready" ...
	I0415 23:56:55.784296   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-4sgv4
	I0415 23:56:55.784304   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:55.784311   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:55.784315   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:55.787768   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:56:55.788385   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782
	I0415 23:56:55.788404   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:55.788411   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:55.788414   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:55.791258   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:56:55.792423   25488 pod_ready.go:92] pod "coredns-76f75df574-4sgv4" in "kube-system" namespace has status "Ready":"True"
	I0415 23:56:55.792438   25488 pod_ready.go:81] duration metric: took 8.183667ms for pod "coredns-76f75df574-4sgv4" in "kube-system" namespace to be "Ready" ...
	I0415 23:56:55.792445   25488 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-zdc8q" in "kube-system" namespace to be "Ready" ...
	I0415 23:56:55.792482   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-zdc8q
	I0415 23:56:55.792490   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:55.792496   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:55.792501   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:55.796007   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:56:55.797171   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782
	I0415 23:56:55.797188   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:55.797198   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:55.797203   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:55.799496   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:56:55.800318   25488 pod_ready.go:92] pod "coredns-76f75df574-zdc8q" in "kube-system" namespace has status "Ready":"True"
	I0415 23:56:55.800332   25488 pod_ready.go:81] duration metric: took 7.88168ms for pod "coredns-76f75df574-zdc8q" in "kube-system" namespace to be "Ready" ...
	I0415 23:56:55.800339   25488 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-694782" in "kube-system" namespace to be "Ready" ...
	I0415 23:56:55.800377   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782
	I0415 23:56:55.800385   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:55.800396   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:55.800402   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:55.805420   25488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 23:56:55.806408   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782
	I0415 23:56:55.806425   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:55.806433   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:55.806440   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:55.808909   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:56:55.810095   25488 pod_ready.go:92] pod "etcd-ha-694782" in "kube-system" namespace has status "Ready":"True"
	I0415 23:56:55.810112   25488 pod_ready.go:81] duration metric: took 9.767749ms for pod "etcd-ha-694782" in "kube-system" namespace to be "Ready" ...
	I0415 23:56:55.810120   25488 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-694782-m02" in "kube-system" namespace to be "Ready" ...
	I0415 23:56:55.810156   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m02
	I0415 23:56:55.810164   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:55.810170   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:55.810173   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:55.813832   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:56:55.815146   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:56:55.815160   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:55.815167   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:55.815170   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:55.817272   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:56:56.310486   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m02
	I0415 23:56:56.310514   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:56.310524   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:56.310527   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:56.315237   25488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 23:56:56.316129   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:56:56.316145   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:56.316154   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:56.316158   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:56.319007   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:56:56.810976   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m02
	I0415 23:56:56.810999   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:56.811007   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:56.811010   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:56.814948   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:56:56.815535   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:56:56.815550   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:56.815558   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:56.815562   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:56.818582   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:56:57.311301   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m02
	I0415 23:56:57.311320   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:57.311327   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:57.311331   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:57.315014   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:56:57.315753   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:56:57.315771   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:57.315781   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:57.315787   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:57.318516   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:56:57.810461   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m02
	I0415 23:56:57.810483   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:57.810492   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:57.810496   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:57.814346   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:56:57.815290   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:56:57.815304   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:57.815311   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:57.815315   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:57.818258   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:56:57.818866   25488 pod_ready.go:102] pod "etcd-ha-694782-m02" in "kube-system" namespace has status "Ready":"False"
	I0415 23:56:58.310239   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m02
	I0415 23:56:58.310280   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:58.310294   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:58.310299   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:58.313871   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:56:58.314647   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:56:58.314661   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:58.314669   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:58.314672   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:58.317213   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:56:58.810971   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m02
	I0415 23:56:58.810993   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:58.811001   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:58.811006   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:58.815019   25488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 23:56:58.815983   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:56:58.815997   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:58.816004   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:58.816007   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:58.819401   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:56:59.310354   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m02
	I0415 23:56:59.310382   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:59.310390   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:59.310394   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:59.313960   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:56:59.314991   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:56:59.315005   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:59.315012   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:59.315016   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:59.317744   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:56:59.810723   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m02
	I0415 23:56:59.810745   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:59.810752   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:59.810757   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:59.814166   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:56:59.815227   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:56:59.815243   25488 round_trippers.go:469] Request Headers:
	I0415 23:56:59.815251   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:56:59.815255   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:56:59.817956   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:57:00.311274   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m02
	I0415 23:57:00.311300   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:00.311306   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:00.311315   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:00.315073   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:57:00.316117   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:57:00.316133   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:00.316140   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:00.316145   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:00.319647   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:57:00.320258   25488 pod_ready.go:92] pod "etcd-ha-694782-m02" in "kube-system" namespace has status "Ready":"True"
	I0415 23:57:00.320281   25488 pod_ready.go:81] duration metric: took 4.510154612s for pod "etcd-ha-694782-m02" in "kube-system" namespace to be "Ready" ...
	I0415 23:57:00.320296   25488 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-694782" in "kube-system" namespace to be "Ready" ...
	I0415 23:57:00.320340   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-694782
	I0415 23:57:00.320349   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:00.320355   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:00.320358   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:00.324311   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:57:00.325123   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782
	I0415 23:57:00.325137   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:00.325144   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:00.325148   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:00.327966   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:57:00.329201   25488 pod_ready.go:92] pod "kube-apiserver-ha-694782" in "kube-system" namespace has status "Ready":"True"
	I0415 23:57:00.329220   25488 pod_ready.go:81] duration metric: took 8.917684ms for pod "kube-apiserver-ha-694782" in "kube-system" namespace to be "Ready" ...
	I0415 23:57:00.329228   25488 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-694782-m02" in "kube-system" namespace to be "Ready" ...
	I0415 23:57:00.329287   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-694782-m02
	I0415 23:57:00.329294   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:00.329301   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:00.329307   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:00.332295   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:57:00.333015   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:57:00.333033   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:00.333043   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:00.333047   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:00.335204   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:57:00.829381   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-694782-m02
	I0415 23:57:00.829404   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:00.829414   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:00.829421   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:00.832994   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:57:00.833835   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:57:00.833849   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:00.833856   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:00.833860   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:00.836706   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:57:01.329499   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-694782-m02
	I0415 23:57:01.329527   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:01.329537   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:01.329545   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:01.334044   25488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 23:57:01.334877   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:57:01.334890   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:01.334896   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:01.334900   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:01.340333   25488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 23:57:01.830312   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-694782-m02
	I0415 23:57:01.830338   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:01.830351   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:01.830356   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:01.833865   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:57:01.834669   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:57:01.834685   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:01.834696   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:01.834701   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:01.837410   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:57:02.330058   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-694782-m02
	I0415 23:57:02.330077   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:02.330085   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:02.330090   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:02.334946   25488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 23:57:02.335738   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:57:02.335754   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:02.335761   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:02.335765   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:02.338366   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:57:02.338908   25488 pod_ready.go:102] pod "kube-apiserver-ha-694782-m02" in "kube-system" namespace has status "Ready":"False"
	I0415 23:57:02.830355   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-694782-m02
	I0415 23:57:02.830374   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:02.830383   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:02.830392   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:02.834192   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:57:02.835310   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:57:02.835335   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:02.835347   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:02.835352   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:02.838434   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:57:03.330373   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-694782-m02
	I0415 23:57:03.330395   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:03.330405   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:03.330410   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:03.333645   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:57:03.334629   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:57:03.334646   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:03.334652   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:03.334657   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:03.337037   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:57:03.830060   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-694782-m02
	I0415 23:57:03.830083   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:03.830091   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:03.830097   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:03.833490   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:57:03.834457   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:57:03.834470   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:03.834478   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:03.834481   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:03.836956   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:57:04.330399   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-694782-m02
	I0415 23:57:04.330419   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:04.330426   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:04.330429   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:04.333977   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:57:04.334648   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:57:04.334661   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:04.334669   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:04.334675   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:04.337276   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:57:04.338013   25488 pod_ready.go:92] pod "kube-apiserver-ha-694782-m02" in "kube-system" namespace has status "Ready":"True"
	I0415 23:57:04.338033   25488 pod_ready.go:81] duration metric: took 4.008796828s for pod "kube-apiserver-ha-694782-m02" in "kube-system" namespace to be "Ready" ...
	I0415 23:57:04.338042   25488 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-694782" in "kube-system" namespace to be "Ready" ...
	I0415 23:57:04.338106   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-694782
	I0415 23:57:04.338119   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:04.338130   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:04.338135   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:04.340481   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:57:04.341175   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782
	I0415 23:57:04.341189   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:04.341198   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:04.341202   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:04.343938   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:57:04.344574   25488 pod_ready.go:92] pod "kube-controller-manager-ha-694782" in "kube-system" namespace has status "Ready":"True"
	I0415 23:57:04.344592   25488 pod_ready.go:81] duration metric: took 6.545072ms for pod "kube-controller-manager-ha-694782" in "kube-system" namespace to be "Ready" ...
	I0415 23:57:04.344603   25488 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-694782-m02" in "kube-system" namespace to be "Ready" ...
	I0415 23:57:04.344660   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-694782-m02
	I0415 23:57:04.344671   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:04.344678   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:04.344682   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:04.347285   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:57:04.348047   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:57:04.348061   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:04.348067   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:04.348072   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:04.350324   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:57:04.350742   25488 pod_ready.go:92] pod "kube-controller-manager-ha-694782-m02" in "kube-system" namespace has status "Ready":"True"
	I0415 23:57:04.350770   25488 pod_ready.go:81] duration metric: took 6.15785ms for pod "kube-controller-manager-ha-694782-m02" in "kube-system" namespace to be "Ready" ...
	I0415 23:57:04.350782   25488 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-d46v5" in "kube-system" namespace to be "Ready" ...
	I0415 23:57:04.368041   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d46v5
	I0415 23:57:04.368053   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:04.368065   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:04.368086   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:04.370682   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:57:04.567535   25488 request.go:629] Waited for 196.288946ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/nodes/ha-694782
	I0415 23:57:04.567588   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782
	I0415 23:57:04.567595   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:04.567610   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:04.567616   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:04.571876   25488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 23:57:04.572308   25488 pod_ready.go:92] pod "kube-proxy-d46v5" in "kube-system" namespace has status "Ready":"True"
	I0415 23:57:04.572327   25488 pod_ready.go:81] duration metric: took 221.53309ms for pod "kube-proxy-d46v5" in "kube-system" namespace to be "Ready" ...
	I0415 23:57:04.572339   25488 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vbfhn" in "kube-system" namespace to be "Ready" ...
	I0415 23:57:04.767697   25488 request.go:629] Waited for 195.299186ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vbfhn
	I0415 23:57:04.767760   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vbfhn
	I0415 23:57:04.767775   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:04.767785   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:04.767790   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:04.771226   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:57:04.967957   25488 request.go:629] Waited for 196.1342ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:57:04.968006   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:57:04.968019   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:04.968037   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:04.968044   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:04.970570   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:57:04.971464   25488 pod_ready.go:92] pod "kube-proxy-vbfhn" in "kube-system" namespace has status "Ready":"True"
	I0415 23:57:04.971485   25488 pod_ready.go:81] duration metric: took 399.134854ms for pod "kube-proxy-vbfhn" in "kube-system" namespace to be "Ready" ...
	I0415 23:57:04.971499   25488 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-694782" in "kube-system" namespace to be "Ready" ...
	I0415 23:57:05.167564   25488 request.go:629] Waited for 195.977611ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-694782
	I0415 23:57:05.167612   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-694782
	I0415 23:57:05.167617   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:05.167624   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:05.167627   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:05.171033   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:57:05.368180   25488 request.go:629] Waited for 196.342051ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/nodes/ha-694782
	I0415 23:57:05.368258   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782
	I0415 23:57:05.368269   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:05.368279   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:05.368288   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:05.372149   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:57:05.372759   25488 pod_ready.go:92] pod "kube-scheduler-ha-694782" in "kube-system" namespace has status "Ready":"True"
	I0415 23:57:05.372778   25488 pod_ready.go:81] duration metric: took 401.26753ms for pod "kube-scheduler-ha-694782" in "kube-system" namespace to be "Ready" ...
	I0415 23:57:05.372790   25488 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-694782-m02" in "kube-system" namespace to be "Ready" ...
	I0415 23:57:05.567828   25488 request.go:629] Waited for 194.975559ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-694782-m02
	I0415 23:57:05.567877   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-694782-m02
	I0415 23:57:05.567881   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:05.567893   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:05.567897   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:05.571055   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:57:05.768261   25488 request.go:629] Waited for 196.578908ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:57:05.768307   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:57:05.768312   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:05.768319   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:05.768324   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:05.771650   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:57:05.772382   25488 pod_ready.go:92] pod "kube-scheduler-ha-694782-m02" in "kube-system" namespace has status "Ready":"True"
	I0415 23:57:05.772401   25488 pod_ready.go:81] duration metric: took 399.603988ms for pod "kube-scheduler-ha-694782-m02" in "kube-system" namespace to be "Ready" ...
	I0415 23:57:05.772412   25488 pod_ready.go:38] duration metric: took 10.000087746s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
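The pod_ready.go lines above trace minikube polling each system-critical pod until its Ready condition reports True. As a rough illustration only (not minikube's actual implementation), a readiness poll of this kind can be written against client-go, assuming a kubeconfig for the cluster and a client-go version that provides wait.PollUntilContextTimeout:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitForPodReady polls the named pod until its Ready condition is True or the timeout expires.
    func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // treat lookup errors as "not ready yet" and keep polling
    			}
    			for _, cond := range pod.Status.Conditions {
    				if cond.Type == corev1.PodReady {
    					return cond.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // ~/.kube/config
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	if err := waitForPodReady(context.Background(), cs, "kube-system", "kube-apiserver-ha-694782-m02", 6*time.Minute); err != nil {
    		panic(err)
    	}
    	fmt.Println("pod is Ready")
    }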
	I0415 23:57:05.772431   25488 api_server.go:52] waiting for apiserver process to appear ...
	I0415 23:57:05.772492   25488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0415 23:57:05.787625   25488 api_server.go:72] duration metric: took 15.762408082s to wait for apiserver process to appear ...
	I0415 23:57:05.787650   25488 api_server.go:88] waiting for apiserver healthz status ...
	I0415 23:57:05.787669   25488 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I0415 23:57:05.793609   25488 api_server.go:279] https://192.168.39.41:8443/healthz returned 200:
	ok
	I0415 23:57:05.793713   25488 round_trippers.go:463] GET https://192.168.39.41:8443/version
	I0415 23:57:05.793724   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:05.793731   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:05.793736   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:05.794617   25488 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0415 23:57:05.794788   25488 api_server.go:141] control plane version: v1.29.3
	I0415 23:57:05.794807   25488 api_server.go:131] duration metric: took 7.151331ms to wait for apiserver health ...
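The healthz probe and /version request above hit the apiserver directly; the same checks can be reproduced through client-go's discovery/REST client. A minimal sketch under the same kubeconfig assumption as the previous example (this is not a claim about minikube's own api_server.go code):

    package main

    import (
    	"context"
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	// GET /healthz — a healthy apiserver answers with the literal string "ok".
    	raw, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("healthz: %s\n", raw)

    	// GET /version — the same data behind the "control plane version: v1.29.3" line.
    	v, err := cs.Discovery().ServerVersion()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("version: %s\n", v.GitVersion)
    }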
	I0415 23:57:05.794814   25488 system_pods.go:43] waiting for kube-system pods to appear ...
	I0415 23:57:05.968207   25488 request.go:629] Waited for 173.329742ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods
	I0415 23:57:05.968283   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods
	I0415 23:57:05.968301   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:05.968331   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:05.968338   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:05.973417   25488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 23:57:05.978385   25488 system_pods.go:59] 17 kube-system pods found
	I0415 23:57:05.978406   25488 system_pods.go:61] "coredns-76f75df574-4sgv4" [3c1f65c0-37b2-4c88-879b-68297e989d44] Running
	I0415 23:57:05.978412   25488 system_pods.go:61] "coredns-76f75df574-zdc8q" [6a7e1a29-8c75-4d1f-978b-471ac0adb888] Running
	I0415 23:57:05.978416   25488 system_pods.go:61] "etcd-ha-694782" [ca5444f7-8fe5-4165-a01b-9c9adba4ede0] Running
	I0415 23:57:05.978419   25488 system_pods.go:61] "etcd-ha-694782-m02" [821ace46-8aac-46ae-9e3f-7bc144bb46a9] Running
	I0415 23:57:05.978422   25488 system_pods.go:61] "kindnet-99cs7" [5b3bc7e7-fd85-4dc7-ba53-c74fe0d213e3] Running
	I0415 23:57:05.978426   25488 system_pods.go:61] "kindnet-qvp8b" [04002e18-2673-4067-a10e-64f40e3c60c8] Running
	I0415 23:57:05.978429   25488 system_pods.go:61] "kube-apiserver-ha-694782" [42680d27-9926-4b99-ae33-61a37afe0207] Running
	I0415 23:57:05.978432   25488 system_pods.go:61] "kube-apiserver-ha-694782-m02" [5db36efa-244b-47e0-ba6f-93826468c168] Running
	I0415 23:57:05.978435   25488 system_pods.go:61] "kube-controller-manager-ha-694782" [1832df1f-ac45-427c-93fc-04630558d7d1] Running
	I0415 23:57:05.978439   25488 system_pods.go:61] "kube-controller-manager-ha-694782-m02" [923c744c-e27c-468d-a14f-2a1de579df73] Running
	I0415 23:57:05.978443   25488 system_pods.go:61] "kube-proxy-d46v5" [c92235e6-1639-45c0-a92b-bf0cc32bea22] Running
	I0415 23:57:05.978446   25488 system_pods.go:61] "kube-proxy-vbfhn" [131197dd-aa5b-48c7-a0e8-d1772432b28c] Running
	I0415 23:57:05.978449   25488 system_pods.go:61] "kube-scheduler-ha-694782" [8e2ff44e-34ef-4cb6-9734-62004de985b8] Running
	I0415 23:57:05.978451   25488 system_pods.go:61] "kube-scheduler-ha-694782-m02" [e2452893-9792-41e9-9d9e-e2f66bc07303] Running
	I0415 23:57:05.978454   25488 system_pods.go:61] "kube-vip-ha-694782" [a8ffb1b9-f55e-4efe-b9a1-7e58a341a2f0] Running
	I0415 23:57:05.978457   25488 system_pods.go:61] "kube-vip-ha-694782-m02" [036ef70f-0af1-42a5-b0bb-5622785ff031] Running
	I0415 23:57:05.978459   25488 system_pods.go:61] "storage-provisioner" [bea9c166-5f83-473f-8f01-335ea1436dad] Running
	I0415 23:57:05.978464   25488 system_pods.go:74] duration metric: took 183.645542ms to wait for pod list to return data ...
	I0415 23:57:05.978474   25488 default_sa.go:34] waiting for default service account to be created ...
	I0415 23:57:06.167877   25488 request.go:629] Waited for 189.327065ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/namespaces/default/serviceaccounts
	I0415 23:57:06.167926   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/default/serviceaccounts
	I0415 23:57:06.167931   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:06.167939   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:06.167943   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:06.170989   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:57:06.171177   25488 default_sa.go:45] found service account: "default"
	I0415 23:57:06.171191   25488 default_sa.go:55] duration metric: took 192.709876ms for default service account to be created ...
	I0415 23:57:06.171198   25488 system_pods.go:116] waiting for k8s-apps to be running ...
	I0415 23:57:06.367530   25488 request.go:629] Waited for 196.280884ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods
	I0415 23:57:06.367580   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods
	I0415 23:57:06.367585   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:06.367599   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:06.367616   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:06.372651   25488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 23:57:06.378400   25488 system_pods.go:86] 17 kube-system pods found
	I0415 23:57:06.378425   25488 system_pods.go:89] "coredns-76f75df574-4sgv4" [3c1f65c0-37b2-4c88-879b-68297e989d44] Running
	I0415 23:57:06.378432   25488 system_pods.go:89] "coredns-76f75df574-zdc8q" [6a7e1a29-8c75-4d1f-978b-471ac0adb888] Running
	I0415 23:57:06.378439   25488 system_pods.go:89] "etcd-ha-694782" [ca5444f7-8fe5-4165-a01b-9c9adba4ede0] Running
	I0415 23:57:06.378444   25488 system_pods.go:89] "etcd-ha-694782-m02" [821ace46-8aac-46ae-9e3f-7bc144bb46a9] Running
	I0415 23:57:06.378452   25488 system_pods.go:89] "kindnet-99cs7" [5b3bc7e7-fd85-4dc7-ba53-c74fe0d213e3] Running
	I0415 23:57:06.378461   25488 system_pods.go:89] "kindnet-qvp8b" [04002e18-2673-4067-a10e-64f40e3c60c8] Running
	I0415 23:57:06.378468   25488 system_pods.go:89] "kube-apiserver-ha-694782" [42680d27-9926-4b99-ae33-61a37afe0207] Running
	I0415 23:57:06.378478   25488 system_pods.go:89] "kube-apiserver-ha-694782-m02" [5db36efa-244b-47e0-ba6f-93826468c168] Running
	I0415 23:57:06.378485   25488 system_pods.go:89] "kube-controller-manager-ha-694782" [1832df1f-ac45-427c-93fc-04630558d7d1] Running
	I0415 23:57:06.378492   25488 system_pods.go:89] "kube-controller-manager-ha-694782-m02" [923c744c-e27c-468d-a14f-2a1de579df73] Running
	I0415 23:57:06.378500   25488 system_pods.go:89] "kube-proxy-d46v5" [c92235e6-1639-45c0-a92b-bf0cc32bea22] Running
	I0415 23:57:06.378507   25488 system_pods.go:89] "kube-proxy-vbfhn" [131197dd-aa5b-48c7-a0e8-d1772432b28c] Running
	I0415 23:57:06.378514   25488 system_pods.go:89] "kube-scheduler-ha-694782" [8e2ff44e-34ef-4cb6-9734-62004de985b8] Running
	I0415 23:57:06.378520   25488 system_pods.go:89] "kube-scheduler-ha-694782-m02" [e2452893-9792-41e9-9d9e-e2f66bc07303] Running
	I0415 23:57:06.378526   25488 system_pods.go:89] "kube-vip-ha-694782" [a8ffb1b9-f55e-4efe-b9a1-7e58a341a2f0] Running
	I0415 23:57:06.378534   25488 system_pods.go:89] "kube-vip-ha-694782-m02" [036ef70f-0af1-42a5-b0bb-5622785ff031] Running
	I0415 23:57:06.378541   25488 system_pods.go:89] "storage-provisioner" [bea9c166-5f83-473f-8f01-335ea1436dad] Running
	I0415 23:57:06.378554   25488 system_pods.go:126] duration metric: took 207.346934ms to wait for k8s-apps to be running ...
	I0415 23:57:06.378564   25488 system_svc.go:44] waiting for kubelet service to be running ....
	I0415 23:57:06.378618   25488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 23:57:06.394541   25488 system_svc.go:56] duration metric: took 15.973291ms WaitForService to wait for kubelet
	I0415 23:57:06.394563   25488 kubeadm.go:576] duration metric: took 16.369347744s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 23:57:06.394586   25488 node_conditions.go:102] verifying NodePressure condition ...
	I0415 23:57:06.567901   25488 request.go:629] Waited for 173.249172ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/nodes
	I0415 23:57:06.567977   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes
	I0415 23:57:06.567985   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:06.567992   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:06.567998   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:06.571207   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:57:06.571919   25488 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0415 23:57:06.571939   25488 node_conditions.go:123] node cpu capacity is 2
	I0415 23:57:06.571949   25488 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0415 23:57:06.571953   25488 node_conditions.go:123] node cpu capacity is 2
	I0415 23:57:06.571957   25488 node_conditions.go:105] duration metric: took 177.367096ms to run NodePressure ...
	I0415 23:57:06.571968   25488 start.go:240] waiting for startup goroutines ...
	I0415 23:57:06.571991   25488 start.go:254] writing updated cluster config ...
	I0415 23:57:06.574233   25488 out.go:177] 
	I0415 23:57:06.575616   25488 config.go:182] Loaded profile config "ha-694782": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0415 23:57:06.575696   25488 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/config.json ...
	I0415 23:57:06.577368   25488 out.go:177] * Starting "ha-694782-m03" control-plane node in "ha-694782" cluster
	I0415 23:57:06.578457   25488 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0415 23:57:06.578483   25488 cache.go:56] Caching tarball of preloaded images
	I0415 23:57:06.578590   25488 preload.go:173] Found /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0415 23:57:06.578605   25488 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0415 23:57:06.578720   25488 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/config.json ...
	I0415 23:57:06.578917   25488 start.go:360] acquireMachinesLock for ha-694782-m03: {Name:mk92bff49461487f8cebf2747ccf61ccb9c772a2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0415 23:57:06.578973   25488 start.go:364] duration metric: took 34.46µs to acquireMachinesLock for "ha-694782-m03"
	I0415 23:57:06.578998   25488 start.go:93] Provisioning new machine with config: &{Name:ha-694782 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-694782 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.42 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0415 23:57:06.579129   25488 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0415 23:57:06.580670   25488 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0415 23:57:06.580762   25488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:57:06.580804   25488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:57:06.594970   25488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37621
	I0415 23:57:06.595365   25488 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:57:06.595804   25488 main.go:141] libmachine: Using API Version  1
	I0415 23:57:06.595841   25488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:57:06.596124   25488 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:57:06.596310   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetMachineName
	I0415 23:57:06.596444   25488 main.go:141] libmachine: (ha-694782-m03) Calling .DriverName
	I0415 23:57:06.596627   25488 start.go:159] libmachine.API.Create for "ha-694782" (driver="kvm2")
	I0415 23:57:06.596654   25488 client.go:168] LocalClient.Create starting
	I0415 23:57:06.596683   25488 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem
	I0415 23:57:06.596711   25488 main.go:141] libmachine: Decoding PEM data...
	I0415 23:57:06.596725   25488 main.go:141] libmachine: Parsing certificate...
	I0415 23:57:06.596805   25488 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem
	I0415 23:57:06.596849   25488 main.go:141] libmachine: Decoding PEM data...
	I0415 23:57:06.596866   25488 main.go:141] libmachine: Parsing certificate...
	I0415 23:57:06.596891   25488 main.go:141] libmachine: Running pre-create checks...
	I0415 23:57:06.596903   25488 main.go:141] libmachine: (ha-694782-m03) Calling .PreCreateCheck
	I0415 23:57:06.597061   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetConfigRaw
	I0415 23:57:06.597465   25488 main.go:141] libmachine: Creating machine...
	I0415 23:57:06.597478   25488 main.go:141] libmachine: (ha-694782-m03) Calling .Create
	I0415 23:57:06.597645   25488 main.go:141] libmachine: (ha-694782-m03) Creating KVM machine...
	I0415 23:57:06.598864   25488 main.go:141] libmachine: (ha-694782-m03) DBG | found existing default KVM network
	I0415 23:57:06.598982   25488 main.go:141] libmachine: (ha-694782-m03) DBG | found existing private KVM network mk-ha-694782
	I0415 23:57:06.599098   25488 main.go:141] libmachine: (ha-694782-m03) Setting up store path in /home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m03 ...
	I0415 23:57:06.599125   25488 main.go:141] libmachine: (ha-694782-m03) Building disk image from file:///home/jenkins/minikube-integration/18647-7542/.minikube/cache/iso/amd64/minikube-v1.33.0-1713175573-18634-amd64.iso
	I0415 23:57:06.599175   25488 main.go:141] libmachine: (ha-694782-m03) DBG | I0415 23:57:06.599061   26272 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18647-7542/.minikube
	I0415 23:57:06.599225   25488 main.go:141] libmachine: (ha-694782-m03) Downloading /home/jenkins/minikube-integration/18647-7542/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18647-7542/.minikube/cache/iso/amd64/minikube-v1.33.0-1713175573-18634-amd64.iso...
	I0415 23:57:06.807841   25488 main.go:141] libmachine: (ha-694782-m03) DBG | I0415 23:57:06.807723   26272 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m03/id_rsa...
	I0415 23:57:06.939686   25488 main.go:141] libmachine: (ha-694782-m03) DBG | I0415 23:57:06.939574   26272 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m03/ha-694782-m03.rawdisk...
	I0415 23:57:06.939723   25488 main.go:141] libmachine: (ha-694782-m03) DBG | Writing magic tar header
	I0415 23:57:06.939734   25488 main.go:141] libmachine: (ha-694782-m03) DBG | Writing SSH key tar header
	I0415 23:57:06.939743   25488 main.go:141] libmachine: (ha-694782-m03) DBG | I0415 23:57:06.939679   26272 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m03 ...
	I0415 23:57:06.939787   25488 main.go:141] libmachine: (ha-694782-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m03
	I0415 23:57:06.939812   25488 main.go:141] libmachine: (ha-694782-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18647-7542/.minikube/machines
	I0415 23:57:06.939834   25488 main.go:141] libmachine: (ha-694782-m03) Setting executable bit set on /home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m03 (perms=drwx------)
	I0415 23:57:06.939847   25488 main.go:141] libmachine: (ha-694782-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18647-7542/.minikube
	I0415 23:57:06.939859   25488 main.go:141] libmachine: (ha-694782-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18647-7542
	I0415 23:57:06.939867   25488 main.go:141] libmachine: (ha-694782-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0415 23:57:06.939876   25488 main.go:141] libmachine: (ha-694782-m03) DBG | Checking permissions on dir: /home/jenkins
	I0415 23:57:06.939888   25488 main.go:141] libmachine: (ha-694782-m03) DBG | Checking permissions on dir: /home
	I0415 23:57:06.939903   25488 main.go:141] libmachine: (ha-694782-m03) Setting executable bit set on /home/jenkins/minikube-integration/18647-7542/.minikube/machines (perms=drwxr-xr-x)
	I0415 23:57:06.939911   25488 main.go:141] libmachine: (ha-694782-m03) DBG | Skipping /home - not owner
	I0415 23:57:06.939925   25488 main.go:141] libmachine: (ha-694782-m03) Setting executable bit set on /home/jenkins/minikube-integration/18647-7542/.minikube (perms=drwxr-xr-x)
	I0415 23:57:06.939943   25488 main.go:141] libmachine: (ha-694782-m03) Setting executable bit set on /home/jenkins/minikube-integration/18647-7542 (perms=drwxrwxr-x)
	I0415 23:57:06.939960   25488 main.go:141] libmachine: (ha-694782-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0415 23:57:06.939975   25488 main.go:141] libmachine: (ha-694782-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0415 23:57:06.939987   25488 main.go:141] libmachine: (ha-694782-m03) Creating domain...
	I0415 23:57:06.940842   25488 main.go:141] libmachine: (ha-694782-m03) define libvirt domain using xml: 
	I0415 23:57:06.940863   25488 main.go:141] libmachine: (ha-694782-m03) <domain type='kvm'>
	I0415 23:57:06.940872   25488 main.go:141] libmachine: (ha-694782-m03)   <name>ha-694782-m03</name>
	I0415 23:57:06.940880   25488 main.go:141] libmachine: (ha-694782-m03)   <memory unit='MiB'>2200</memory>
	I0415 23:57:06.940928   25488 main.go:141] libmachine: (ha-694782-m03)   <vcpu>2</vcpu>
	I0415 23:57:06.940954   25488 main.go:141] libmachine: (ha-694782-m03)   <features>
	I0415 23:57:06.940974   25488 main.go:141] libmachine: (ha-694782-m03)     <acpi/>
	I0415 23:57:06.940993   25488 main.go:141] libmachine: (ha-694782-m03)     <apic/>
	I0415 23:57:06.941006   25488 main.go:141] libmachine: (ha-694782-m03)     <pae/>
	I0415 23:57:06.941013   25488 main.go:141] libmachine: (ha-694782-m03)     
	I0415 23:57:06.941022   25488 main.go:141] libmachine: (ha-694782-m03)   </features>
	I0415 23:57:06.941027   25488 main.go:141] libmachine: (ha-694782-m03)   <cpu mode='host-passthrough'>
	I0415 23:57:06.941035   25488 main.go:141] libmachine: (ha-694782-m03)   
	I0415 23:57:06.941041   25488 main.go:141] libmachine: (ha-694782-m03)   </cpu>
	I0415 23:57:06.941050   25488 main.go:141] libmachine: (ha-694782-m03)   <os>
	I0415 23:57:06.941061   25488 main.go:141] libmachine: (ha-694782-m03)     <type>hvm</type>
	I0415 23:57:06.941077   25488 main.go:141] libmachine: (ha-694782-m03)     <boot dev='cdrom'/>
	I0415 23:57:06.941093   25488 main.go:141] libmachine: (ha-694782-m03)     <boot dev='hd'/>
	I0415 23:57:06.941100   25488 main.go:141] libmachine: (ha-694782-m03)     <bootmenu enable='no'/>
	I0415 23:57:06.941106   25488 main.go:141] libmachine: (ha-694782-m03)   </os>
	I0415 23:57:06.941112   25488 main.go:141] libmachine: (ha-694782-m03)   <devices>
	I0415 23:57:06.941120   25488 main.go:141] libmachine: (ha-694782-m03)     <disk type='file' device='cdrom'>
	I0415 23:57:06.941129   25488 main.go:141] libmachine: (ha-694782-m03)       <source file='/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m03/boot2docker.iso'/>
	I0415 23:57:06.941138   25488 main.go:141] libmachine: (ha-694782-m03)       <target dev='hdc' bus='scsi'/>
	I0415 23:57:06.941143   25488 main.go:141] libmachine: (ha-694782-m03)       <readonly/>
	I0415 23:57:06.941149   25488 main.go:141] libmachine: (ha-694782-m03)     </disk>
	I0415 23:57:06.941171   25488 main.go:141] libmachine: (ha-694782-m03)     <disk type='file' device='disk'>
	I0415 23:57:06.941188   25488 main.go:141] libmachine: (ha-694782-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0415 23:57:06.941202   25488 main.go:141] libmachine: (ha-694782-m03)       <source file='/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m03/ha-694782-m03.rawdisk'/>
	I0415 23:57:06.941217   25488 main.go:141] libmachine: (ha-694782-m03)       <target dev='hda' bus='virtio'/>
	I0415 23:57:06.941226   25488 main.go:141] libmachine: (ha-694782-m03)     </disk>
	I0415 23:57:06.941231   25488 main.go:141] libmachine: (ha-694782-m03)     <interface type='network'>
	I0415 23:57:06.941239   25488 main.go:141] libmachine: (ha-694782-m03)       <source network='mk-ha-694782'/>
	I0415 23:57:06.941244   25488 main.go:141] libmachine: (ha-694782-m03)       <model type='virtio'/>
	I0415 23:57:06.941252   25488 main.go:141] libmachine: (ha-694782-m03)     </interface>
	I0415 23:57:06.941260   25488 main.go:141] libmachine: (ha-694782-m03)     <interface type='network'>
	I0415 23:57:06.941273   25488 main.go:141] libmachine: (ha-694782-m03)       <source network='default'/>
	I0415 23:57:06.941284   25488 main.go:141] libmachine: (ha-694782-m03)       <model type='virtio'/>
	I0415 23:57:06.941299   25488 main.go:141] libmachine: (ha-694782-m03)     </interface>
	I0415 23:57:06.941318   25488 main.go:141] libmachine: (ha-694782-m03)     <serial type='pty'>
	I0415 23:57:06.941331   25488 main.go:141] libmachine: (ha-694782-m03)       <target port='0'/>
	I0415 23:57:06.941345   25488 main.go:141] libmachine: (ha-694782-m03)     </serial>
	I0415 23:57:06.941361   25488 main.go:141] libmachine: (ha-694782-m03)     <console type='pty'>
	I0415 23:57:06.941374   25488 main.go:141] libmachine: (ha-694782-m03)       <target type='serial' port='0'/>
	I0415 23:57:06.941400   25488 main.go:141] libmachine: (ha-694782-m03)     </console>
	I0415 23:57:06.941421   25488 main.go:141] libmachine: (ha-694782-m03)     <rng model='virtio'>
	I0415 23:57:06.941436   25488 main.go:141] libmachine: (ha-694782-m03)       <backend model='random'>/dev/random</backend>
	I0415 23:57:06.941448   25488 main.go:141] libmachine: (ha-694782-m03)     </rng>
	I0415 23:57:06.941461   25488 main.go:141] libmachine: (ha-694782-m03)     
	I0415 23:57:06.941472   25488 main.go:141] libmachine: (ha-694782-m03)     
	I0415 23:57:06.941481   25488 main.go:141] libmachine: (ha-694782-m03)   </devices>
	I0415 23:57:06.941492   25488 main.go:141] libmachine: (ha-694782-m03) </domain>
	I0415 23:57:06.941503   25488 main.go:141] libmachine: (ha-694782-m03) 
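The "define libvirt domain using xml" block above is the complete domain definition the kvm2 driver hands to libvirt for the new node. As an illustration of how such an XML document becomes a running VM, here is a sketch against the libvirt Go bindings; the import path (assumed to be libvirt.org/go/libvirt) and the driver's real code may differ, and building it requires CGO plus the libvirt development headers:

    package main

    import (
    	"fmt"
    	"os"

    	libvirt "libvirt.org/go/libvirt" // assumed import path for the upstream Go bindings
    )

    func main() {
    	xml, err := os.ReadFile("ha-694782-m03.xml") // the <domain type='kvm'> document logged above
    	if err != nil {
    		panic(err)
    	}

    	conn, err := libvirt.NewConnect("qemu:///system") // same KVMQemuURI as in the machine config
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	// Define the persistent domain from XML, then start it.
    	dom, err := conn.DomainDefineXML(string(xml))
    	if err != nil {
    		panic(err)
    	}
    	defer dom.Free()

    	if err := dom.Create(); err != nil {
    		panic(err)
    	}
    	name, _ := dom.GetName()
    	fmt.Printf("domain %s defined and started\n", name)
    }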
	I0415 23:57:06.947763   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:80:00:d1 in network default
	I0415 23:57:06.948312   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:06.948352   25488 main.go:141] libmachine: (ha-694782-m03) Ensuring networks are active...
	I0415 23:57:06.949029   25488 main.go:141] libmachine: (ha-694782-m03) Ensuring network default is active
	I0415 23:57:06.949430   25488 main.go:141] libmachine: (ha-694782-m03) Ensuring network mk-ha-694782 is active
	I0415 23:57:06.949932   25488 main.go:141] libmachine: (ha-694782-m03) Getting domain xml...
	I0415 23:57:06.950780   25488 main.go:141] libmachine: (ha-694782-m03) Creating domain...
	I0415 23:57:08.146089   25488 main.go:141] libmachine: (ha-694782-m03) Waiting to get IP...
	I0415 23:57:08.146865   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:08.147249   25488 main.go:141] libmachine: (ha-694782-m03) DBG | unable to find current IP address of domain ha-694782-m03 in network mk-ha-694782
	I0415 23:57:08.147298   25488 main.go:141] libmachine: (ha-694782-m03) DBG | I0415 23:57:08.147223   26272 retry.go:31] will retry after 195.294878ms: waiting for machine to come up
	I0415 23:57:08.344769   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:08.345348   25488 main.go:141] libmachine: (ha-694782-m03) DBG | unable to find current IP address of domain ha-694782-m03 in network mk-ha-694782
	I0415 23:57:08.345379   25488 main.go:141] libmachine: (ha-694782-m03) DBG | I0415 23:57:08.345290   26272 retry.go:31] will retry after 281.825029ms: waiting for machine to come up
	I0415 23:57:08.628634   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:08.629005   25488 main.go:141] libmachine: (ha-694782-m03) DBG | unable to find current IP address of domain ha-694782-m03 in network mk-ha-694782
	I0415 23:57:08.629037   25488 main.go:141] libmachine: (ha-694782-m03) DBG | I0415 23:57:08.628953   26272 retry.go:31] will retry after 306.772461ms: waiting for machine to come up
	I0415 23:57:08.937440   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:08.937911   25488 main.go:141] libmachine: (ha-694782-m03) DBG | unable to find current IP address of domain ha-694782-m03 in network mk-ha-694782
	I0415 23:57:08.937939   25488 main.go:141] libmachine: (ha-694782-m03) DBG | I0415 23:57:08.937869   26272 retry.go:31] will retry after 407.267476ms: waiting for machine to come up
	I0415 23:57:09.346382   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:09.346839   25488 main.go:141] libmachine: (ha-694782-m03) DBG | unable to find current IP address of domain ha-694782-m03 in network mk-ha-694782
	I0415 23:57:09.346935   25488 main.go:141] libmachine: (ha-694782-m03) DBG | I0415 23:57:09.346785   26272 retry.go:31] will retry after 748.889119ms: waiting for machine to come up
	I0415 23:57:10.097393   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:10.097864   25488 main.go:141] libmachine: (ha-694782-m03) DBG | unable to find current IP address of domain ha-694782-m03 in network mk-ha-694782
	I0415 23:57:10.097894   25488 main.go:141] libmachine: (ha-694782-m03) DBG | I0415 23:57:10.097802   26272 retry.go:31] will retry after 801.012058ms: waiting for machine to come up
	I0415 23:57:10.900916   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:10.901326   25488 main.go:141] libmachine: (ha-694782-m03) DBG | unable to find current IP address of domain ha-694782-m03 in network mk-ha-694782
	I0415 23:57:10.901890   25488 main.go:141] libmachine: (ha-694782-m03) DBG | I0415 23:57:10.901296   26272 retry.go:31] will retry after 1.005790352s: waiting for machine to come up
	I0415 23:57:11.909288   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:11.909764   25488 main.go:141] libmachine: (ha-694782-m03) DBG | unable to find current IP address of domain ha-694782-m03 in network mk-ha-694782
	I0415 23:57:11.909783   25488 main.go:141] libmachine: (ha-694782-m03) DBG | I0415 23:57:11.909716   26272 retry.go:31] will retry after 1.299462671s: waiting for machine to come up
	I0415 23:57:13.210322   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:13.210812   25488 main.go:141] libmachine: (ha-694782-m03) DBG | unable to find current IP address of domain ha-694782-m03 in network mk-ha-694782
	I0415 23:57:13.210842   25488 main.go:141] libmachine: (ha-694782-m03) DBG | I0415 23:57:13.210767   26272 retry.go:31] will retry after 1.14091487s: waiting for machine to come up
	I0415 23:57:14.352805   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:14.353277   25488 main.go:141] libmachine: (ha-694782-m03) DBG | unable to find current IP address of domain ha-694782-m03 in network mk-ha-694782
	I0415 23:57:14.353312   25488 main.go:141] libmachine: (ha-694782-m03) DBG | I0415 23:57:14.353248   26272 retry.go:31] will retry after 1.449833548s: waiting for machine to come up
	I0415 23:57:15.805237   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:15.805651   25488 main.go:141] libmachine: (ha-694782-m03) DBG | unable to find current IP address of domain ha-694782-m03 in network mk-ha-694782
	I0415 23:57:15.805690   25488 main.go:141] libmachine: (ha-694782-m03) DBG | I0415 23:57:15.805615   26272 retry.go:31] will retry after 2.394178992s: waiting for machine to come up
	I0415 23:57:18.202221   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:18.202526   25488 main.go:141] libmachine: (ha-694782-m03) DBG | unable to find current IP address of domain ha-694782-m03 in network mk-ha-694782
	I0415 23:57:18.202552   25488 main.go:141] libmachine: (ha-694782-m03) DBG | I0415 23:57:18.202490   26272 retry.go:31] will retry after 2.938714927s: waiting for machine to come up
	I0415 23:57:21.144413   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:21.144796   25488 main.go:141] libmachine: (ha-694782-m03) DBG | unable to find current IP address of domain ha-694782-m03 in network mk-ha-694782
	I0415 23:57:21.144822   25488 main.go:141] libmachine: (ha-694782-m03) DBG | I0415 23:57:21.144764   26272 retry.go:31] will retry after 3.228906937s: waiting for machine to come up
	I0415 23:57:24.374842   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:24.375220   25488 main.go:141] libmachine: (ha-694782-m03) DBG | unable to find current IP address of domain ha-694782-m03 in network mk-ha-694782
	I0415 23:57:24.375251   25488 main.go:141] libmachine: (ha-694782-m03) DBG | I0415 23:57:24.375182   26272 retry.go:31] will retry after 3.573523595s: waiting for machine to come up
	I0415 23:57:27.950696   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:27.951198   25488 main.go:141] libmachine: (ha-694782-m03) Found IP for machine: 192.168.39.202
	I0415 23:57:27.951230   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has current primary IP address 192.168.39.202 and MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
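The retry.go:31 lines show the driver repeatedly checking the libvirt network's DHCP leases for the new domain's MAC address, sleeping for a growing, jittered interval between attempts until an address appears. A generic sketch of that retry pattern follows; lookupLeaseIP is a hypothetical stand-in, not the driver's real lease query:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    var errNoLease = errors.New("no DHCP lease for MAC yet")

    // lookupLeaseIP is a hypothetical stand-in for querying libvirt's DHCP leases
    // for the domain's MAC address.
    func lookupLeaseIP(mac string) (string, error) {
    	return "", errNoLease
    }

    // waitForIP retries lookupLeaseIP with a growing, jittered delay until it
    // succeeds or the overall deadline is exceeded.
    func waitForIP(mac string, deadline time.Duration) (string, error) {
    	start := time.Now()
    	delay := 200 * time.Millisecond
    	for time.Since(start) < deadline {
    		if ip, err := lookupLeaseIP(mac); err == nil {
    			return ip, nil
    		}
    		sleep := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
    		time.Sleep(sleep)
    		if delay < 4*time.Second {
    			delay *= 2 // back off, roughly matching the growing intervals in the log
    		}
    	}
    	return "", fmt.Errorf("timed out waiting for an IP for MAC %s", mac)
    }

    func main() {
    	ip, err := waitForIP("52:54:00:fc:a7:e5", 2*time.Minute)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("found IP:", ip)
    }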
	I0415 23:57:27.951239   25488 main.go:141] libmachine: (ha-694782-m03) Reserving static IP address...
	I0415 23:57:27.951582   25488 main.go:141] libmachine: (ha-694782-m03) DBG | unable to find host DHCP lease matching {name: "ha-694782-m03", mac: "52:54:00:fc:a7:e5", ip: "192.168.39.202"} in network mk-ha-694782
	I0415 23:57:28.021023   25488 main.go:141] libmachine: (ha-694782-m03) DBG | Getting to WaitForSSH function...
	I0415 23:57:28.021056   25488 main.go:141] libmachine: (ha-694782-m03) Reserved static IP address: 192.168.39.202
	I0415 23:57:28.021069   25488 main.go:141] libmachine: (ha-694782-m03) Waiting for SSH to be available...
	I0415 23:57:28.023528   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:28.023940   25488 main.go:141] libmachine: (ha-694782-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a7:e5", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:57:21 +0000 UTC Type:0 Mac:52:54:00:fc:a7:e5 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:minikube Clientid:01:52:54:00:fc:a7:e5}
	I0415 23:57:28.023972   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:28.024133   25488 main.go:141] libmachine: (ha-694782-m03) DBG | Using SSH client type: external
	I0415 23:57:28.024161   25488 main.go:141] libmachine: (ha-694782-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m03/id_rsa (-rw-------)
	I0415 23:57:28.024196   25488 main.go:141] libmachine: (ha-694782-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.202 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0415 23:57:28.024229   25488 main.go:141] libmachine: (ha-694782-m03) DBG | About to run SSH command:
	I0415 23:57:28.024247   25488 main.go:141] libmachine: (ha-694782-m03) DBG | exit 0
	I0415 23:57:28.149532   25488 main.go:141] libmachine: (ha-694782-m03) DBG | SSH cmd err, output: <nil>: 
	I0415 23:57:28.149836   25488 main.go:141] libmachine: (ha-694782-m03) KVM machine creation complete!
	I0415 23:57:28.150280   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetConfigRaw
	I0415 23:57:28.150866   25488 main.go:141] libmachine: (ha-694782-m03) Calling .DriverName
	I0415 23:57:28.151102   25488 main.go:141] libmachine: (ha-694782-m03) Calling .DriverName
	I0415 23:57:28.151298   25488 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0415 23:57:28.151330   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetState
	I0415 23:57:28.152508   25488 main.go:141] libmachine: Detecting operating system of created instance...
	I0415 23:57:28.152525   25488 main.go:141] libmachine: Waiting for SSH to be available...
	I0415 23:57:28.152532   25488 main.go:141] libmachine: Getting to WaitForSSH function...
	I0415 23:57:28.152540   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHHostname
	I0415 23:57:28.155001   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:28.155403   25488 main.go:141] libmachine: (ha-694782-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a7:e5", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:57:21 +0000 UTC Type:0 Mac:52:54:00:fc:a7:e5 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-694782-m03 Clientid:01:52:54:00:fc:a7:e5}
	I0415 23:57:28.155432   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:28.155565   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHPort
	I0415 23:57:28.155742   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHKeyPath
	I0415 23:57:28.155930   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHKeyPath
	I0415 23:57:28.156070   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHUsername
	I0415 23:57:28.156225   25488 main.go:141] libmachine: Using SSH client type: native
	I0415 23:57:28.156427   25488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I0415 23:57:28.156438   25488 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0415 23:57:28.256333   25488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0415 23:57:28.256357   25488 main.go:141] libmachine: Detecting the provisioner...
	I0415 23:57:28.256365   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHHostname
	I0415 23:57:28.259091   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:28.259442   25488 main.go:141] libmachine: (ha-694782-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a7:e5", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:57:21 +0000 UTC Type:0 Mac:52:54:00:fc:a7:e5 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-694782-m03 Clientid:01:52:54:00:fc:a7:e5}
	I0415 23:57:28.259468   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:28.259614   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHPort
	I0415 23:57:28.259771   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHKeyPath
	I0415 23:57:28.259944   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHKeyPath
	I0415 23:57:28.260060   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHUsername
	I0415 23:57:28.260196   25488 main.go:141] libmachine: Using SSH client type: native
	I0415 23:57:28.260385   25488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I0415 23:57:28.260418   25488 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0415 23:57:28.361739   25488 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0415 23:57:28.361805   25488 main.go:141] libmachine: found compatible host: buildroot
	I0415 23:57:28.361820   25488 main.go:141] libmachine: Provisioning with buildroot...
	I0415 23:57:28.361834   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetMachineName
	I0415 23:57:28.362022   25488 buildroot.go:166] provisioning hostname "ha-694782-m03"
	I0415 23:57:28.362050   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetMachineName
	I0415 23:57:28.362227   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHHostname
	I0415 23:57:28.364854   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:28.365242   25488 main.go:141] libmachine: (ha-694782-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a7:e5", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:57:21 +0000 UTC Type:0 Mac:52:54:00:fc:a7:e5 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-694782-m03 Clientid:01:52:54:00:fc:a7:e5}
	I0415 23:57:28.365271   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:28.365425   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHPort
	I0415 23:57:28.365572   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHKeyPath
	I0415 23:57:28.365709   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHKeyPath
	I0415 23:57:28.365840   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHUsername
	I0415 23:57:28.365973   25488 main.go:141] libmachine: Using SSH client type: native
	I0415 23:57:28.366115   25488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I0415 23:57:28.366126   25488 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-694782-m03 && echo "ha-694782-m03" | sudo tee /etc/hostname
	I0415 23:57:28.483531   25488 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-694782-m03
	
	I0415 23:57:28.483559   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHHostname
	I0415 23:57:28.486200   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:28.486555   25488 main.go:141] libmachine: (ha-694782-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a7:e5", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:57:21 +0000 UTC Type:0 Mac:52:54:00:fc:a7:e5 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-694782-m03 Clientid:01:52:54:00:fc:a7:e5}
	I0415 23:57:28.486605   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:28.486800   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHPort
	I0415 23:57:28.487006   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHKeyPath
	I0415 23:57:28.487233   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHKeyPath
	I0415 23:57:28.487396   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHUsername
	I0415 23:57:28.487599   25488 main.go:141] libmachine: Using SSH client type: native
	I0415 23:57:28.487806   25488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I0415 23:57:28.487831   25488 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-694782-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-694782-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-694782-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0415 23:57:28.602817   25488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
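Each provisioning step above is a single command executed over SSH as the docker user with the generated id_rsa key. A self-contained sketch of that transport using golang.org/x/crypto/ssh, with the path, address, and command taken from the log; this is illustrative only and not the machine driver's own SSH runner:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile("/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m03/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}

    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no in the log
    	}
    	client, err := ssh.Dial("tcp", "192.168.39.202:22", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()

    	session, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer session.Close()

    	out, err := session.CombinedOutput(`sudo hostname ha-694782-m03 && echo "ha-694782-m03" | sudo tee /etc/hostname`)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("output: %s\n", out)
    }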
	I0415 23:57:28.602850   25488 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18647-7542/.minikube CaCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18647-7542/.minikube}
	I0415 23:57:28.602865   25488 buildroot.go:174] setting up certificates
	I0415 23:57:28.602872   25488 provision.go:84] configureAuth start
	I0415 23:57:28.602880   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetMachineName
	I0415 23:57:28.603201   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetIP
	I0415 23:57:28.605867   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:28.606193   25488 main.go:141] libmachine: (ha-694782-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a7:e5", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:57:21 +0000 UTC Type:0 Mac:52:54:00:fc:a7:e5 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-694782-m03 Clientid:01:52:54:00:fc:a7:e5}
	I0415 23:57:28.606218   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:28.606349   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHHostname
	I0415 23:57:28.608370   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:28.608653   25488 main.go:141] libmachine: (ha-694782-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a7:e5", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:57:21 +0000 UTC Type:0 Mac:52:54:00:fc:a7:e5 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-694782-m03 Clientid:01:52:54:00:fc:a7:e5}
	I0415 23:57:28.608674   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:28.608833   25488 provision.go:143] copyHostCerts
	I0415 23:57:28.608859   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem
	I0415 23:57:28.608895   25488 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem, removing ...
	I0415 23:57:28.608903   25488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem
	I0415 23:57:28.608963   25488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem (1082 bytes)
	I0415 23:57:28.609033   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem
	I0415 23:57:28.609049   25488 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem, removing ...
	I0415 23:57:28.609056   25488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem
	I0415 23:57:28.609078   25488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem (1123 bytes)
	I0415 23:57:28.609117   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem
	I0415 23:57:28.609132   25488 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem, removing ...
	I0415 23:57:28.609138   25488 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem
	I0415 23:57:28.609177   25488 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem (1675 bytes)
	I0415 23:57:28.609240   25488 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem org=jenkins.ha-694782-m03 san=[127.0.0.1 192.168.39.202 ha-694782-m03 localhost minikube]
	I0415 23:57:28.872793   25488 provision.go:177] copyRemoteCerts
	I0415 23:57:28.872843   25488 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0415 23:57:28.872864   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHHostname
	I0415 23:57:28.875572   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:28.875995   25488 main.go:141] libmachine: (ha-694782-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a7:e5", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:57:21 +0000 UTC Type:0 Mac:52:54:00:fc:a7:e5 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-694782-m03 Clientid:01:52:54:00:fc:a7:e5}
	I0415 23:57:28.876025   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:28.876251   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHPort
	I0415 23:57:28.876446   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHKeyPath
	I0415 23:57:28.876604   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHUsername
	I0415 23:57:28.876774   25488 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m03/id_rsa Username:docker}
	I0415 23:57:28.960633   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0415 23:57:28.960705   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0415 23:57:28.985245   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0415 23:57:28.985300   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0415 23:57:29.009855   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0415 23:57:29.009923   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0415 23:57:29.035029   25488 provision.go:87] duration metric: took 432.148559ms to configureAuth
	I0415 23:57:29.035052   25488 buildroot.go:189] setting minikube options for container-runtime
	I0415 23:57:29.035269   25488 config.go:182] Loaded profile config "ha-694782": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0415 23:57:29.035345   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHHostname
	I0415 23:57:29.038305   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:29.038750   25488 main.go:141] libmachine: (ha-694782-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a7:e5", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:57:21 +0000 UTC Type:0 Mac:52:54:00:fc:a7:e5 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-694782-m03 Clientid:01:52:54:00:fc:a7:e5}
	I0415 23:57:29.038779   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:29.038995   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHPort
	I0415 23:57:29.039168   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHKeyPath
	I0415 23:57:29.039344   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHKeyPath
	I0415 23:57:29.039529   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHUsername
	I0415 23:57:29.039672   25488 main.go:141] libmachine: Using SSH client type: native
	I0415 23:57:29.039837   25488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I0415 23:57:29.039851   25488 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0415 23:57:29.309640   25488 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0415 23:57:29.309674   25488 main.go:141] libmachine: Checking connection to Docker...
	I0415 23:57:29.309691   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetURL
	I0415 23:57:29.311113   25488 main.go:141] libmachine: (ha-694782-m03) DBG | Using libvirt version 6000000
	I0415 23:57:29.313355   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:29.313714   25488 main.go:141] libmachine: (ha-694782-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a7:e5", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:57:21 +0000 UTC Type:0 Mac:52:54:00:fc:a7:e5 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-694782-m03 Clientid:01:52:54:00:fc:a7:e5}
	I0415 23:57:29.313735   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:29.313950   25488 main.go:141] libmachine: Docker is up and running!
	I0415 23:57:29.313966   25488 main.go:141] libmachine: Reticulating splines...
	I0415 23:57:29.313974   25488 client.go:171] duration metric: took 22.717310883s to LocalClient.Create
	I0415 23:57:29.314003   25488 start.go:167] duration metric: took 22.717376374s to libmachine.API.Create "ha-694782"
	I0415 23:57:29.314015   25488 start.go:293] postStartSetup for "ha-694782-m03" (driver="kvm2")
	I0415 23:57:29.314078   25488 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0415 23:57:29.314110   25488 main.go:141] libmachine: (ha-694782-m03) Calling .DriverName
	I0415 23:57:29.314353   25488 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0415 23:57:29.314374   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHHostname
	I0415 23:57:29.316416   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:29.316723   25488 main.go:141] libmachine: (ha-694782-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a7:e5", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:57:21 +0000 UTC Type:0 Mac:52:54:00:fc:a7:e5 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-694782-m03 Clientid:01:52:54:00:fc:a7:e5}
	I0415 23:57:29.316744   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:29.316946   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHPort
	I0415 23:57:29.317102   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHKeyPath
	I0415 23:57:29.317271   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHUsername
	I0415 23:57:29.317407   25488 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m03/id_rsa Username:docker}
	I0415 23:57:29.402248   25488 ssh_runner.go:195] Run: cat /etc/os-release
	I0415 23:57:29.406836   25488 info.go:137] Remote host: Buildroot 2023.02.9
	I0415 23:57:29.406857   25488 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/addons for local assets ...
	I0415 23:57:29.406919   25488 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/files for local assets ...
	I0415 23:57:29.406984   25488 filesync.go:149] local asset: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem -> 148972.pem in /etc/ssl/certs
	I0415 23:57:29.406993   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem -> /etc/ssl/certs/148972.pem
	I0415 23:57:29.407068   25488 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0415 23:57:29.418115   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /etc/ssl/certs/148972.pem (1708 bytes)
	I0415 23:57:29.444270   25488 start.go:296] duration metric: took 130.19682ms for postStartSetup
	I0415 23:57:29.444335   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetConfigRaw
	I0415 23:57:29.444968   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetIP
	I0415 23:57:29.447458   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:29.447868   25488 main.go:141] libmachine: (ha-694782-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a7:e5", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:57:21 +0000 UTC Type:0 Mac:52:54:00:fc:a7:e5 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-694782-m03 Clientid:01:52:54:00:fc:a7:e5}
	I0415 23:57:29.447903   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:29.448153   25488 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/config.json ...
	I0415 23:57:29.448381   25488 start.go:128] duration metric: took 22.869241647s to createHost
	I0415 23:57:29.448403   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHHostname
	I0415 23:57:29.450452   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:29.450762   25488 main.go:141] libmachine: (ha-694782-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a7:e5", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:57:21 +0000 UTC Type:0 Mac:52:54:00:fc:a7:e5 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-694782-m03 Clientid:01:52:54:00:fc:a7:e5}
	I0415 23:57:29.450782   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:29.450949   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHPort
	I0415 23:57:29.451117   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHKeyPath
	I0415 23:57:29.451290   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHKeyPath
	I0415 23:57:29.451426   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHUsername
	I0415 23:57:29.451593   25488 main.go:141] libmachine: Using SSH client type: native
	I0415 23:57:29.451741   25488 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.202 22 <nil> <nil>}
	I0415 23:57:29.451753   25488 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0415 23:57:29.554079   25488 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713225449.533957299
	
	I0415 23:57:29.554100   25488 fix.go:216] guest clock: 1713225449.533957299
	I0415 23:57:29.554109   25488 fix.go:229] Guest: 2024-04-15 23:57:29.533957299 +0000 UTC Remote: 2024-04-15 23:57:29.448393913 +0000 UTC m=+158.888028227 (delta=85.563386ms)
	I0415 23:57:29.554126   25488 fix.go:200] guest clock delta is within tolerance: 85.563386ms
	I0415 23:57:29.554132   25488 start.go:83] releasing machines lock for "ha-694782-m03", held for 22.975147828s
	I0415 23:57:29.554154   25488 main.go:141] libmachine: (ha-694782-m03) Calling .DriverName
	I0415 23:57:29.554388   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetIP
	I0415 23:57:29.556642   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:29.557028   25488 main.go:141] libmachine: (ha-694782-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a7:e5", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:57:21 +0000 UTC Type:0 Mac:52:54:00:fc:a7:e5 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-694782-m03 Clientid:01:52:54:00:fc:a7:e5}
	I0415 23:57:29.557058   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:29.559465   25488 out.go:177] * Found network options:
	I0415 23:57:29.560951   25488 out.go:177]   - NO_PROXY=192.168.39.41,192.168.39.42
	W0415 23:57:29.562166   25488 proxy.go:119] fail to check proxy env: Error ip not in block
	W0415 23:57:29.562201   25488 proxy.go:119] fail to check proxy env: Error ip not in block
	I0415 23:57:29.562217   25488 main.go:141] libmachine: (ha-694782-m03) Calling .DriverName
	I0415 23:57:29.562677   25488 main.go:141] libmachine: (ha-694782-m03) Calling .DriverName
	I0415 23:57:29.562864   25488 main.go:141] libmachine: (ha-694782-m03) Calling .DriverName
	I0415 23:57:29.562967   25488 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0415 23:57:29.563005   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHHostname
	W0415 23:57:29.563039   25488 proxy.go:119] fail to check proxy env: Error ip not in block
	W0415 23:57:29.563062   25488 proxy.go:119] fail to check proxy env: Error ip not in block
	I0415 23:57:29.563131   25488 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0415 23:57:29.563153   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHHostname
	I0415 23:57:29.565514   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:29.565763   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:29.565936   25488 main.go:141] libmachine: (ha-694782-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a7:e5", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:57:21 +0000 UTC Type:0 Mac:52:54:00:fc:a7:e5 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-694782-m03 Clientid:01:52:54:00:fc:a7:e5}
	I0415 23:57:29.565962   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:29.566088   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHPort
	I0415 23:57:29.566222   25488 main.go:141] libmachine: (ha-694782-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a7:e5", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:57:21 +0000 UTC Type:0 Mac:52:54:00:fc:a7:e5 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-694782-m03 Clientid:01:52:54:00:fc:a7:e5}
	I0415 23:57:29.566248   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:29.566252   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHKeyPath
	I0415 23:57:29.566409   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHUsername
	I0415 23:57:29.566468   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHPort
	I0415 23:57:29.566551   25488 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m03/id_rsa Username:docker}
	I0415 23:57:29.566647   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHKeyPath
	I0415 23:57:29.566772   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHUsername
	I0415 23:57:29.566924   25488 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m03/id_rsa Username:docker}
	I0415 23:57:29.799727   25488 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0415 23:57:29.806086   25488 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0415 23:57:29.806138   25488 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0415 23:57:29.825428   25488 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0415 23:57:29.825450   25488 start.go:494] detecting cgroup driver to use...
	I0415 23:57:29.825518   25488 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0415 23:57:29.844091   25488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0415 23:57:29.860695   25488 docker.go:217] disabling cri-docker service (if available) ...
	I0415 23:57:29.860751   25488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0415 23:57:29.875728   25488 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0415 23:57:29.889960   25488 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0415 23:57:30.014990   25488 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0415 23:57:30.165809   25488 docker.go:233] disabling docker service ...
	I0415 23:57:30.165883   25488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0415 23:57:30.181877   25488 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0415 23:57:30.197880   25488 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0415 23:57:30.343229   25488 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0415 23:57:30.471525   25488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0415 23:57:30.486105   25488 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0415 23:57:30.505857   25488 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0415 23:57:30.505926   25488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0415 23:57:30.516326   25488 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0415 23:57:30.516369   25488 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0415 23:57:30.527547   25488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0415 23:57:30.538826   25488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0415 23:57:30.549726   25488 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0415 23:57:30.561383   25488 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0415 23:57:30.572164   25488 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0415 23:57:30.590351   25488 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0415 23:57:30.601575   25488 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0415 23:57:30.613199   25488 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0415 23:57:30.613265   25488 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0415 23:57:30.628406   25488 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0415 23:57:30.638978   25488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 23:57:30.750989   25488 ssh_runner.go:195] Run: sudo systemctl restart crio
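	For reference, the CRI-O runtime setup performed above condenses to the following shell sequence; this is a restating of commands and paths already visible in this log (crictl endpoint, the 02-crio.conf drop-in, netfilter/ip_forward preparation, and the runtime restart), not additional captured output:
	# point crictl at the CRI-O socket
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	# pause image and cgroup driver expected by this profile
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	# load br_netfilter if the bridge sysctl is missing, then enable forwarding
	sudo sysctl net.bridge.bridge-nf-call-iptables || sudo modprobe br_netfilter
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload && sudo systemctl restart crio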
	I0415 23:57:30.897712   25488 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0415 23:57:30.897780   25488 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0415 23:57:30.902369   25488 start.go:562] Will wait 60s for crictl version
	I0415 23:57:30.902412   25488 ssh_runner.go:195] Run: which crictl
	I0415 23:57:30.906038   25488 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0415 23:57:30.942946   25488 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0415 23:57:30.943006   25488 ssh_runner.go:195] Run: crio --version
	I0415 23:57:30.972511   25488 ssh_runner.go:195] Run: crio --version
	I0415 23:57:31.002583   25488 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0415 23:57:31.003890   25488 out.go:177]   - env NO_PROXY=192.168.39.41
	I0415 23:57:31.005018   25488 out.go:177]   - env NO_PROXY=192.168.39.41,192.168.39.42
	I0415 23:57:31.006091   25488 main.go:141] libmachine: (ha-694782-m03) Calling .GetIP
	I0415 23:57:31.008781   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:31.009173   25488 main.go:141] libmachine: (ha-694782-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a7:e5", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:57:21 +0000 UTC Type:0 Mac:52:54:00:fc:a7:e5 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-694782-m03 Clientid:01:52:54:00:fc:a7:e5}
	I0415 23:57:31.009194   25488 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0415 23:57:31.009440   25488 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0415 23:57:31.013685   25488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0415 23:57:31.027856   25488 mustload.go:65] Loading cluster: ha-694782
	I0415 23:57:31.028098   25488 config.go:182] Loaded profile config "ha-694782": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0415 23:57:31.028338   25488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:57:31.028370   25488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:57:31.043709   25488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39309
	I0415 23:57:31.044095   25488 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:57:31.044546   25488 main.go:141] libmachine: Using API Version  1
	I0415 23:57:31.044570   25488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:57:31.044870   25488 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:57:31.045053   25488 main.go:141] libmachine: (ha-694782) Calling .GetState
	I0415 23:57:31.046503   25488 host.go:66] Checking if "ha-694782" exists ...
	I0415 23:57:31.046809   25488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:57:31.046846   25488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:57:31.061411   25488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37687
	I0415 23:57:31.061758   25488 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:57:31.062109   25488 main.go:141] libmachine: Using API Version  1
	I0415 23:57:31.062131   25488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:57:31.062456   25488 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:57:31.062626   25488 main.go:141] libmachine: (ha-694782) Calling .DriverName
	I0415 23:57:31.062773   25488 certs.go:68] Setting up /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782 for IP: 192.168.39.202
	I0415 23:57:31.062786   25488 certs.go:194] generating shared ca certs ...
	I0415 23:57:31.062800   25488 certs.go:226] acquiring lock for ca certs: {Name:mkcfa1570e683d94647c63485e1bbb8cf0788316 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:57:31.062905   25488 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key
	I0415 23:57:31.062944   25488 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key
	I0415 23:57:31.062953   25488 certs.go:256] generating profile certs ...
	I0415 23:57:31.063022   25488 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/client.key
	I0415 23:57:31.063056   25488 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.key.9202ceb3
	I0415 23:57:31.063071   25488 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.crt.9202ceb3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.41 192.168.39.42 192.168.39.202 192.168.39.254]
	I0415 23:57:31.304099   25488 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.crt.9202ceb3 ...
	I0415 23:57:31.304128   25488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.crt.9202ceb3: {Name:mk5d93d5502ef9674a3a4ff2b2b025bc5f57c78a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:57:31.304287   25488 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.key.9202ceb3 ...
	I0415 23:57:31.304300   25488 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.key.9202ceb3: {Name:mk6251073914dc8969df401bc5afd5ce24c8c412 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:57:31.304366   25488 certs.go:381] copying /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.crt.9202ceb3 -> /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.crt
	I0415 23:57:31.304482   25488 certs.go:385] copying /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.key.9202ceb3 -> /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.key
	I0415 23:57:31.304596   25488 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/proxy-client.key
	I0415 23:57:31.304611   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0415 23:57:31.304622   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0415 23:57:31.304636   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0415 23:57:31.304648   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0415 23:57:31.304660   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0415 23:57:31.304670   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0415 23:57:31.304680   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0415 23:57:31.304689   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0415 23:57:31.304729   25488 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem (1338 bytes)
	W0415 23:57:31.304758   25488 certs.go:480] ignoring /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897_empty.pem, impossibly tiny 0 bytes
	I0415 23:57:31.304769   25488 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem (1679 bytes)
	I0415 23:57:31.304792   25488 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem (1082 bytes)
	I0415 23:57:31.304813   25488 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem (1123 bytes)
	I0415 23:57:31.304834   25488 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem (1675 bytes)
	I0415 23:57:31.304868   25488 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem (1708 bytes)
	I0415 23:57:31.304893   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0415 23:57:31.304906   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem -> /usr/share/ca-certificates/14897.pem
	I0415 23:57:31.304920   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem -> /usr/share/ca-certificates/148972.pem
	I0415 23:57:31.304949   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0415 23:57:31.308077   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:57:31.308484   25488 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0415 23:57:31.308508   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:57:31.308670   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0415 23:57:31.308937   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0415 23:57:31.309073   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0415 23:57:31.309254   25488 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782/id_rsa Username:docker}
	I0415 23:57:31.389417   25488 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0415 23:57:31.395013   25488 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0415 23:57:31.407739   25488 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0415 23:57:31.412369   25488 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0415 23:57:31.424670   25488 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0415 23:57:31.434247   25488 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0415 23:57:31.452257   25488 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0415 23:57:31.457507   25488 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0415 23:57:31.469375   25488 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0415 23:57:31.474123   25488 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0415 23:57:31.485208   25488 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0415 23:57:31.494735   25488 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0415 23:57:31.507421   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0415 23:57:31.533866   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0415 23:57:31.558206   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0415 23:57:31.581053   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0415 23:57:31.605680   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0415 23:57:31.630618   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0415 23:57:31.655170   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0415 23:57:31.679377   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0415 23:57:31.703003   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0415 23:57:31.728280   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem --> /usr/share/ca-certificates/14897.pem (1338 bytes)
	I0415 23:57:31.752270   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /usr/share/ca-certificates/148972.pem (1708 bytes)
	I0415 23:57:31.776104   25488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0415 23:57:31.792843   25488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0415 23:57:31.810183   25488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0415 23:57:31.827552   25488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0415 23:57:31.845300   25488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0415 23:57:31.862110   25488 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0415 23:57:31.879497   25488 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (758 bytes)
	I0415 23:57:31.899791   25488 ssh_runner.go:195] Run: openssl version
	I0415 23:57:31.906073   25488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148972.pem && ln -fs /usr/share/ca-certificates/148972.pem /etc/ssl/certs/148972.pem"
	I0415 23:57:31.918899   25488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148972.pem
	I0415 23:57:31.923589   25488 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 23:49 /usr/share/ca-certificates/148972.pem
	I0415 23:57:31.923638   25488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148972.pem
	I0415 23:57:31.929638   25488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148972.pem /etc/ssl/certs/3ec20f2e.0"
	I0415 23:57:31.941146   25488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0415 23:57:31.951985   25488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0415 23:57:31.956780   25488 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0415 23:57:31.956834   25488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0415 23:57:31.962575   25488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0415 23:57:31.974207   25488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14897.pem && ln -fs /usr/share/ca-certificates/14897.pem /etc/ssl/certs/14897.pem"
	I0415 23:57:31.987019   25488 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14897.pem
	I0415 23:57:31.991740   25488 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 23:49 /usr/share/ca-certificates/14897.pem
	I0415 23:57:31.991783   25488 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14897.pem
	I0415 23:57:31.998013   25488 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14897.pem /etc/ssl/certs/51391683.0"
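	The test/ln pairs above follow the standard OpenSSL hashed-symlink convention for trust stores; the hashes 3ec20f2e, b5213941 and 51391683 seen above are subject hashes produced by openssl x509 -hash. A minimal sketch of the same idea for one certificate:
	# compute the subject hash OpenSSL uses for CA lookup, then expose the cert under that name
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"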
	I0415 23:57:32.009793   25488 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0415 23:57:32.014183   25488 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0415 23:57:32.014229   25488 kubeadm.go:928] updating node {m03 192.168.39.202 8443 v1.29.3 crio true true} ...
	I0415 23:57:32.014309   25488 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-694782-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.202
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-694782 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0415 23:57:32.014333   25488 kube-vip.go:111] generating kube-vip config ...
	I0415 23:57:32.014394   25488 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0415 23:57:32.031987   25488 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0415 23:57:32.032055   25488 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0415 23:57:32.032107   25488 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0415 23:57:32.042291   25488 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.29.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.29.3': No such file or directory
	
	Initiating transfer...
	I0415 23:57:32.042338   25488 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.29.3
	I0415 23:57:32.051851   25488 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet.sha256
	I0415 23:57:32.051901   25488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 23:57:32.051852   25488 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubectl.sha256
	I0415 23:57:32.051974   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/linux/amd64/v1.29.3/kubectl -> /var/lib/minikube/binaries/v1.29.3/kubectl
	I0415 23:57:32.051853   25488 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubeadm.sha256
	I0415 23:57:32.051998   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/linux/amd64/v1.29.3/kubeadm -> /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0415 23:57:32.052053   25488 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl
	I0415 23:57:32.052075   25488 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm
	I0415 23:57:32.066673   25488 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/linux/amd64/v1.29.3/kubelet -> /var/lib/minikube/binaries/v1.29.3/kubelet
	I0415 23:57:32.066736   25488 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubectl': No such file or directory
	I0415 23:57:32.066755   25488 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet
	I0415 23:57:32.066762   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/cache/linux/amd64/v1.29.3/kubectl --> /var/lib/minikube/binaries/v1.29.3/kubectl (49799168 bytes)
	I0415 23:57:32.066776   25488 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubeadm': No such file or directory
	I0415 23:57:32.066795   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/cache/linux/amd64/v1.29.3/kubeadm --> /var/lib/minikube/binaries/v1.29.3/kubeadm (48340992 bytes)
	I0415 23:57:32.079834   25488 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.29.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.29.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.29.3/kubelet': No such file or directory
	I0415 23:57:32.079867   25488 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/cache/linux/amd64/v1.29.3/kubelet --> /var/lib/minikube/binaries/v1.29.3/kubelet (111919104 bytes)
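	Here the kubelet/kubeadm/kubectl binaries are copied from the runner's cache; with a cold cache minikube would instead fetch them from the dl.k8s.io URLs listed above, verified against the published checksum. A rough manual equivalent for one binary, assuming the same release and target path (the sha256sum step is illustrative and not taken from this log):
	curl -fLO https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet
	curl -fL https://dl.k8s.io/release/v1.29.3/bin/linux/amd64/kubelet.sha256 -o kubelet.sha256
	# the .sha256 file contains only the digest, so pair it with the filename for verification
	echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check
	sudo install -m 0755 kubelet /var/lib/minikube/binaries/v1.29.3/kubelet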
	I0415 23:57:33.032463   25488 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0415 23:57:33.043671   25488 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0415 23:57:33.061642   25488 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0415 23:57:33.078893   25488 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0415 23:57:33.096120   25488 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0415 23:57:33.100090   25488 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0415 23:57:33.112577   25488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 23:57:33.247188   25488 ssh_runner.go:195] Run: sudo systemctl start kubelet
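	With the kubelet unit files and the kube-vip manifest written above, kubelet runs kube-vip as a static pod out of /etc/kubernetes/manifests. One way to confirm this on the node, assuming the crictl setup shown earlier in this log (illustrative only; these commands were not run by the test):
	sudo crictl pods --name kube-vip
	sudo crictl ps --name kube-vip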
	I0415 23:57:33.263863   25488 host.go:66] Checking if "ha-694782" exists ...
	I0415 23:57:33.264226   25488 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:57:33.264289   25488 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:57:33.279497   25488 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41213
	I0415 23:57:33.279968   25488 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:57:33.280440   25488 main.go:141] libmachine: Using API Version  1
	I0415 23:57:33.280468   25488 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:57:33.280810   25488 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:57:33.281028   25488 main.go:141] libmachine: (ha-694782) Calling .DriverName
	I0415 23:57:33.281205   25488 start.go:316] joinCluster: &{Name:ha-694782 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-694782 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.42 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 23:57:33.281342   25488 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0415 23:57:33.281366   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0415 23:57:33.284394   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:57:33.284817   25488 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0415 23:57:33.284847   25488 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0415 23:57:33.284935   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0415 23:57:33.285116   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0415 23:57:33.285279   25488 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0415 23:57:33.285444   25488 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782/id_rsa Username:docker}
	I0415 23:57:33.458741   25488 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0415 23:57:33.458789   25488 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9d0d7k.0di6w9ehac36jvk9 --discovery-token-ca-cert-hash sha256:b25013486716312e69a135916990864bd549ec06035b8322dd39250241b50fde --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-694782-m03 --control-plane --apiserver-advertise-address=192.168.39.202 --apiserver-bind-port=8443"
	I0415 23:57:58.852667   25488 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9d0d7k.0di6w9ehac36jvk9 --discovery-token-ca-cert-hash sha256:b25013486716312e69a135916990864bd549ec06035b8322dd39250241b50fde --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-694782-m03 --control-plane --apiserver-advertise-address=192.168.39.202 --apiserver-bind-port=8443": (25.393834117s)
	I0415 23:57:58.852715   25488 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0415 23:57:59.273153   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-694782-m03 minikube.k8s.io/updated_at=2024_04_15T23_57_59_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=e98a202b00bb48daab8bbee2f0c7bcf1556f8388 minikube.k8s.io/name=ha-694782 minikube.k8s.io/primary=false
	I0415 23:57:59.399817   25488 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-694782-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0415 23:57:59.532722   25488 start.go:318] duration metric: took 26.251512438s to joinCluster
	I0415 23:57:59.532809   25488 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0415 23:57:59.534248   25488 out.go:177] * Verifying Kubernetes components...
	I0415 23:57:59.533191   25488 config.go:182] Loaded profile config "ha-694782": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0415 23:57:59.535610   25488 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0415 23:57:59.779110   25488 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0415 23:57:59.806384   25488 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0415 23:57:59.806729   25488 kapi.go:59] client config for ha-694782: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/client.crt", KeyFile:"/home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/client.key", CAFile:"/home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5e000), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0415 23:57:59.806809   25488 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.41:8443
	I0415 23:57:59.807136   25488 node_ready.go:35] waiting up to 6m0s for node "ha-694782-m03" to be "Ready" ...
	I0415 23:57:59.807232   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:57:59.807245   25488 round_trippers.go:469] Request Headers:
	I0415 23:57:59.807255   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:57:59.807260   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:57:59.811378   25488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 23:58:00.307538   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:00.307565   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:00.307577   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:00.307583   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:00.311353   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:00.807659   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:00.807687   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:00.807697   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:00.807702   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:00.811898   25488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 23:58:01.307747   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:01.307786   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:01.307808   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:01.307814   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:01.311831   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:01.808032   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:01.808055   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:01.808074   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:01.808079   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:01.811940   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:01.812513   25488 node_ready.go:53] node "ha-694782-m03" has status "Ready":"False"
	I0415 23:58:02.308451   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:02.308503   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:02.308521   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:02.308531   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:02.312471   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:02.807595   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:02.807623   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:02.807635   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:02.807642   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:02.811097   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:03.307326   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:03.307343   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:03.307351   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:03.307355   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:03.313013   25488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 23:58:03.807460   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:03.807488   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:03.807499   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:03.807507   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:03.816561   25488 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0415 23:58:03.817615   25488 node_ready.go:53] node "ha-694782-m03" has status "Ready":"False"
	I0415 23:58:04.308153   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:04.308181   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:04.308193   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:04.308200   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:04.312664   25488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 23:58:04.807980   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:04.808001   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:04.808008   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:04.808012   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:04.811494   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:04.814230   25488 node_ready.go:49] node "ha-694782-m03" has status "Ready":"True"
	I0415 23:58:04.814260   25488 node_ready.go:38] duration metric: took 5.007094145s for node "ha-694782-m03" to be "Ready" ...
	I0415 23:58:04.814268   25488 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0415 23:58:04.814316   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods
	I0415 23:58:04.814324   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:04.814331   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:04.814338   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:04.820988   25488 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0415 23:58:04.828368   25488 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-4sgv4" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:04.828444   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-4sgv4
	I0415 23:58:04.828455   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:04.828465   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:04.828472   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:04.831410   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:58:04.832107   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782
	I0415 23:58:04.832120   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:04.832127   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:04.832132   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:04.834561   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:58:04.835134   25488 pod_ready.go:92] pod "coredns-76f75df574-4sgv4" in "kube-system" namespace has status "Ready":"True"
	I0415 23:58:04.835156   25488 pod_ready.go:81] duration metric: took 6.766914ms for pod "coredns-76f75df574-4sgv4" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:04.835167   25488 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-zdc8q" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:04.835226   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-zdc8q
	I0415 23:58:04.835237   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:04.835247   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:04.835257   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:04.837959   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:58:04.838602   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782
	I0415 23:58:04.838621   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:04.838632   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:04.838639   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:04.841400   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:58:04.841779   25488 pod_ready.go:92] pod "coredns-76f75df574-zdc8q" in "kube-system" namespace has status "Ready":"True"
	I0415 23:58:04.841793   25488 pod_ready.go:81] duration metric: took 6.61489ms for pod "coredns-76f75df574-zdc8q" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:04.841800   25488 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-694782" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:04.841856   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782
	I0415 23:58:04.841872   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:04.841881   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:04.841886   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:04.844909   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:04.845376   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782
	I0415 23:58:04.845394   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:04.845400   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:04.845404   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:04.848960   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:04.849416   25488 pod_ready.go:92] pod "etcd-ha-694782" in "kube-system" namespace has status "Ready":"True"
	I0415 23:58:04.849435   25488 pod_ready.go:81] duration metric: took 7.629414ms for pod "etcd-ha-694782" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:04.849446   25488 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-694782-m02" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:04.849509   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m02
	I0415 23:58:04.849519   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:04.849533   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:04.849542   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:04.852335   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:58:04.853344   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:58:04.853360   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:04.853369   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:04.853374   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:04.855966   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:58:04.856466   25488 pod_ready.go:92] pod "etcd-ha-694782-m02" in "kube-system" namespace has status "Ready":"True"
	I0415 23:58:04.856480   25488 pod_ready.go:81] duration metric: took 7.024362ms for pod "etcd-ha-694782-m02" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:04.856487   25488 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-694782-m03" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:05.008910   25488 request.go:629] Waited for 152.326528ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m03
	I0415 23:58:05.008964   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m03
	I0415 23:58:05.008970   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:05.008980   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:05.008986   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:05.012691   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:05.208923   25488 request.go:629] Waited for 195.388645ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:05.208980   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:05.208985   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:05.208993   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:05.209001   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:05.212927   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:05.408693   25488 request.go:629] Waited for 51.682685ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m03
	I0415 23:58:05.408752   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m03
	I0415 23:58:05.408756   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:05.408763   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:05.408767   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:05.412168   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:05.608004   25488 request.go:629] Waited for 195.084833ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:05.608453   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:05.608468   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:05.608484   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:05.608493   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:05.612135   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:05.857539   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m03
	I0415 23:58:05.857558   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:05.857564   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:05.857569   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:05.860698   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:06.008046   25488 request.go:629] Waited for 146.154076ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:06.008116   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:06.008123   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:06.008132   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:06.008138   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:06.011706   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:06.356794   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m03
	I0415 23:58:06.356828   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:06.356839   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:06.356849   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:06.360307   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:06.408258   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:06.408280   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:06.408290   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:06.408297   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:06.412079   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:06.857333   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m03
	I0415 23:58:06.857363   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:06.857371   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:06.857374   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:06.861184   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:06.862122   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:06.862138   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:06.862145   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:06.862149   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:06.864699   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:58:06.865460   25488 pod_ready.go:102] pod "etcd-ha-694782-m03" in "kube-system" namespace has status "Ready":"False"
	I0415 23:58:07.357287   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m03
	I0415 23:58:07.357307   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:07.357316   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:07.357319   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:07.360418   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:07.361273   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:07.361289   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:07.361297   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:07.361301   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:07.363756   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:58:07.856730   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m03
	I0415 23:58:07.856749   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:07.856757   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:07.856762   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:07.860157   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:07.861088   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:07.861106   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:07.861116   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:07.861123   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:07.863692   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:58:08.356931   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m03
	I0415 23:58:08.356951   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:08.356959   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:08.356963   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:08.360395   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:08.361249   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:08.361264   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:08.361271   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:08.361275   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:08.364172   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:58:08.857564   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m03
	I0415 23:58:08.857587   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:08.857595   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:08.857599   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:08.860797   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:08.861802   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:08.861816   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:08.861822   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:08.861825   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:08.864659   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:58:09.356709   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m03
	I0415 23:58:09.356737   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:09.356747   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:09.356754   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:09.360029   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:09.360986   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:09.361000   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:09.361009   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:09.361016   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:09.364149   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:09.364728   25488 pod_ready.go:102] pod "etcd-ha-694782-m03" in "kube-system" namespace has status "Ready":"False"
	I0415 23:58:09.857096   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m03
	I0415 23:58:09.857124   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:09.857132   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:09.857136   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:09.860639   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:09.861518   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:09.861533   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:09.861540   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:09.861544   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:09.864810   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:10.357501   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m03
	I0415 23:58:10.357527   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:10.357538   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:10.357546   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:10.362709   25488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 23:58:10.363605   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:10.363623   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:10.363630   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:10.363634   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:10.366431   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:58:10.857472   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m03
	I0415 23:58:10.857495   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:10.857506   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:10.857512   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:10.860802   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:10.861768   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:10.861784   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:10.861791   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:10.861795   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:10.865015   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:11.357068   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m03
	I0415 23:58:11.357092   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:11.357103   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:11.357110   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:11.362218   25488 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0415 23:58:11.363953   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:11.363969   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:11.363979   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:11.363983   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:11.366728   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:58:11.367333   25488 pod_ready.go:102] pod "etcd-ha-694782-m03" in "kube-system" namespace has status "Ready":"False"
	I0415 23:58:11.857128   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m03
	I0415 23:58:11.857148   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:11.857177   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:11.857182   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:11.861447   25488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 23:58:11.862545   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:11.862568   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:11.862579   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:11.862585   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:11.868678   25488 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0415 23:58:12.357180   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m03
	I0415 23:58:12.357203   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:12.357214   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:12.357223   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:12.360930   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:12.361969   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:12.361988   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:12.361996   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:12.362002   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:12.366440   25488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 23:58:12.856785   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m03
	I0415 23:58:12.856812   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:12.856821   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:12.856824   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:12.861311   25488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 23:58:12.862288   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:12.862304   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:12.862312   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:12.862316   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:12.865415   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:13.356956   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m03
	I0415 23:58:13.356989   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:13.357015   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:13.357019   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:13.360924   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:13.361964   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:13.361980   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:13.361987   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:13.361990   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:13.364986   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:58:13.857583   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/etcd-ha-694782-m03
	I0415 23:58:13.857607   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:13.857616   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:13.857620   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:13.861613   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:13.862793   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:13.862808   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:13.862815   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:13.862821   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:13.870453   25488 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0415 23:58:13.871146   25488 pod_ready.go:92] pod "etcd-ha-694782-m03" in "kube-system" namespace has status "Ready":"True"
	I0415 23:58:13.871170   25488 pod_ready.go:81] duration metric: took 9.014674778s for pod "etcd-ha-694782-m03" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:13.871193   25488 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-694782" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:13.871256   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-694782
	I0415 23:58:13.871266   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:13.871278   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:13.871288   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:13.875453   25488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 23:58:13.876310   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782
	I0415 23:58:13.876328   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:13.876338   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:13.876342   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:13.882640   25488 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0415 23:58:13.883207   25488 pod_ready.go:92] pod "kube-apiserver-ha-694782" in "kube-system" namespace has status "Ready":"True"
	I0415 23:58:13.883227   25488 pod_ready.go:81] duration metric: took 12.024417ms for pod "kube-apiserver-ha-694782" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:13.883241   25488 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-694782-m02" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:13.883318   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-694782-m02
	I0415 23:58:13.883327   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:13.883337   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:13.883341   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:13.886414   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:13.887078   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:58:13.887096   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:13.887104   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:13.887110   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:13.890590   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:13.891684   25488 pod_ready.go:92] pod "kube-apiserver-ha-694782-m02" in "kube-system" namespace has status "Ready":"True"
	I0415 23:58:13.891710   25488 pod_ready.go:81] duration metric: took 8.453893ms for pod "kube-apiserver-ha-694782-m02" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:13.891730   25488 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-694782-m03" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:13.891797   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-694782-m03
	I0415 23:58:13.891809   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:13.891818   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:13.891824   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:13.896748   25488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 23:58:13.897402   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:13.897418   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:13.897426   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:13.897431   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:13.900299   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:58:13.900711   25488 pod_ready.go:92] pod "kube-apiserver-ha-694782-m03" in "kube-system" namespace has status "Ready":"True"
	I0415 23:58:13.900730   25488 pod_ready.go:81] duration metric: took 8.992398ms for pod "kube-apiserver-ha-694782-m03" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:13.900743   25488 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-694782" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:13.900795   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-694782
	I0415 23:58:13.900805   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:13.900815   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:13.900821   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:13.903736   25488 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0415 23:58:14.008731   25488 request.go:629] Waited for 104.30565ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/nodes/ha-694782
	I0415 23:58:14.008808   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782
	I0415 23:58:14.008816   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:14.008832   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:14.008846   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:14.015570   25488 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0415 23:58:14.016940   25488 pod_ready.go:92] pod "kube-controller-manager-ha-694782" in "kube-system" namespace has status "Ready":"True"
	I0415 23:58:14.016963   25488 pod_ready.go:81] duration metric: took 116.211401ms for pod "kube-controller-manager-ha-694782" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:14.016976   25488 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-694782-m02" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:14.208414   25488 request.go:629] Waited for 191.362007ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-694782-m02
	I0415 23:58:14.208468   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-694782-m02
	I0415 23:58:14.208473   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:14.208480   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:14.208485   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:14.212332   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:14.408142   25488 request.go:629] Waited for 195.052046ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:58:14.408209   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:58:14.408214   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:14.408221   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:14.408225   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:14.412165   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:14.412870   25488 pod_ready.go:92] pod "kube-controller-manager-ha-694782-m02" in "kube-system" namespace has status "Ready":"True"
	I0415 23:58:14.412887   25488 pod_ready.go:81] duration metric: took 395.903963ms for pod "kube-controller-manager-ha-694782-m02" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:14.412896   25488 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-694782-m03" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:14.609053   25488 request.go:629] Waited for 196.088761ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-694782-m03
	I0415 23:58:14.609129   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-694782-m03
	I0415 23:58:14.609135   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:14.609143   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:14.609148   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:14.613237   25488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 23:58:14.808485   25488 request.go:629] Waited for 194.371577ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:14.808555   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:14.808560   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:14.808567   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:14.808571   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:14.812404   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:14.812916   25488 pod_ready.go:92] pod "kube-controller-manager-ha-694782-m03" in "kube-system" namespace has status "Ready":"True"
	I0415 23:58:14.812938   25488 pod_ready.go:81] duration metric: took 400.033295ms for pod "kube-controller-manager-ha-694782-m03" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:14.812950   25488 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-45tb9" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:15.008039   25488 request.go:629] Waited for 195.01746ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-proxy-45tb9
	I0415 23:58:15.008127   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-proxy-45tb9
	I0415 23:58:15.008133   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:15.008145   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:15.008155   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:15.011680   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:15.208709   25488 request.go:629] Waited for 196.355673ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:15.208773   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:15.208782   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:15.208792   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:15.208808   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:15.211907   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:15.212667   25488 pod_ready.go:92] pod "kube-proxy-45tb9" in "kube-system" namespace has status "Ready":"True"
	I0415 23:58:15.212683   25488 pod_ready.go:81] duration metric: took 399.725981ms for pod "kube-proxy-45tb9" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:15.212692   25488 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-d46v5" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:15.408191   25488 request.go:629] Waited for 195.445031ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d46v5
	I0415 23:58:15.408240   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d46v5
	I0415 23:58:15.408245   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:15.408253   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:15.408258   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:15.412741   25488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 23:58:15.608519   25488 request.go:629] Waited for 194.910147ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/nodes/ha-694782
	I0415 23:58:15.608582   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782
	I0415 23:58:15.608603   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:15.608614   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:15.608626   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:15.612026   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:15.612855   25488 pod_ready.go:92] pod "kube-proxy-d46v5" in "kube-system" namespace has status "Ready":"True"
	I0415 23:58:15.612875   25488 pod_ready.go:81] duration metric: took 400.176563ms for pod "kube-proxy-d46v5" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:15.612889   25488 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vbfhn" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:15.808585   25488 request.go:629] Waited for 195.634248ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vbfhn
	I0415 23:58:15.808673   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vbfhn
	I0415 23:58:15.808685   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:15.808702   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:15.808712   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:15.812387   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:16.008600   25488 request.go:629] Waited for 195.378151ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:58:16.008669   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:58:16.008674   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:16.008681   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:16.008687   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:16.012254   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:16.013049   25488 pod_ready.go:92] pod "kube-proxy-vbfhn" in "kube-system" namespace has status "Ready":"True"
	I0415 23:58:16.013073   25488 pod_ready.go:81] duration metric: took 400.175319ms for pod "kube-proxy-vbfhn" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:16.013085   25488 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-694782" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:16.208823   25488 request.go:629] Waited for 195.646579ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-694782
	I0415 23:58:16.208899   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-694782
	I0415 23:58:16.208911   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:16.208922   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:16.208931   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:16.212616   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:16.408664   25488 request.go:629] Waited for 195.390521ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/nodes/ha-694782
	I0415 23:58:16.408716   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782
	I0415 23:58:16.408722   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:16.408728   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:16.408733   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:16.412118   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:16.412978   25488 pod_ready.go:92] pod "kube-scheduler-ha-694782" in "kube-system" namespace has status "Ready":"True"
	I0415 23:58:16.412999   25488 pod_ready.go:81] duration metric: took 399.906718ms for pod "kube-scheduler-ha-694782" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:16.413008   25488 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-694782-m02" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:16.608025   25488 request.go:629] Waited for 194.963642ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-694782-m02
	I0415 23:58:16.608083   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-694782-m02
	I0415 23:58:16.608089   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:16.608115   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:16.608135   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:16.612560   25488 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0415 23:58:16.808660   25488 request.go:629] Waited for 195.358773ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:58:16.808727   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m02
	I0415 23:58:16.808735   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:16.808744   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:16.808755   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:16.812045   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:16.812690   25488 pod_ready.go:92] pod "kube-scheduler-ha-694782-m02" in "kube-system" namespace has status "Ready":"True"
	I0415 23:58:16.812708   25488 pod_ready.go:81] duration metric: took 399.693364ms for pod "kube-scheduler-ha-694782-m02" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:16.812717   25488 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-694782-m03" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:17.008353   25488 request.go:629] Waited for 195.585806ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-694782-m03
	I0415 23:58:17.008430   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-694782-m03
	I0415 23:58:17.008444   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:17.008451   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:17.008458   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:17.011870   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:17.209014   25488 request.go:629] Waited for 196.370317ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:17.209072   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes/ha-694782-m03
	I0415 23:58:17.209079   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:17.209088   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:17.209094   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:17.212479   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:17.213171   25488 pod_ready.go:92] pod "kube-scheduler-ha-694782-m03" in "kube-system" namespace has status "Ready":"True"
	I0415 23:58:17.213195   25488 pod_ready.go:81] duration metric: took 400.470661ms for pod "kube-scheduler-ha-694782-m03" in "kube-system" namespace to be "Ready" ...
	I0415 23:58:17.213208   25488 pod_ready.go:38] duration metric: took 12.398931095s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0415 23:58:17.213224   25488 api_server.go:52] waiting for apiserver process to appear ...
	I0415 23:58:17.213273   25488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0415 23:58:17.228851   25488 api_server.go:72] duration metric: took 17.69600783s to wait for apiserver process to appear ...
	I0415 23:58:17.228872   25488 api_server.go:88] waiting for apiserver healthz status ...
	I0415 23:58:17.228888   25488 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I0415 23:58:17.234999   25488 api_server.go:279] https://192.168.39.41:8443/healthz returned 200:
	ok
	I0415 23:58:17.235050   25488 round_trippers.go:463] GET https://192.168.39.41:8443/version
	I0415 23:58:17.235054   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:17.235061   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:17.235069   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:17.236121   25488 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0415 23:58:17.236273   25488 api_server.go:141] control plane version: v1.29.3
	I0415 23:58:17.236289   25488 api_server.go:131] duration metric: took 7.411501ms to wait for apiserver health ...
	I0415 23:58:17.236296   25488 system_pods.go:43] waiting for kube-system pods to appear ...
	I0415 23:58:17.408739   25488 request.go:629] Waited for 172.34899ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods
	I0415 23:58:17.408802   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods
	I0415 23:58:17.408810   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:17.408821   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:17.408832   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:17.416750   25488 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0415 23:58:17.424979   25488 system_pods.go:59] 24 kube-system pods found
	I0415 23:58:17.425008   25488 system_pods.go:61] "coredns-76f75df574-4sgv4" [3c1f65c0-37b2-4c88-879b-68297e989d44] Running
	I0415 23:58:17.425014   25488 system_pods.go:61] "coredns-76f75df574-zdc8q" [6a7e1a29-8c75-4d1f-978b-471ac0adb888] Running
	I0415 23:58:17.425019   25488 system_pods.go:61] "etcd-ha-694782" [ca5444f7-8fe5-4165-a01b-9c9adba4ede0] Running
	I0415 23:58:17.425024   25488 system_pods.go:61] "etcd-ha-694782-m02" [821ace46-8aac-46ae-9e3f-7bc144bb46a9] Running
	I0415 23:58:17.425030   25488 system_pods.go:61] "etcd-ha-694782-m03" [ca51c45c-4bbf-48d8-91bd-f95a2c7ef894] Running
	I0415 23:58:17.425035   25488 system_pods.go:61] "kindnet-99cs7" [5b3bc7e7-fd85-4dc7-ba53-c74fe0d213e3] Running
	I0415 23:58:17.425041   25488 system_pods.go:61] "kindnet-hln6n" [da484432-677e-49d3-b01a-95b6392cceb9] Running
	I0415 23:58:17.425046   25488 system_pods.go:61] "kindnet-qvp8b" [04002e18-2673-4067-a10e-64f40e3c60c8] Running
	I0415 23:58:17.425054   25488 system_pods.go:61] "kube-apiserver-ha-694782" [42680d27-9926-4b99-ae33-61a37afe0207] Running
	I0415 23:58:17.425060   25488 system_pods.go:61] "kube-apiserver-ha-694782-m02" [5db36efa-244b-47e0-ba6f-93826468c168] Running
	I0415 23:58:17.425065   25488 system_pods.go:61] "kube-apiserver-ha-694782-m03" [1b573124-a8cd-4227-abfc-9f299843ec67] Running
	I0415 23:58:17.425072   25488 system_pods.go:61] "kube-controller-manager-ha-694782" [1832df1f-ac45-427c-93fc-04630558d7d1] Running
	I0415 23:58:17.425077   25488 system_pods.go:61] "kube-controller-manager-ha-694782-m02" [923c744c-e27c-468d-a14f-2a1de579df73] Running
	I0415 23:58:17.425083   25488 system_pods.go:61] "kube-controller-manager-ha-694782-m03" [b6b37886-5ac0-4e36-aef1-5df06f761cca] Running
	I0415 23:58:17.425092   25488 system_pods.go:61] "kube-proxy-45tb9" [c9f03669-c803-4ef2-9649-653cbd5ed50e] Running
	I0415 23:58:17.425098   25488 system_pods.go:61] "kube-proxy-d46v5" [c92235e6-1639-45c0-a92b-bf0cc32bea22] Running
	I0415 23:58:17.425105   25488 system_pods.go:61] "kube-proxy-vbfhn" [131197dd-aa5b-48c7-a0e8-d1772432b28c] Running
	I0415 23:58:17.425111   25488 system_pods.go:61] "kube-scheduler-ha-694782" [8e2ff44e-34ef-4cb6-9734-62004de985b8] Running
	I0415 23:58:17.425119   25488 system_pods.go:61] "kube-scheduler-ha-694782-m02" [e2452893-9792-41e9-9d9e-e2f66bc07303] Running
	I0415 23:58:17.425125   25488 system_pods.go:61] "kube-scheduler-ha-694782-m03" [9fb6255b-36f4-4f5f-8f20-3e7389ddbb55] Running
	I0415 23:58:17.425132   25488 system_pods.go:61] "kube-vip-ha-694782" [a8ffb1b9-f55e-4efe-b9a1-7e58a341a2f0] Running
	I0415 23:58:17.425138   25488 system_pods.go:61] "kube-vip-ha-694782-m02" [036ef70f-0af1-42a5-b0bb-5622785ff031] Running
	I0415 23:58:17.425143   25488 system_pods.go:61] "kube-vip-ha-694782-m03" [fc934534-c2d6-4454-93e1-8d8e2b791c72] Running
	I0415 23:58:17.425151   25488 system_pods.go:61] "storage-provisioner" [bea9c166-5f83-473f-8f01-335ea1436dad] Running
	I0415 23:58:17.425171   25488 system_pods.go:74] duration metric: took 188.868987ms to wait for pod list to return data ...
	I0415 23:58:17.425183   25488 default_sa.go:34] waiting for default service account to be created ...
	I0415 23:58:17.608582   25488 request.go:629] Waited for 183.32347ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/namespaces/default/serviceaccounts
	I0415 23:58:17.608653   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/default/serviceaccounts
	I0415 23:58:17.608661   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:17.608671   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:17.608677   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:17.612202   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:17.612534   25488 default_sa.go:45] found service account: "default"
	I0415 23:58:17.612555   25488 default_sa.go:55] duration metric: took 187.361301ms for default service account to be created ...
	I0415 23:58:17.612564   25488 system_pods.go:116] waiting for k8s-apps to be running ...
	I0415 23:58:17.808959   25488 request.go:629] Waited for 196.315133ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods
	I0415 23:58:17.809036   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/namespaces/kube-system/pods
	I0415 23:58:17.809052   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:17.809062   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:17.809069   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:17.816661   25488 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0415 23:58:17.824338   25488 system_pods.go:86] 24 kube-system pods found
	I0415 23:58:17.824363   25488 system_pods.go:89] "coredns-76f75df574-4sgv4" [3c1f65c0-37b2-4c88-879b-68297e989d44] Running
	I0415 23:58:17.824370   25488 system_pods.go:89] "coredns-76f75df574-zdc8q" [6a7e1a29-8c75-4d1f-978b-471ac0adb888] Running
	I0415 23:58:17.824376   25488 system_pods.go:89] "etcd-ha-694782" [ca5444f7-8fe5-4165-a01b-9c9adba4ede0] Running
	I0415 23:58:17.824383   25488 system_pods.go:89] "etcd-ha-694782-m02" [821ace46-8aac-46ae-9e3f-7bc144bb46a9] Running
	I0415 23:58:17.824394   25488 system_pods.go:89] "etcd-ha-694782-m03" [ca51c45c-4bbf-48d8-91bd-f95a2c7ef894] Running
	I0415 23:58:17.824405   25488 system_pods.go:89] "kindnet-99cs7" [5b3bc7e7-fd85-4dc7-ba53-c74fe0d213e3] Running
	I0415 23:58:17.824412   25488 system_pods.go:89] "kindnet-hln6n" [da484432-677e-49d3-b01a-95b6392cceb9] Running
	I0415 23:58:17.824422   25488 system_pods.go:89] "kindnet-qvp8b" [04002e18-2673-4067-a10e-64f40e3c60c8] Running
	I0415 23:58:17.824431   25488 system_pods.go:89] "kube-apiserver-ha-694782" [42680d27-9926-4b99-ae33-61a37afe0207] Running
	I0415 23:58:17.824441   25488 system_pods.go:89] "kube-apiserver-ha-694782-m02" [5db36efa-244b-47e0-ba6f-93826468c168] Running
	I0415 23:58:17.824449   25488 system_pods.go:89] "kube-apiserver-ha-694782-m03" [1b573124-a8cd-4227-abfc-9f299843ec67] Running
	I0415 23:58:17.824459   25488 system_pods.go:89] "kube-controller-manager-ha-694782" [1832df1f-ac45-427c-93fc-04630558d7d1] Running
	I0415 23:58:17.824467   25488 system_pods.go:89] "kube-controller-manager-ha-694782-m02" [923c744c-e27c-468d-a14f-2a1de579df73] Running
	I0415 23:58:17.824475   25488 system_pods.go:89] "kube-controller-manager-ha-694782-m03" [b6b37886-5ac0-4e36-aef1-5df06f761cca] Running
	I0415 23:58:17.824484   25488 system_pods.go:89] "kube-proxy-45tb9" [c9f03669-c803-4ef2-9649-653cbd5ed50e] Running
	I0415 23:58:17.824494   25488 system_pods.go:89] "kube-proxy-d46v5" [c92235e6-1639-45c0-a92b-bf0cc32bea22] Running
	I0415 23:58:17.824500   25488 system_pods.go:89] "kube-proxy-vbfhn" [131197dd-aa5b-48c7-a0e8-d1772432b28c] Running
	I0415 23:58:17.824511   25488 system_pods.go:89] "kube-scheduler-ha-694782" [8e2ff44e-34ef-4cb6-9734-62004de985b8] Running
	I0415 23:58:17.824521   25488 system_pods.go:89] "kube-scheduler-ha-694782-m02" [e2452893-9792-41e9-9d9e-e2f66bc07303] Running
	I0415 23:58:17.824529   25488 system_pods.go:89] "kube-scheduler-ha-694782-m03" [9fb6255b-36f4-4f5f-8f20-3e7389ddbb55] Running
	I0415 23:58:17.824538   25488 system_pods.go:89] "kube-vip-ha-694782" [a8ffb1b9-f55e-4efe-b9a1-7e58a341a2f0] Running
	I0415 23:58:17.824545   25488 system_pods.go:89] "kube-vip-ha-694782-m02" [036ef70f-0af1-42a5-b0bb-5622785ff031] Running
	I0415 23:58:17.824553   25488 system_pods.go:89] "kube-vip-ha-694782-m03" [fc934534-c2d6-4454-93e1-8d8e2b791c72] Running
	I0415 23:58:17.824560   25488 system_pods.go:89] "storage-provisioner" [bea9c166-5f83-473f-8f01-335ea1436dad] Running
	I0415 23:58:17.824570   25488 system_pods.go:126] duration metric: took 211.994917ms to wait for k8s-apps to be running ...
	I0415 23:58:17.824583   25488 system_svc.go:44] waiting for kubelet service to be running ....
	I0415 23:58:17.824637   25488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0415 23:58:17.842104   25488 system_svc.go:56] duration metric: took 17.514994ms WaitForService to wait for kubelet
	I0415 23:58:17.842131   25488 kubeadm.go:576] duration metric: took 18.309289878s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0415 23:58:17.842152   25488 node_conditions.go:102] verifying NodePressure condition ...
	I0415 23:58:18.008527   25488 request.go:629] Waited for 166.310678ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.41:8443/api/v1/nodes
	I0415 23:58:18.008611   25488 round_trippers.go:463] GET https://192.168.39.41:8443/api/v1/nodes
	I0415 23:58:18.008618   25488 round_trippers.go:469] Request Headers:
	I0415 23:58:18.008629   25488 round_trippers.go:473]     Accept: application/json, */*
	I0415 23:58:18.008642   25488 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0415 23:58:18.012474   25488 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0415 23:58:18.013480   25488 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0415 23:58:18.013502   25488 node_conditions.go:123] node cpu capacity is 2
	I0415 23:58:18.013514   25488 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0415 23:58:18.013518   25488 node_conditions.go:123] node cpu capacity is 2
	I0415 23:58:18.013522   25488 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0415 23:58:18.013525   25488 node_conditions.go:123] node cpu capacity is 2
	I0415 23:58:18.013529   25488 node_conditions.go:105] duration metric: took 171.372046ms to run NodePressure ...
	I0415 23:58:18.013541   25488 start.go:240] waiting for startup goroutines ...
	I0415 23:58:18.013564   25488 start.go:254] writing updated cluster config ...
	I0415 23:58:18.013821   25488 ssh_runner.go:195] Run: rm -f paused
	I0415 23:58:18.064114   25488 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0415 23:58:18.066256   25488 out.go:177] * Done! kubectl is now configured to use "ha-694782" cluster and "default" namespace by default
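	(Editor's note: the readiness checks logged above — the apiserver /healthz probe, the kube-system pod listing, and the kubelet unit check run over ssh_runner — can be reproduced by hand against the same cluster. The commands below are only a sketch, assuming the "ha-694782" profile is still running, kubectl uses the "ha-694782" context the log reports, and anonymous access to /healthz is allowed, which is the upstream default.)

	# Probe the apiserver health endpoint the log polls (self-signed cert, hence -k)
	curl -k https://192.168.39.41:8443/healthz
	# List kube-system pods, mirroring the system_pods check above
	kubectl --context ha-694782 get pods -n kube-system
	# Confirm the kubelet unit is active on the node, as the log does via ssh_runner
	minikube -p ha-694782 ssh "sudo systemctl is-active --quiet kubelet && echo active"
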
	
	
	==> CRI-O <==
	Apr 16 00:02:50 ha-694782 crio[683]: time="2024-04-16 00:02:50.845651816Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713225770845627993,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b97416cf-0f63-4840-9149-0b1f71a4d060 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 00:02:50 ha-694782 crio[683]: time="2024-04-16 00:02:50.846366793Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3c2a3e32-e1df-46bf-b0d6-0cc58b9d6415 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:02:50 ha-694782 crio[683]: time="2024-04-16 00:02:50.846448182Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3c2a3e32-e1df-46bf-b0d6-0cc58b9d6415 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:02:50 ha-694782 crio[683]: time="2024-04-16 00:02:50.846770191Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10abaa8fc3a416f4f6e6af525fcc65e0613ea769d731660a81e4e6a425fa4d6c,PodSandboxId:df7bc8cc3af912521d7dab8c802c0b04f7447ccb3d192040071875ff6a6ed89d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713225502009847781,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-vsvrq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d510538f-3535-428b-8933-e3d6de6777eb,},Annotations:map[string]string{io.kubernetes.container.hash: 83ddc528,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a62edf63e9633afa138049c4146dcf4b2f5135b1fc485fdc8071c8ee36b07a2d,PodSandboxId:773aba8a13222bacf0c0e79c78ec31764b5af16b9bc416140f303b36465cce2b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713225349858970939,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-zdc8q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a7e1a29-8c75-4d1f-978b-471ac0adb888,},Annotations:map[string]string{io.kubernetes.container.hash: e9e68e98,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3a501d70f72c9551b55ad858eaec6232180f6589a34825144a580391cdf53a2,PodSandboxId:cc571f90808ddcdef413b709640e27f67d9d861628a9d232886db9a496a57712,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713225349824904827,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-4sgv4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 3c1f65c0-37b2-4c88-879b-68297e989d44,},Annotations:map[string]string{io.kubernetes.container.hash: 2558243c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c22f9f76ea741300482eff1b6fe9db3f26a6e24069fb874c4ddd33c655294e62,PodSandboxId:d0206b8339037f202916be2337347086cc6265ba7391f3c217e691a994687c4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNI
NG,CreatedAt:1713225348406059422,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bea9c166-5f83-473f-8f01-335ea1436dad,},Annotations:map[string]string{io.kubernetes.container.hash: 26b87359,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33e00269a54857a2b49811e69012788039b429be3725a79bdb0a6e999aff448e,PodSandboxId:cf834489f460fbbaf59a25b280f8f70c16044f0f394b559e98c90fccc35d4837,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713225
346494199084,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-99cs7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b3bc7e7-fd85-4dc7-ba53-c74fe0d213e3,},Annotations:map[string]string{io.kubernetes.container.hash: e6fad754,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b55cb00c20162f1cfd9e72b8001f61983630aeb30b827f36d39067dae5d359d7,PodSandboxId:f34915e87e4008b765d7b34d6619b29c22eddc157e2e96893518ff9709538560,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713225346210700164,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d46v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c92235e6-1639-45c0-a92b-bf0cc32bea22,},Annotations:map[string]string{io.kubernetes.container.hash: f515a84d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f8c32adffdfe920d33a3d2aadb4e5d70c83321d5e0ed04b5e651b3338f8868c,PodSandboxId:d03541f025672fb33a16b0c006378393944751455fb18a91620e4822b1cf32da,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713225329553755138,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36fb76c3bc27f5d0b4f45ad31d74d371,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d17ec84664efd04bf01be034fec6b0ffd8f3e561bc06951f63cd95553952cf5,PodSandboxId:41e04a0d8a0ba492c448f0c8d919cb86eb887cc0a8198d99815e7f7eed50b944,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713225326796796969,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kube
rnetes.pod.name: etcd-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d68ab2950732b234de6161a8265b14cc,},Annotations:map[string]string{io.kubernetes.container.hash: 94537991,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:553d7f07f43e6f068bf41c8f0562f161939b4c2f6b1241c11c0db16309a6cbdf,PodSandboxId:cc8f87bd6e0dc433462a51cd028d6d774aa14a6c762f3f6a79999daea3870547,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713225326704495869,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler
-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d60a0238d152f42b26bd8630ed822b52,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d4ea2215ec6217956d87feb4c68ad8ace3136456a7bc720dcc7c721b87f66f4,PodSandboxId:886d00021f1d02da690c8d485521dfcbcd8e54b07e8b49c670f226a2a48b58ff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713225326714963944,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-694782,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: a733f3b6fc63c6f5e84f944f7d76e1a4,},Annotations:map[string]string{io.kubernetes.container.hash: 6258141c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a682dce5ef12dd6c80bdebd8ef67c034ebd1c88d5e144fc177805ad5eb35efe,PodSandboxId:21503f860be6fcca82242fc07c3e7179eba1f673a404e7d4c668628e99247da5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713225326702363925,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-694782,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b443ba5c534abe08b64f6dcd05be16a,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3c2a3e32-e1df-46bf-b0d6-0cc58b9d6415 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:02:50 ha-694782 crio[683]: time="2024-04-16 00:02:50.887684041Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4d53700e-974d-4cb7-8460-874529d0cf3b name=/runtime.v1.RuntimeService/Version
	Apr 16 00:02:50 ha-694782 crio[683]: time="2024-04-16 00:02:50.887776422Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4d53700e-974d-4cb7-8460-874529d0cf3b name=/runtime.v1.RuntimeService/Version
	Apr 16 00:02:50 ha-694782 crio[683]: time="2024-04-16 00:02:50.889120238Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e7822154-f633-43df-8ebf-ee64ee149b16 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 00:02:50 ha-694782 crio[683]: time="2024-04-16 00:02:50.889507690Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713225770889486470,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e7822154-f633-43df-8ebf-ee64ee149b16 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 00:02:50 ha-694782 crio[683]: time="2024-04-16 00:02:50.890296822Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5d38fd72-836e-458d-aff7-75e0340b422e name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:02:50 ha-694782 crio[683]: time="2024-04-16 00:02:50.890348900Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5d38fd72-836e-458d-aff7-75e0340b422e name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:02:50 ha-694782 crio[683]: time="2024-04-16 00:02:50.890572539Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10abaa8fc3a416f4f6e6af525fcc65e0613ea769d731660a81e4e6a425fa4d6c,PodSandboxId:df7bc8cc3af912521d7dab8c802c0b04f7447ccb3d192040071875ff6a6ed89d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713225502009847781,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-vsvrq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d510538f-3535-428b-8933-e3d6de6777eb,},Annotations:map[string]string{io.kubernetes.container.hash: 83ddc528,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a62edf63e9633afa138049c4146dcf4b2f5135b1fc485fdc8071c8ee36b07a2d,PodSandboxId:773aba8a13222bacf0c0e79c78ec31764b5af16b9bc416140f303b36465cce2b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713225349858970939,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-zdc8q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a7e1a29-8c75-4d1f-978b-471ac0adb888,},Annotations:map[string]string{io.kubernetes.container.hash: e9e68e98,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3a501d70f72c9551b55ad858eaec6232180f6589a34825144a580391cdf53a2,PodSandboxId:cc571f90808ddcdef413b709640e27f67d9d861628a9d232886db9a496a57712,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713225349824904827,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-4sgv4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 3c1f65c0-37b2-4c88-879b-68297e989d44,},Annotations:map[string]string{io.kubernetes.container.hash: 2558243c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c22f9f76ea741300482eff1b6fe9db3f26a6e24069fb874c4ddd33c655294e62,PodSandboxId:d0206b8339037f202916be2337347086cc6265ba7391f3c217e691a994687c4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNI
NG,CreatedAt:1713225348406059422,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bea9c166-5f83-473f-8f01-335ea1436dad,},Annotations:map[string]string{io.kubernetes.container.hash: 26b87359,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33e00269a54857a2b49811e69012788039b429be3725a79bdb0a6e999aff448e,PodSandboxId:cf834489f460fbbaf59a25b280f8f70c16044f0f394b559e98c90fccc35d4837,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713225
346494199084,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-99cs7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b3bc7e7-fd85-4dc7-ba53-c74fe0d213e3,},Annotations:map[string]string{io.kubernetes.container.hash: e6fad754,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b55cb00c20162f1cfd9e72b8001f61983630aeb30b827f36d39067dae5d359d7,PodSandboxId:f34915e87e4008b765d7b34d6619b29c22eddc157e2e96893518ff9709538560,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713225346210700164,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d46v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c92235e6-1639-45c0-a92b-bf0cc32bea22,},Annotations:map[string]string{io.kubernetes.container.hash: f515a84d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f8c32adffdfe920d33a3d2aadb4e5d70c83321d5e0ed04b5e651b3338f8868c,PodSandboxId:d03541f025672fb33a16b0c006378393944751455fb18a91620e4822b1cf32da,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713225329553755138,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36fb76c3bc27f5d0b4f45ad31d74d371,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d17ec84664efd04bf01be034fec6b0ffd8f3e561bc06951f63cd95553952cf5,PodSandboxId:41e04a0d8a0ba492c448f0c8d919cb86eb887cc0a8198d99815e7f7eed50b944,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713225326796796969,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kube
rnetes.pod.name: etcd-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d68ab2950732b234de6161a8265b14cc,},Annotations:map[string]string{io.kubernetes.container.hash: 94537991,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:553d7f07f43e6f068bf41c8f0562f161939b4c2f6b1241c11c0db16309a6cbdf,PodSandboxId:cc8f87bd6e0dc433462a51cd028d6d774aa14a6c762f3f6a79999daea3870547,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713225326704495869,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler
-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d60a0238d152f42b26bd8630ed822b52,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d4ea2215ec6217956d87feb4c68ad8ace3136456a7bc720dcc7c721b87f66f4,PodSandboxId:886d00021f1d02da690c8d485521dfcbcd8e54b07e8b49c670f226a2a48b58ff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713225326714963944,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-694782,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: a733f3b6fc63c6f5e84f944f7d76e1a4,},Annotations:map[string]string{io.kubernetes.container.hash: 6258141c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a682dce5ef12dd6c80bdebd8ef67c034ebd1c88d5e144fc177805ad5eb35efe,PodSandboxId:21503f860be6fcca82242fc07c3e7179eba1f673a404e7d4c668628e99247da5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713225326702363925,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-694782,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b443ba5c534abe08b64f6dcd05be16a,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5d38fd72-836e-458d-aff7-75e0340b422e name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:02:50 ha-694782 crio[683]: time="2024-04-16 00:02:50.926572479Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=40c85c4b-b260-4f85-b1d9-30234c015d3b name=/runtime.v1.RuntimeService/Version
	Apr 16 00:02:50 ha-694782 crio[683]: time="2024-04-16 00:02:50.926659520Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=40c85c4b-b260-4f85-b1d9-30234c015d3b name=/runtime.v1.RuntimeService/Version
	Apr 16 00:02:50 ha-694782 crio[683]: time="2024-04-16 00:02:50.927634023Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6c604c10-f7b2-4a80-a5f4-40344a8e5642 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 00:02:50 ha-694782 crio[683]: time="2024-04-16 00:02:50.928243686Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713225770928217016,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6c604c10-f7b2-4a80-a5f4-40344a8e5642 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 00:02:50 ha-694782 crio[683]: time="2024-04-16 00:02:50.928866317Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=63b24c27-30ea-4138-9930-57ee1465da4b name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:02:50 ha-694782 crio[683]: time="2024-04-16 00:02:50.928938378Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=63b24c27-30ea-4138-9930-57ee1465da4b name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:02:50 ha-694782 crio[683]: time="2024-04-16 00:02:50.929220297Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10abaa8fc3a416f4f6e6af525fcc65e0613ea769d731660a81e4e6a425fa4d6c,PodSandboxId:df7bc8cc3af912521d7dab8c802c0b04f7447ccb3d192040071875ff6a6ed89d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713225502009847781,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-vsvrq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d510538f-3535-428b-8933-e3d6de6777eb,},Annotations:map[string]string{io.kubernetes.container.hash: 83ddc528,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a62edf63e9633afa138049c4146dcf4b2f5135b1fc485fdc8071c8ee36b07a2d,PodSandboxId:773aba8a13222bacf0c0e79c78ec31764b5af16b9bc416140f303b36465cce2b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713225349858970939,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-zdc8q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a7e1a29-8c75-4d1f-978b-471ac0adb888,},Annotations:map[string]string{io.kubernetes.container.hash: e9e68e98,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3a501d70f72c9551b55ad858eaec6232180f6589a34825144a580391cdf53a2,PodSandboxId:cc571f90808ddcdef413b709640e27f67d9d861628a9d232886db9a496a57712,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713225349824904827,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-4sgv4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 3c1f65c0-37b2-4c88-879b-68297e989d44,},Annotations:map[string]string{io.kubernetes.container.hash: 2558243c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c22f9f76ea741300482eff1b6fe9db3f26a6e24069fb874c4ddd33c655294e62,PodSandboxId:d0206b8339037f202916be2337347086cc6265ba7391f3c217e691a994687c4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNI
NG,CreatedAt:1713225348406059422,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bea9c166-5f83-473f-8f01-335ea1436dad,},Annotations:map[string]string{io.kubernetes.container.hash: 26b87359,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33e00269a54857a2b49811e69012788039b429be3725a79bdb0a6e999aff448e,PodSandboxId:cf834489f460fbbaf59a25b280f8f70c16044f0f394b559e98c90fccc35d4837,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713225
346494199084,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-99cs7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b3bc7e7-fd85-4dc7-ba53-c74fe0d213e3,},Annotations:map[string]string{io.kubernetes.container.hash: e6fad754,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b55cb00c20162f1cfd9e72b8001f61983630aeb30b827f36d39067dae5d359d7,PodSandboxId:f34915e87e4008b765d7b34d6619b29c22eddc157e2e96893518ff9709538560,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713225346210700164,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d46v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c92235e6-1639-45c0-a92b-bf0cc32bea22,},Annotations:map[string]string{io.kubernetes.container.hash: f515a84d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f8c32adffdfe920d33a3d2aadb4e5d70c83321d5e0ed04b5e651b3338f8868c,PodSandboxId:d03541f025672fb33a16b0c006378393944751455fb18a91620e4822b1cf32da,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713225329553755138,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36fb76c3bc27f5d0b4f45ad31d74d371,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d17ec84664efd04bf01be034fec6b0ffd8f3e561bc06951f63cd95553952cf5,PodSandboxId:41e04a0d8a0ba492c448f0c8d919cb86eb887cc0a8198d99815e7f7eed50b944,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713225326796796969,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kube
rnetes.pod.name: etcd-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d68ab2950732b234de6161a8265b14cc,},Annotations:map[string]string{io.kubernetes.container.hash: 94537991,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:553d7f07f43e6f068bf41c8f0562f161939b4c2f6b1241c11c0db16309a6cbdf,PodSandboxId:cc8f87bd6e0dc433462a51cd028d6d774aa14a6c762f3f6a79999daea3870547,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713225326704495869,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler
-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d60a0238d152f42b26bd8630ed822b52,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d4ea2215ec6217956d87feb4c68ad8ace3136456a7bc720dcc7c721b87f66f4,PodSandboxId:886d00021f1d02da690c8d485521dfcbcd8e54b07e8b49c670f226a2a48b58ff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713225326714963944,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-694782,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: a733f3b6fc63c6f5e84f944f7d76e1a4,},Annotations:map[string]string{io.kubernetes.container.hash: 6258141c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a682dce5ef12dd6c80bdebd8ef67c034ebd1c88d5e144fc177805ad5eb35efe,PodSandboxId:21503f860be6fcca82242fc07c3e7179eba1f673a404e7d4c668628e99247da5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713225326702363925,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-694782,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b443ba5c534abe08b64f6dcd05be16a,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=63b24c27-30ea-4138-9930-57ee1465da4b name=/runtime.v1.RuntimeService/ListContainers
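	(Editor's note: the CRI-O entries above record the kubelet's periodic CRI polling — Version, ImageFsInfo, and ListContainers requests against /runtime.v1. If the same calls need to be issued by hand on the node, crictl covers them; a sketch, assuming crictl is present on the node as shipped with the minikube ISO.)

	# Runtime name and version, as in /runtime.v1.RuntimeService/Version
	minikube -p ha-694782 ssh "sudo crictl version"
	# Image filesystem usage, as in /runtime.v1.ImageService/ImageFsInfo
	minikube -p ha-694782 ssh "sudo crictl imagefsinfo"
	# Full container list, as in /runtime.v1.RuntimeService/ListContainers
	minikube -p ha-694782 ssh "sudo crictl ps -a"
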
	Apr 16 00:02:50 ha-694782 crio[683]: time="2024-04-16 00:02:50.968594661Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4f1a30a6-5745-47cf-86dd-d6679f5309ab name=/runtime.v1.RuntimeService/Version
	Apr 16 00:02:50 ha-694782 crio[683]: time="2024-04-16 00:02:50.968814831Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4f1a30a6-5745-47cf-86dd-d6679f5309ab name=/runtime.v1.RuntimeService/Version
	Apr 16 00:02:50 ha-694782 crio[683]: time="2024-04-16 00:02:50.969867148Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=766df250-3b68-4ae4-af71-c5c1af844d3f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 00:02:50 ha-694782 crio[683]: time="2024-04-16 00:02:50.970499048Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713225770970476819,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=766df250-3b68-4ae4-af71-c5c1af844d3f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 00:02:50 ha-694782 crio[683]: time="2024-04-16 00:02:50.971271306Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0f19f3c5-2535-4cf4-98c7-749bd8b0afeb name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:02:50 ha-694782 crio[683]: time="2024-04-16 00:02:50.971341107Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0f19f3c5-2535-4cf4-98c7-749bd8b0afeb name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:02:50 ha-694782 crio[683]: time="2024-04-16 00:02:50.971644512Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:10abaa8fc3a416f4f6e6af525fcc65e0613ea769d731660a81e4e6a425fa4d6c,PodSandboxId:df7bc8cc3af912521d7dab8c802c0b04f7447ccb3d192040071875ff6a6ed89d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713225502009847781,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-vsvrq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d510538f-3535-428b-8933-e3d6de6777eb,},Annotations:map[string]string{io.kubernetes.container.hash: 83ddc528,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a62edf63e9633afa138049c4146dcf4b2f5135b1fc485fdc8071c8ee36b07a2d,PodSandboxId:773aba8a13222bacf0c0e79c78ec31764b5af16b9bc416140f303b36465cce2b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713225349858970939,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-zdc8q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a7e1a29-8c75-4d1f-978b-471ac0adb888,},Annotations:map[string]string{io.kubernetes.container.hash: e9e68e98,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3a501d70f72c9551b55ad858eaec6232180f6589a34825144a580391cdf53a2,PodSandboxId:cc571f90808ddcdef413b709640e27f67d9d861628a9d232886db9a496a57712,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713225349824904827,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-4sgv4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid
: 3c1f65c0-37b2-4c88-879b-68297e989d44,},Annotations:map[string]string{io.kubernetes.container.hash: 2558243c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c22f9f76ea741300482eff1b6fe9db3f26a6e24069fb874c4ddd33c655294e62,PodSandboxId:d0206b8339037f202916be2337347086cc6265ba7391f3c217e691a994687c4f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNI
NG,CreatedAt:1713225348406059422,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bea9c166-5f83-473f-8f01-335ea1436dad,},Annotations:map[string]string{io.kubernetes.container.hash: 26b87359,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33e00269a54857a2b49811e69012788039b429be3725a79bdb0a6e999aff448e,PodSandboxId:cf834489f460fbbaf59a25b280f8f70c16044f0f394b559e98c90fccc35d4837,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713225
346494199084,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-99cs7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b3bc7e7-fd85-4dc7-ba53-c74fe0d213e3,},Annotations:map[string]string{io.kubernetes.container.hash: e6fad754,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b55cb00c20162f1cfd9e72b8001f61983630aeb30b827f36d39067dae5d359d7,PodSandboxId:f34915e87e4008b765d7b34d6619b29c22eddc157e2e96893518ff9709538560,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713225346210700164,Labels:map[string]stri
ng{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d46v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c92235e6-1639-45c0-a92b-bf0cc32bea22,},Annotations:map[string]string{io.kubernetes.container.hash: f515a84d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f8c32adffdfe920d33a3d2aadb4e5d70c83321d5e0ed04b5e651b3338f8868c,PodSandboxId:d03541f025672fb33a16b0c006378393944751455fb18a91620e4822b1cf32da,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713225329553755138,Labels:map[string]string
{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36fb76c3bc27f5d0b4f45ad31d74d371,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d17ec84664efd04bf01be034fec6b0ffd8f3e561bc06951f63cd95553952cf5,PodSandboxId:41e04a0d8a0ba492c448f0c8d919cb86eb887cc0a8198d99815e7f7eed50b944,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713225326796796969,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kube
rnetes.pod.name: etcd-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d68ab2950732b234de6161a8265b14cc,},Annotations:map[string]string{io.kubernetes.container.hash: 94537991,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:553d7f07f43e6f068bf41c8f0562f161939b4c2f6b1241c11c0db16309a6cbdf,PodSandboxId:cc8f87bd6e0dc433462a51cd028d6d774aa14a6c762f3f6a79999daea3870547,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713225326704495869,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler
-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d60a0238d152f42b26bd8630ed822b52,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d4ea2215ec6217956d87feb4c68ad8ace3136456a7bc720dcc7c721b87f66f4,PodSandboxId:886d00021f1d02da690c8d485521dfcbcd8e54b07e8b49c670f226a2a48b58ff,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713225326714963944,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-694782,io.kuberne
tes.pod.namespace: kube-system,io.kubernetes.pod.uid: a733f3b6fc63c6f5e84f944f7d76e1a4,},Annotations:map[string]string{io.kubernetes.container.hash: 6258141c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a682dce5ef12dd6c80bdebd8ef67c034ebd1c88d5e144fc177805ad5eb35efe,PodSandboxId:21503f860be6fcca82242fc07c3e7179eba1f673a404e7d4c668628e99247da5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713225326702363925,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-694782,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b443ba5c534abe08b64f6dcd05be16a,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0f19f3c5-2535-4cf4-98c7-749bd8b0afeb name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	10abaa8fc3a41       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   df7bc8cc3af91       busybox-7fdf7869d9-vsvrq
	a62edf63e9633       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   773aba8a13222       coredns-76f75df574-zdc8q
	b3a501d70f72c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   cc571f90808dd       coredns-76f75df574-4sgv4
	c22f9f76ea741       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Running             storage-provisioner       0                   d0206b8339037       storage-provisioner
	33e00269a5485       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      7 minutes ago       Running             kindnet-cni               0                   cf834489f460f       kindnet-99cs7
	b55cb00c20162       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      7 minutes ago       Running             kube-proxy                0                   f34915e87e400       kube-proxy-d46v5
	9f8c32adffdfe       ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a     7 minutes ago       Running             kube-vip                  0                   d03541f025672       kube-vip-ha-694782
	9d17ec84664ef       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   41e04a0d8a0ba       etcd-ha-694782
	7d4ea2215ec62       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      7 minutes ago       Running             kube-apiserver            0                   886d00021f1d0       kube-apiserver-ha-694782
	553d7f07f43e6       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      7 minutes ago       Running             kube-scheduler            0                   cc8f87bd6e0dc       kube-scheduler-ha-694782
	8a682dce5ef12       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      7 minutes ago       Running             kube-controller-manager   0                   21503f860be6f       kube-controller-manager-ha-694782
	
	
	==> coredns [a62edf63e9633afa138049c4146dcf4b2f5135b1fc485fdc8071c8ee36b07a2d] <==
	[INFO] 10.244.0.4:43820 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000171346s
	[INFO] 10.244.0.4:53971 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000064209s
	[INFO] 10.244.0.4:58655 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000057875s
	[INFO] 10.244.1.2:57138 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000292175s
	[INFO] 10.244.1.2:42990 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.007959183s
	[INFO] 10.244.1.2:53242 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000142606s
	[INFO] 10.244.1.2:53591 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000169859s
	[INFO] 10.244.2.2:56926 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001802242s
	[INFO] 10.244.2.2:55053 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000174333s
	[INFO] 10.244.2.2:56210 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000166019s
	[INFO] 10.244.2.2:36533 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001257882s
	[INFO] 10.244.0.4:39112 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000127586s
	[INFO] 10.244.0.4:33597 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001242421s
	[INFO] 10.244.0.4:37595 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000130691s
	[INFO] 10.244.0.4:36939 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000030566s
	[INFO] 10.244.0.4:36468 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000043404s
	[INFO] 10.244.1.2:46854 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000237116s
	[INFO] 10.244.1.2:35618 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000139683s
	[INFO] 10.244.2.2:54137 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000211246s
	[INFO] 10.244.2.2:57833 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097841s
	[INFO] 10.244.0.4:45317 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099201s
	[INFO] 10.244.1.2:46870 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000160521s
	[INFO] 10.244.1.2:49971 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000118112s
	[INFO] 10.244.2.2:60977 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000163482s
	[INFO] 10.244.0.4:57367 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000078337s
	
	
	==> coredns [b3a501d70f72c9551b55ad858eaec6232180f6589a34825144a580391cdf53a2] <==
	[INFO] 10.244.1.2:51264 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003372306s
	[INFO] 10.244.1.2:40116 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000306198s
	[INFO] 10.244.1.2:43171 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000166993s
	[INFO] 10.244.2.2:55011 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111047s
	[INFO] 10.244.2.2:60878 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000096803s
	[INFO] 10.244.2.2:40329 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000153524s
	[INFO] 10.244.2.2:43908 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109424s
	[INFO] 10.244.0.4:40588 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117575s
	[INFO] 10.244.0.4:34558 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001805219s
	[INFO] 10.244.0.4:44168 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000194119s
	[INFO] 10.244.1.2:54750 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000108471s
	[INFO] 10.244.1.2:46261 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008003s
	[INFO] 10.244.2.2:53899 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130847s
	[INFO] 10.244.2.2:52030 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000082631s
	[INFO] 10.244.0.4:39295 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000069381s
	[INFO] 10.244.0.4:38441 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000054252s
	[INFO] 10.244.0.4:40273 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000054634s
	[INFO] 10.244.1.2:56481 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000181468s
	[INFO] 10.244.1.2:34800 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000244392s
	[INFO] 10.244.2.2:40684 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136775s
	[INFO] 10.244.2.2:50964 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000154855s
	[INFO] 10.244.2.2:46132 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000089888s
	[INFO] 10.244.0.4:34246 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124283s
	[INFO] 10.244.0.4:53924 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000125381s
	[INFO] 10.244.0.4:36636 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000079286s
	
	
	==> describe nodes <==
	Name:               ha-694782
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-694782
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e98a202b00bb48daab8bbee2f0c7bcf1556f8388
	                    minikube.k8s.io/name=ha-694782
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_15T23_55_34_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Apr 2024 23:55:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-694782
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 00:02:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Apr 2024 23:58:37 +0000   Mon, 15 Apr 2024 23:55:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Apr 2024 23:58:37 +0000   Mon, 15 Apr 2024 23:55:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Apr 2024 23:58:37 +0000   Mon, 15 Apr 2024 23:55:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Apr 2024 23:58:37 +0000   Mon, 15 Apr 2024 23:55:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.41
	  Hostname:    ha-694782
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e3887d262ea345b0b06d0cfe81d3c704
	  System UUID:                e3887d26-2ea3-45b0-b06d-0cfe81d3c704
	  Boot ID:                    db04bec2-a6d7-4f51-8173-a431f51db6a3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-vsvrq             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 coredns-76f75df574-4sgv4             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m6s
	  kube-system                 coredns-76f75df574-zdc8q             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m6s
	  kube-system                 etcd-ha-694782                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m18s
	  kube-system                 kindnet-99cs7                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m6s
	  kube-system                 kube-apiserver-ha-694782             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m18s
	  kube-system                 kube-controller-manager-ha-694782    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m18s
	  kube-system                 kube-proxy-d46v5                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m6s
	  kube-system                 kube-scheduler-ha-694782             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m18s
	  kube-system                 kube-vip-ha-694782                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m18s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m4s                   kube-proxy       
	  Normal  NodeHasSufficientPID     7m25s (x7 over 7m25s)  kubelet          Node ha-694782 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m25s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m25s (x8 over 7m25s)  kubelet          Node ha-694782 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m25s (x8 over 7m25s)  kubelet          Node ha-694782 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 7m18s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m18s                  kubelet          Node ha-694782 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m18s                  kubelet          Node ha-694782 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m18s                  kubelet          Node ha-694782 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m7s                   node-controller  Node ha-694782 event: Registered Node ha-694782 in Controller
	  Normal  NodeReady                7m4s                   kubelet          Node ha-694782 status is now: NodeReady
	  Normal  RegisteredNode           5m48s                  node-controller  Node ha-694782 event: Registered Node ha-694782 in Controller
	  Normal  RegisteredNode           4m40s                  node-controller  Node ha-694782 event: Registered Node ha-694782 in Controller
	
	
	Name:               ha-694782-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-694782-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e98a202b00bb48daab8bbee2f0c7bcf1556f8388
	                    minikube.k8s.io/name=ha-694782
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_15T23_56_49_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Apr 2024 23:56:45 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-694782-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 15 Apr 2024 23:59:18 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 15 Apr 2024 23:58:48 +0000   Tue, 16 Apr 2024 00:00:00 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 15 Apr 2024 23:58:48 +0000   Tue, 16 Apr 2024 00:00:00 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 15 Apr 2024 23:58:48 +0000   Tue, 16 Apr 2024 00:00:00 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 15 Apr 2024 23:58:48 +0000   Tue, 16 Apr 2024 00:00:00 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.42
	  Hostname:    ha-694782-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f33e7ca96e8a461196cc015dc9cdb390
	  System UUID:                f33e7ca9-6e8a-4611-96cc-015dc9cdb390
	  Boot ID:                    5d204aeb-b0bb-47ab-8d6e-e6870264d97b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-bwtdm                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 etcd-ha-694782-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m5s
	  kube-system                 kindnet-qvp8b                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m6s
	  kube-system                 kube-apiserver-ha-694782-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m5s
	  kube-system                 kube-controller-manager-ha-694782-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m5s
	  kube-system                 kube-proxy-vbfhn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 kube-scheduler-ha-694782-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m5s
	  kube-system                 kube-vip-ha-694782-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 6m3s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  6m6s (x8 over 6m6s)  kubelet          Node ha-694782-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m6s (x8 over 6m6s)  kubelet          Node ha-694782-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m6s (x7 over 6m6s)  kubelet          Node ha-694782-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m1s                 node-controller  Node ha-694782-m02 event: Registered Node ha-694782-m02 in Controller
	  Normal  RegisteredNode           5m48s                node-controller  Node ha-694782-m02 event: Registered Node ha-694782-m02 in Controller
	  Normal  RegisteredNode           4m40s                node-controller  Node ha-694782-m02 event: Registered Node ha-694782-m02 in Controller
	  Normal  NodeNotReady             2m51s                node-controller  Node ha-694782-m02 status is now: NodeNotReady
	
	
	Name:               ha-694782-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-694782-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e98a202b00bb48daab8bbee2f0c7bcf1556f8388
	                    minikube.k8s.io/name=ha-694782
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_15T23_57_59_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Apr 2024 23:57:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-694782-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 00:02:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Apr 2024 23:58:24 +0000   Mon, 15 Apr 2024 23:57:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Apr 2024 23:58:24 +0000   Mon, 15 Apr 2024 23:57:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Apr 2024 23:58:24 +0000   Mon, 15 Apr 2024 23:57:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Apr 2024 23:58:24 +0000   Mon, 15 Apr 2024 23:58:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.202
	  Hostname:    ha-694782-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f7c9ed2323d6414d9d68e5da836956f9
	  System UUID:                f7c9ed23-23d6-414d-9d68-e5da836956f9
	  Boot ID:                    f0a011f0-aa05-4a51-9cac-8a89ff51f5fc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-mxz6n                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 etcd-ha-694782-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m55s
	  kube-system                 kindnet-hln6n                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m58s
	  kube-system                 kube-apiserver-ha-694782-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 kube-controller-manager-ha-694782-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 kube-proxy-45tb9                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-scheduler-ha-694782-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 kube-vip-ha-694782-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m53s                  kube-proxy       
	  Normal  RegisteredNode           4m58s                  node-controller  Node ha-694782-m03 event: Registered Node ha-694782-m03 in Controller
	  Normal  NodeHasSufficientMemory  4m58s (x8 over 4m58s)  kubelet          Node ha-694782-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m58s (x8 over 4m58s)  kubelet          Node ha-694782-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m58s (x7 over 4m58s)  kubelet          Node ha-694782-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m58s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m56s                  node-controller  Node ha-694782-m03 event: Registered Node ha-694782-m03 in Controller
	  Normal  RegisteredNode           4m40s                  node-controller  Node ha-694782-m03 event: Registered Node ha-694782-m03 in Controller
	
	
	Name:               ha-694782-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-694782-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e98a202b00bb48daab8bbee2f0c7bcf1556f8388
	                    minikube.k8s.io/name=ha-694782
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_15T23_58_56_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Apr 2024 23:58:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-694782-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 00:02:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 15 Apr 2024 23:59:26 +0000   Mon, 15 Apr 2024 23:58:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 15 Apr 2024 23:59:26 +0000   Mon, 15 Apr 2024 23:58:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 15 Apr 2024 23:59:26 +0000   Mon, 15 Apr 2024 23:58:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 15 Apr 2024 23:59:26 +0000   Mon, 15 Apr 2024 23:59:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.107
	  Hostname:    ha-694782-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 aceb25acc5a84fcca647b3b66273edbd
	  System UUID:                aceb25ac-c5a8-4fcc-a647-b3b66273edbd
	  Boot ID:                    43559327-63c3-4af4-bb7f-6d674d6e1c03
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-k6vbr       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m55s
	  kube-system                 kube-proxy-mgwnv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m50s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m56s (x2 over 3m56s)  kubelet          Node ha-694782-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m56s (x2 over 3m56s)  kubelet          Node ha-694782-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m56s (x2 over 3m56s)  kubelet          Node ha-694782-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m55s                  node-controller  Node ha-694782-m04 event: Registered Node ha-694782-m04 in Controller
	  Normal  NodeAllocatableEnforced  3m55s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m53s                  node-controller  Node ha-694782-m04 event: Registered Node ha-694782-m04 in Controller
	  Normal  RegisteredNode           3m51s                  node-controller  Node ha-694782-m04 event: Registered Node ha-694782-m04 in Controller
	  Normal  NodeReady                3m45s                  kubelet          Node ha-694782-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Apr15 23:54] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052003] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040460] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.533828] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Apr15 23:55] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.613458] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.744931] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.056422] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063543] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.160170] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.142019] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.294658] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +4.386960] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +0.057175] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.857368] systemd-fstab-generator[945]: Ignoring "noauto" option for root device
	[  +1.229656] kauditd_printk_skb: 57 callbacks suppressed
	[  +6.643467] systemd-fstab-generator[1362]: Ignoring "noauto" option for root device
	[  +0.095801] kauditd_printk_skb: 40 callbacks suppressed
	[ +12.797173] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.821758] kauditd_printk_skb: 72 callbacks suppressed
	
	
	==> etcd [9d17ec84664efd04bf01be034fec6b0ffd8f3e561bc06951f63cd95553952cf5] <==
	{"level":"warn","ts":"2024-04-16T00:02:50.729732Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"903e0dada8362847","from":"903e0dada8362847","remote-peer-id":"dc6497bb1dcd7fb3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T00:02:50.830413Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"903e0dada8362847","from":"903e0dada8362847","remote-peer-id":"dc6497bb1dcd7fb3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T00:02:51.226548Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"903e0dada8362847","from":"903e0dada8362847","remote-peer-id":"dc6497bb1dcd7fb3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T00:02:51.230348Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"903e0dada8362847","from":"903e0dada8362847","remote-peer-id":"dc6497bb1dcd7fb3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T00:02:51.234851Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"903e0dada8362847","from":"903e0dada8362847","remote-peer-id":"dc6497bb1dcd7fb3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T00:02:51.238177Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"903e0dada8362847","from":"903e0dada8362847","remote-peer-id":"dc6497bb1dcd7fb3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T00:02:51.255026Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"903e0dada8362847","from":"903e0dada8362847","remote-peer-id":"dc6497bb1dcd7fb3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T00:02:51.264245Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"903e0dada8362847","from":"903e0dada8362847","remote-peer-id":"dc6497bb1dcd7fb3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T00:02:51.273196Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"903e0dada8362847","from":"903e0dada8362847","remote-peer-id":"dc6497bb1dcd7fb3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T00:02:51.277582Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"903e0dada8362847","from":"903e0dada8362847","remote-peer-id":"dc6497bb1dcd7fb3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T00:02:51.282065Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"903e0dada8362847","from":"903e0dada8362847","remote-peer-id":"dc6497bb1dcd7fb3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T00:02:51.292369Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"903e0dada8362847","from":"903e0dada8362847","remote-peer-id":"dc6497bb1dcd7fb3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T00:02:51.298505Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"903e0dada8362847","from":"903e0dada8362847","remote-peer-id":"dc6497bb1dcd7fb3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T00:02:51.304259Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"903e0dada8362847","from":"903e0dada8362847","remote-peer-id":"dc6497bb1dcd7fb3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T00:02:51.307257Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"903e0dada8362847","from":"903e0dada8362847","remote-peer-id":"dc6497bb1dcd7fb3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T00:02:51.31106Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"903e0dada8362847","from":"903e0dada8362847","remote-peer-id":"dc6497bb1dcd7fb3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T00:02:51.323116Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"903e0dada8362847","from":"903e0dada8362847","remote-peer-id":"dc6497bb1dcd7fb3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T00:02:51.328572Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"903e0dada8362847","from":"903e0dada8362847","remote-peer-id":"dc6497bb1dcd7fb3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T00:02:51.329872Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"903e0dada8362847","from":"903e0dada8362847","remote-peer-id":"dc6497bb1dcd7fb3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T00:02:51.335177Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"903e0dada8362847","from":"903e0dada8362847","remote-peer-id":"dc6497bb1dcd7fb3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T00:02:51.338258Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"903e0dada8362847","from":"903e0dada8362847","remote-peer-id":"dc6497bb1dcd7fb3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T00:02:51.342164Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"903e0dada8362847","from":"903e0dada8362847","remote-peer-id":"dc6497bb1dcd7fb3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T00:02:51.349096Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"903e0dada8362847","from":"903e0dada8362847","remote-peer-id":"dc6497bb1dcd7fb3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T00:02:51.354709Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"903e0dada8362847","from":"903e0dada8362847","remote-peer-id":"dc6497bb1dcd7fb3","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-04-16T00:02:51.360503Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"903e0dada8362847","from":"903e0dada8362847","remote-peer-id":"dc6497bb1dcd7fb3","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 00:02:51 up 7 min,  0 users,  load average: 0.19, 0.26, 0.17
	Linux ha-694782 5.10.207 #1 SMP Mon Apr 15 15:01:07 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [33e00269a54857a2b49811e69012788039b429be3725a79bdb0a6e999aff448e] <==
	I0416 00:02:17.902819       1 main.go:250] Node ha-694782-m04 has CIDR [10.244.3.0/24] 
	I0416 00:02:27.918676       1 main.go:223] Handling node with IPs: map[192.168.39.41:{}]
	I0416 00:02:27.918894       1 main.go:227] handling current node
	I0416 00:02:27.918928       1 main.go:223] Handling node with IPs: map[192.168.39.42:{}]
	I0416 00:02:27.920476       1 main.go:250] Node ha-694782-m02 has CIDR [10.244.1.0/24] 
	I0416 00:02:27.920849       1 main.go:223] Handling node with IPs: map[192.168.39.202:{}]
	I0416 00:02:27.920950       1 main.go:250] Node ha-694782-m03 has CIDR [10.244.2.0/24] 
	I0416 00:02:27.921733       1 main.go:223] Handling node with IPs: map[192.168.39.107:{}]
	I0416 00:02:27.921871       1 main.go:250] Node ha-694782-m04 has CIDR [10.244.3.0/24] 
	I0416 00:02:37.936413       1 main.go:223] Handling node with IPs: map[192.168.39.41:{}]
	I0416 00:02:37.936608       1 main.go:227] handling current node
	I0416 00:02:37.936651       1 main.go:223] Handling node with IPs: map[192.168.39.42:{}]
	I0416 00:02:37.936672       1 main.go:250] Node ha-694782-m02 has CIDR [10.244.1.0/24] 
	I0416 00:02:37.936780       1 main.go:223] Handling node with IPs: map[192.168.39.202:{}]
	I0416 00:02:37.936798       1 main.go:250] Node ha-694782-m03 has CIDR [10.244.2.0/24] 
	I0416 00:02:37.936862       1 main.go:223] Handling node with IPs: map[192.168.39.107:{}]
	I0416 00:02:37.936881       1 main.go:250] Node ha-694782-m04 has CIDR [10.244.3.0/24] 
	I0416 00:02:47.951290       1 main.go:223] Handling node with IPs: map[192.168.39.41:{}]
	I0416 00:02:47.951393       1 main.go:227] handling current node
	I0416 00:02:47.951427       1 main.go:223] Handling node with IPs: map[192.168.39.42:{}]
	I0416 00:02:47.951451       1 main.go:250] Node ha-694782-m02 has CIDR [10.244.1.0/24] 
	I0416 00:02:47.951602       1 main.go:223] Handling node with IPs: map[192.168.39.202:{}]
	I0416 00:02:47.951636       1 main.go:250] Node ha-694782-m03 has CIDR [10.244.2.0/24] 
	I0416 00:02:47.951714       1 main.go:223] Handling node with IPs: map[192.168.39.107:{}]
	I0416 00:02:47.951743       1 main.go:250] Node ha-694782-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [7d4ea2215ec6217956d87feb4c68ad8ace3136456a7bc720dcc7c721b87f66f4] <==
	I0415 23:55:29.670734       1 shared_informer.go:318] Caches are synced for configmaps
	I0415 23:55:29.670878       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0415 23:55:29.670945       1 aggregator.go:165] initial CRD sync complete...
	I0415 23:55:29.670952       1 autoregister_controller.go:141] Starting autoregister controller
	I0415 23:55:29.670956       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0415 23:55:29.670960       1 cache.go:39] Caches are synced for autoregister controller
	I0415 23:55:29.672331       1 controller.go:624] quota admission added evaluator for: namespaces
	E0415 23:55:29.812092       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	E0415 23:55:29.812727       1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I0415 23:55:29.918840       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0415 23:55:30.572594       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0415 23:55:30.578644       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0415 23:55:30.579306       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0415 23:55:31.198256       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0415 23:55:31.246337       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0415 23:55:31.419633       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0415 23:55:31.427274       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.41]
	I0415 23:55:31.428318       1 controller.go:624] quota admission added evaluator for: endpoints
	I0415 23:55:31.432531       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0415 23:55:31.663618       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0415 23:55:33.380844       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0415 23:55:33.398693       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0415 23:55:33.417401       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0415 23:55:45.411712       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0415 23:55:45.621074       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [8a682dce5ef12dd6c80bdebd8ef67c034ebd1c88d5e144fc177805ad5eb35efe] <==
	I0415 23:58:23.762552       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="12.256023ms"
	I0415 23:58:23.762667       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="52.183µs"
	I0415 23:58:56.024795       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-694782-m04\" does not exist"
	I0415 23:58:56.067843       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-mgwnv"
	I0415 23:58:56.074898       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-k6vbr"
	I0415 23:58:56.081101       1 range_allocator.go:380] "Set node PodCIDR" node="ha-694782-m04" podCIDRs=["10.244.3.0/24"]
	I0415 23:58:56.197877       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-2brgt"
	I0415 23:58:56.212568       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-h2nvb"
	I0415 23:58:56.294753       1 event.go:376] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-gh9x2"
	I0415 23:58:56.321809       1 event.go:376] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-8kkvs"
	I0415 23:59:00.140788       1 event.go:376] "Event occurred" object="ha-694782-m04" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-694782-m04 event: Registered Node ha-694782-m04 in Controller"
	I0415 23:59:00.155111       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-694782-m04"
	I0415 23:59:06.052830       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-694782-m04"
	I0416 00:00:00.182615       1 event.go:376] "Event occurred" object="ha-694782-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node ha-694782-m02 status is now: NodeNotReady"
	I0416 00:00:00.183530       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-694782-m04"
	I0416 00:00:00.210373       1 event.go:376] "Event occurred" object="kube-system/kube-controller-manager-ha-694782-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0416 00:00:00.224910       1 event.go:376] "Event occurred" object="kube-system/kube-scheduler-ha-694782-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0416 00:00:00.235421       1 event.go:376] "Event occurred" object="kube-system/kube-apiserver-ha-694782-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0416 00:00:00.250267       1 event.go:376] "Event occurred" object="kube-system/etcd-ha-694782-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0416 00:00:00.267440       1 event.go:376] "Event occurred" object="kube-system/kube-vip-ha-694782-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0416 00:00:00.279200       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-bwtdm" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0416 00:00:00.298614       1 event.go:376] "Event occurred" object="kube-system/kindnet-qvp8b" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0416 00:00:00.321143       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-vbfhn" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0416 00:00:00.323713       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="44.017204ms"
	I0416 00:00:00.323965       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="108.433µs"
	
	
	==> kube-proxy [b55cb00c20162f1cfd9e72b8001f61983630aeb30b827f36d39067dae5d359d7] <==
	I0415 23:55:46.624679       1 server_others.go:72] "Using iptables proxy"
	I0415 23:55:46.653543       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.41"]
	I0415 23:55:46.725353       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0415 23:55:46.725372       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0415 23:55:46.725384       1 server_others.go:168] "Using iptables Proxier"
	I0415 23:55:46.730419       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0415 23:55:46.730581       1 server.go:865] "Version info" version="v1.29.3"
	I0415 23:55:46.730600       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0415 23:55:46.732809       1 config.go:188] "Starting service config controller"
	I0415 23:55:46.733116       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0415 23:55:46.733144       1 config.go:97] "Starting endpoint slice config controller"
	I0415 23:55:46.733149       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0415 23:55:46.733951       1 config.go:315] "Starting node config controller"
	I0415 23:55:46.733961       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0415 23:55:46.835103       1 shared_informer.go:318] Caches are synced for node config
	I0415 23:55:46.835137       1 shared_informer.go:318] Caches are synced for service config
	I0415 23:55:46.835157       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [553d7f07f43e6f068bf41c8f0562f161939b4c2f6b1241c11c0db16309a6cbdf] <==
	W0415 23:55:30.600571       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0415 23:55:30.600595       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0415 23:55:30.642082       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0415 23:55:30.642208       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0415 23:55:30.655852       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0415 23:55:30.656108       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0415 23:55:30.722714       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0415 23:55:30.722781       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0415 23:55:30.757758       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0415 23:55:30.757863       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0415 23:55:30.786379       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0415 23:55:30.786428       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0415 23:55:30.858574       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0415 23:55:30.858603       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0415 23:55:30.892647       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0415 23:55:30.892743       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0415 23:55:32.669448       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0415 23:57:53.766559       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-hlb2j\": pod kube-proxy-hlb2j is already assigned to node \"ha-694782-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-hlb2j" node="ha-694782-m03"
	E0415 23:57:53.766722       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod f47bcc6b-e6f8-4a54-8ac4-67c188acf8aa(kube-system/kube-proxy-hlb2j) wasn't assumed so cannot be forgotten"
	E0415 23:57:53.766801       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-hlb2j\": pod kube-proxy-hlb2j is already assigned to node \"ha-694782-m03\"" pod="kube-system/kube-proxy-hlb2j"
	I0415 23:57:53.766864       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-hlb2j" node="ha-694782-m03"
	E0415 23:58:56.095615       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-mgwnv\": pod kube-proxy-mgwnv is already assigned to node \"ha-694782-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-mgwnv" node="ha-694782-m04"
	E0415 23:58:56.095678       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod af3ebac5-b22c-4783-83f7-63f1b57b9f86(kube-system/kube-proxy-mgwnv) wasn't assumed so cannot be forgotten"
	E0415 23:58:56.095707       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-mgwnv\": pod kube-proxy-mgwnv is already assigned to node \"ha-694782-m04\"" pod="kube-system/kube-proxy-mgwnv"
	I0415 23:58:56.095723       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-mgwnv" node="ha-694782-m04"
	
	
	==> kubelet <==
	Apr 15 23:58:33 ha-694782 kubelet[1369]: E0415 23:58:33.657345    1369 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 15 23:58:33 ha-694782 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 15 23:58:33 ha-694782 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 15 23:58:33 ha-694782 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 15 23:58:33 ha-694782 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 15 23:59:33 ha-694782 kubelet[1369]: E0415 23:59:33.657638    1369 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 15 23:59:33 ha-694782 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 15 23:59:33 ha-694782 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 15 23:59:33 ha-694782 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 15 23:59:33 ha-694782 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 00:00:33 ha-694782 kubelet[1369]: E0416 00:00:33.657855    1369 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 00:00:33 ha-694782 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 00:00:33 ha-694782 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 00:00:33 ha-694782 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 00:00:33 ha-694782 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 00:01:33 ha-694782 kubelet[1369]: E0416 00:01:33.656753    1369 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 00:01:33 ha-694782 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 00:01:33 ha-694782 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 00:01:33 ha-694782 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 00:01:33 ha-694782 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 00:02:33 ha-694782 kubelet[1369]: E0416 00:02:33.659493    1369 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 00:02:33 ha-694782 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 00:02:33 ha-694782 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 00:02:33 ha-694782 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 00:02:33 ha-694782 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-694782 -n ha-694782
helpers_test.go:261: (dbg) Run:  kubectl --context ha-694782 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (60.05s)
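The repeated etcd warnings near the top of the post-mortem log ("dropped internal Raft message since sending buffer is full", with "remote-peer-active":false) are reported when the outbound buffer for an unreachable peer is full, so further heartbeats are discarded instead of being queued without bound; here the stopped secondary node is the peer that never drains its buffer. A minimal Go sketch of that drop-when-full pattern, using hypothetical names rather than etcd's actual rafthttp types:

```go
package main

import "fmt"

// sendBuf models a bounded per-peer outbound queue. When the peer is down
// the buffer never drains, so later sends are dropped rather than blocking
// the sender -- the behaviour the warnings above record.
type sendBuf struct {
	ch chan string
}

func newSendBuf(size int) *sendBuf { return &sendBuf{ch: make(chan string, size)} }

// trySend enqueues msg if there is room and reports whether it was dropped.
func (b *sendBuf) trySend(msg string) (dropped bool) {
	select {
	case b.ch <- msg:
		return false
	default: // buffer full: drop instead of blocking
		return true
	}
}

func main() {
	buf := newSendBuf(4) // tiny buffer for illustration
	for i := 0; i < 8; i++ {
		if buf.trySend(fmt.Sprintf("MsgHeartbeat %d", i)) {
			fmt.Printf("dropped heartbeat %d: sending buffer is full\n", i)
		}
	}
}
```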

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (371.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-694782 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-694782 -v=7 --alsologtostderr
E0416 00:03:58.680603   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/functional-596616/client.crt: no such file or directory
E0416 00:04:26.364325   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/functional-596616/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-694782 -v=7 --alsologtostderr: exit status 82 (2m1.935028031s)

                                                
                                                
-- stdout --
	* Stopping node "ha-694782-m04"  ...
	* Stopping node "ha-694782-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0416 00:02:52.868483   31239 out.go:291] Setting OutFile to fd 1 ...
	I0416 00:02:52.868710   31239 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:02:52.868720   31239 out.go:304] Setting ErrFile to fd 2...
	I0416 00:02:52.868724   31239 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:02:52.868907   31239 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-7542/.minikube/bin
	I0416 00:02:52.869126   31239 out.go:298] Setting JSON to false
	I0416 00:02:52.869227   31239 mustload.go:65] Loading cluster: ha-694782
	I0416 00:02:52.869578   31239 config.go:182] Loaded profile config "ha-694782": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 00:02:52.869667   31239 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/config.json ...
	I0416 00:02:52.869831   31239 mustload.go:65] Loading cluster: ha-694782
	I0416 00:02:52.869960   31239 config.go:182] Loaded profile config "ha-694782": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 00:02:52.869992   31239 stop.go:39] StopHost: ha-694782-m04
	I0416 00:02:52.870380   31239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:52.870416   31239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:52.884409   31239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44945
	I0416 00:02:52.884882   31239 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:52.885509   31239 main.go:141] libmachine: Using API Version  1
	I0416 00:02:52.885529   31239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:52.885877   31239 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:52.888355   31239 out.go:177] * Stopping node "ha-694782-m04"  ...
	I0416 00:02:52.889491   31239 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0416 00:02:52.889511   31239 main.go:141] libmachine: (ha-694782-m04) Calling .DriverName
	I0416 00:02:52.889702   31239 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0416 00:02:52.889727   31239 main.go:141] libmachine: (ha-694782-m04) Calling .GetSSHHostname
	I0416 00:02:52.892254   31239 main.go:141] libmachine: (ha-694782-m04) DBG | domain ha-694782-m04 has defined MAC address 52:54:00:18:7d:b0 in network mk-ha-694782
	I0416 00:02:52.892696   31239 main.go:141] libmachine: (ha-694782-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:7d:b0", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:58:43 +0000 UTC Type:0 Mac:52:54:00:18:7d:b0 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:ha-694782-m04 Clientid:01:52:54:00:18:7d:b0}
	I0416 00:02:52.892736   31239 main.go:141] libmachine: (ha-694782-m04) DBG | domain ha-694782-m04 has defined IP address 192.168.39.107 and MAC address 52:54:00:18:7d:b0 in network mk-ha-694782
	I0416 00:02:52.892864   31239 main.go:141] libmachine: (ha-694782-m04) Calling .GetSSHPort
	I0416 00:02:52.892985   31239 main.go:141] libmachine: (ha-694782-m04) Calling .GetSSHKeyPath
	I0416 00:02:52.893091   31239 main.go:141] libmachine: (ha-694782-m04) Calling .GetSSHUsername
	I0416 00:02:52.893243   31239 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m04/id_rsa Username:docker}
	I0416 00:02:52.975890   31239 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0416 00:02:53.028802   31239 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0416 00:02:53.082149   31239 main.go:141] libmachine: Stopping "ha-694782-m04"...
	I0416 00:02:53.082185   31239 main.go:141] libmachine: (ha-694782-m04) Calling .GetState
	I0416 00:02:53.083717   31239 main.go:141] libmachine: (ha-694782-m04) Calling .Stop
	I0416 00:02:53.086897   31239 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 0/120
	I0416 00:02:54.345426   31239 main.go:141] libmachine: (ha-694782-m04) Calling .GetState
	I0416 00:02:54.346772   31239 main.go:141] libmachine: Machine "ha-694782-m04" was stopped.
	I0416 00:02:54.346788   31239 stop.go:75] duration metric: took 1.457297598s to stop
	I0416 00:02:54.346804   31239 stop.go:39] StopHost: ha-694782-m03
	I0416 00:02:54.347130   31239 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:02:54.347175   31239 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:02:54.361691   31239 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44239
	I0416 00:02:54.362118   31239 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:02:54.362576   31239 main.go:141] libmachine: Using API Version  1
	I0416 00:02:54.362597   31239 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:02:54.362943   31239 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:02:54.365006   31239 out.go:177] * Stopping node "ha-694782-m03"  ...
	I0416 00:02:54.366250   31239 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0416 00:02:54.366273   31239 main.go:141] libmachine: (ha-694782-m03) Calling .DriverName
	I0416 00:02:54.366486   31239 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0416 00:02:54.366506   31239 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHHostname
	I0416 00:02:54.369371   31239 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0416 00:02:54.369875   31239 main.go:141] libmachine: (ha-694782-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fc:a7:e5", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:57:21 +0000 UTC Type:0 Mac:52:54:00:fc:a7:e5 Iaid: IPaddr:192.168.39.202 Prefix:24 Hostname:ha-694782-m03 Clientid:01:52:54:00:fc:a7:e5}
	I0416 00:02:54.369915   31239 main.go:141] libmachine: (ha-694782-m03) DBG | domain ha-694782-m03 has defined IP address 192.168.39.202 and MAC address 52:54:00:fc:a7:e5 in network mk-ha-694782
	I0416 00:02:54.370020   31239 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHPort
	I0416 00:02:54.370201   31239 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHKeyPath
	I0416 00:02:54.370341   31239 main.go:141] libmachine: (ha-694782-m03) Calling .GetSSHUsername
	I0416 00:02:54.370453   31239 sshutil.go:53] new ssh client: &{IP:192.168.39.202 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m03/id_rsa Username:docker}
	I0416 00:02:54.453327   31239 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0416 00:02:54.508125   31239 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0416 00:02:54.563523   31239 main.go:141] libmachine: Stopping "ha-694782-m03"...
	I0416 00:02:54.563553   31239 main.go:141] libmachine: (ha-694782-m03) Calling .GetState
	I0416 00:02:54.565075   31239 main.go:141] libmachine: (ha-694782-m03) Calling .Stop
	I0416 00:02:54.568304   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 0/120
	I0416 00:02:55.569925   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 1/120
	I0416 00:02:56.571120   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 2/120
	I0416 00:02:57.572482   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 3/120
	I0416 00:02:58.573794   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 4/120
	I0416 00:02:59.575440   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 5/120
	I0416 00:03:00.576727   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 6/120
	I0416 00:03:01.578112   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 7/120
	I0416 00:03:02.579709   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 8/120
	I0416 00:03:03.581448   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 9/120
	I0416 00:03:04.583551   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 10/120
	I0416 00:03:05.584851   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 11/120
	I0416 00:03:06.586356   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 12/120
	I0416 00:03:07.587696   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 13/120
	I0416 00:03:08.589195   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 14/120
	I0416 00:03:09.590951   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 15/120
	I0416 00:03:10.592528   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 16/120
	I0416 00:03:11.593939   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 17/120
	I0416 00:03:12.595797   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 18/120
	I0416 00:03:13.597151   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 19/120
	I0416 00:03:14.599001   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 20/120
	I0416 00:03:15.600512   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 21/120
	I0416 00:03:16.602030   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 22/120
	I0416 00:03:17.603481   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 23/120
	I0416 00:03:18.604831   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 24/120
	I0416 00:03:19.607110   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 25/120
	I0416 00:03:20.608423   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 26/120
	I0416 00:03:21.609807   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 27/120
	I0416 00:03:22.611265   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 28/120
	I0416 00:03:23.612793   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 29/120
	I0416 00:03:24.614791   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 30/120
	I0416 00:03:25.616427   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 31/120
	I0416 00:03:26.617932   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 32/120
	I0416 00:03:27.619430   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 33/120
	I0416 00:03:28.620683   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 34/120
	I0416 00:03:29.622500   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 35/120
	I0416 00:03:30.623668   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 36/120
	I0416 00:03:31.625023   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 37/120
	I0416 00:03:32.626370   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 38/120
	I0416 00:03:33.627955   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 39/120
	I0416 00:03:34.629670   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 40/120
	I0416 00:03:35.630899   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 41/120
	I0416 00:03:36.632041   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 42/120
	I0416 00:03:37.633231   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 43/120
	I0416 00:03:38.634722   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 44/120
	I0416 00:03:39.636440   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 45/120
	I0416 00:03:40.637790   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 46/120
	I0416 00:03:41.638930   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 47/120
	I0416 00:03:42.640303   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 48/120
	I0416 00:03:43.641772   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 49/120
	I0416 00:03:44.643545   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 50/120
	I0416 00:03:45.644891   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 51/120
	I0416 00:03:46.646443   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 52/120
	I0416 00:03:47.647633   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 53/120
	I0416 00:03:48.648863   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 54/120
	I0416 00:03:49.650483   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 55/120
	I0416 00:03:50.651890   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 56/120
	I0416 00:03:51.653286   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 57/120
	I0416 00:03:52.654580   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 58/120
	I0416 00:03:53.655886   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 59/120
	I0416 00:03:54.657573   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 60/120
	I0416 00:03:55.659547   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 61/120
	I0416 00:03:56.660840   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 62/120
	I0416 00:03:57.662032   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 63/120
	I0416 00:03:58.663298   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 64/120
	I0416 00:03:59.665446   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 65/120
	I0416 00:04:00.666846   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 66/120
	I0416 00:04:01.668081   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 67/120
	I0416 00:04:02.669467   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 68/120
	I0416 00:04:03.670809   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 69/120
	I0416 00:04:04.672632   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 70/120
	I0416 00:04:05.673893   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 71/120
	I0416 00:04:06.675305   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 72/120
	I0416 00:04:07.676566   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 73/120
	I0416 00:04:08.677844   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 74/120
	I0416 00:04:09.679492   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 75/120
	I0416 00:04:10.681667   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 76/120
	I0416 00:04:11.682888   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 77/120
	I0416 00:04:12.684099   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 78/120
	I0416 00:04:13.685435   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 79/120
	I0416 00:04:14.687084   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 80/120
	I0416 00:04:15.688344   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 81/120
	I0416 00:04:16.689799   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 82/120
	I0416 00:04:17.691076   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 83/120
	I0416 00:04:18.692352   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 84/120
	I0416 00:04:19.693953   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 85/120
	I0416 00:04:20.695624   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 86/120
	I0416 00:04:21.697174   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 87/120
	I0416 00:04:22.698478   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 88/120
	I0416 00:04:23.699824   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 89/120
	I0416 00:04:24.701601   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 90/120
	I0416 00:04:25.703395   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 91/120
	I0416 00:04:26.704912   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 92/120
	I0416 00:04:27.706333   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 93/120
	I0416 00:04:28.707514   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 94/120
	I0416 00:04:29.709654   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 95/120
	I0416 00:04:30.711191   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 96/120
	I0416 00:04:31.712365   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 97/120
	I0416 00:04:32.713788   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 98/120
	I0416 00:04:33.715067   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 99/120
	I0416 00:04:34.716656   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 100/120
	I0416 00:04:35.718059   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 101/120
	I0416 00:04:36.719314   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 102/120
	I0416 00:04:37.720772   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 103/120
	I0416 00:04:38.722122   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 104/120
	I0416 00:04:39.723921   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 105/120
	I0416 00:04:40.725120   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 106/120
	I0416 00:04:41.726334   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 107/120
	I0416 00:04:42.727673   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 108/120
	I0416 00:04:43.728874   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 109/120
	I0416 00:04:44.730595   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 110/120
	I0416 00:04:45.731951   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 111/120
	I0416 00:04:46.733066   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 112/120
	I0416 00:04:47.734608   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 113/120
	I0416 00:04:48.735846   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 114/120
	I0416 00:04:49.737287   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 115/120
	I0416 00:04:50.738569   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 116/120
	I0416 00:04:51.739717   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 117/120
	I0416 00:04:52.741057   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 118/120
	I0416 00:04:53.742254   31239 main.go:141] libmachine: (ha-694782-m03) Waiting for machine to stop 119/120
	I0416 00:04:54.742954   31239 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0416 00:04:54.743015   31239 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0416 00:04:54.745233   31239 out.go:177] 
	W0416 00:04:54.746941   31239 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0416 00:04:54.746980   31239 out.go:239] * 
	* 
	W0416 00:04:54.749029   31239 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0416 00:04:54.750474   31239 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 stop -p ha-694782 -v=7 --alsologtostderr" : exit status 82
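The stderr block above shows the failure's shape: minikube backs up /etc/cni and /etc/kubernetes, asks the kvm2 driver to stop the machine, then polls its state roughly once per second for up to 120 attempts ("Waiting for machine to stop N/120"); because ha-694782-m03 still reports "Running" after the final attempt, the command gives up with GUEST_STOP_TIMEOUT and exit status 82. A minimal Go sketch of that bounded poll-until-stopped loop, with a hypothetical getState callback standing in for the libmachine driver call:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForStop polls the machine state once per second, mirroring the
// "Waiting for machine to stop N/120" lines above, and gives up after
// maxAttempts if the VM never leaves the Running state.
func waitForStop(getState func() string, maxAttempts int) error {
	for i := 0; i < maxAttempts; i++ {
		if getState() != "Running" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// Hypothetical stand-in for a VM that never shuts down, as in the failed stop above.
	stuck := func() string { return "Running" }
	if err := waitForStop(stuck, 5); err != nil { // 5 attempts to keep the demo short
		fmt.Println("stop err:", err)
	}
}
```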
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-694782 --wait=true -v=7 --alsologtostderr
E0416 00:07:20.169727   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/client.crt: no such file or directory
E0416 00:08:43.217302   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/client.crt: no such file or directory
E0416 00:08:58.680343   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/functional-596616/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-694782 --wait=true -v=7 --alsologtostderr: (4m7.057011096s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-694782
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-694782 -n ha-694782
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-694782 logs -n 25: (2.03298258s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   |    Version     |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| cp      | ha-694782 cp ha-694782-m03:/home/docker/cp-test.txt                              | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | ha-694782-m02:/home/docker/cp-test_ha-694782-m03_ha-694782-m02.txt               |           |         |                |                     |                     |
	| ssh     | ha-694782 ssh -n                                                                 | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | ha-694782-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-694782 ssh -n ha-694782-m02 sudo cat                                          | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | /home/docker/cp-test_ha-694782-m03_ha-694782-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-694782 cp ha-694782-m03:/home/docker/cp-test.txt                              | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | ha-694782-m04:/home/docker/cp-test_ha-694782-m03_ha-694782-m04.txt               |           |         |                |                     |                     |
	| ssh     | ha-694782 ssh -n                                                                 | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | ha-694782-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-694782 ssh -n ha-694782-m04 sudo cat                                          | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | /home/docker/cp-test_ha-694782-m03_ha-694782-m04.txt                             |           |         |                |                     |                     |
	| cp      | ha-694782 cp testdata/cp-test.txt                                                | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | ha-694782-m04:/home/docker/cp-test.txt                                           |           |         |                |                     |                     |
	| ssh     | ha-694782 ssh -n                                                                 | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | ha-694782-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-694782 cp ha-694782-m04:/home/docker/cp-test.txt                              | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4178900617/001/cp-test_ha-694782-m04.txt |           |         |                |                     |                     |
	| ssh     | ha-694782 ssh -n                                                                 | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | ha-694782-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-694782 cp ha-694782-m04:/home/docker/cp-test.txt                              | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | ha-694782:/home/docker/cp-test_ha-694782-m04_ha-694782.txt                       |           |         |                |                     |                     |
	| ssh     | ha-694782 ssh -n                                                                 | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | ha-694782-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-694782 ssh -n ha-694782 sudo cat                                              | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | /home/docker/cp-test_ha-694782-m04_ha-694782.txt                                 |           |         |                |                     |                     |
	| cp      | ha-694782 cp ha-694782-m04:/home/docker/cp-test.txt                              | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | ha-694782-m02:/home/docker/cp-test_ha-694782-m04_ha-694782-m02.txt               |           |         |                |                     |                     |
	| ssh     | ha-694782 ssh -n                                                                 | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | ha-694782-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-694782 ssh -n ha-694782-m02 sudo cat                                          | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | /home/docker/cp-test_ha-694782-m04_ha-694782-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-694782 cp ha-694782-m04:/home/docker/cp-test.txt                              | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | ha-694782-m03:/home/docker/cp-test_ha-694782-m04_ha-694782-m03.txt               |           |         |                |                     |                     |
	| ssh     | ha-694782 ssh -n                                                                 | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | ha-694782-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-694782 ssh -n ha-694782-m03 sudo cat                                          | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | /home/docker/cp-test_ha-694782-m04_ha-694782-m03.txt                             |           |         |                |                     |                     |
	| node    | ha-694782 node stop m02 -v=7                                                     | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| node    | ha-694782 node start m02 -v=7                                                    | ha-694782 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:01 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| node    | list -p ha-694782 -v=7                                                           | ha-694782 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:02 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| stop    | -p ha-694782 -v=7                                                                | ha-694782 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:02 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| start   | -p ha-694782 --wait=true -v=7                                                    | ha-694782 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:04 UTC | 16 Apr 24 00:09 UTC |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| node    | list -p ha-694782                                                                | ha-694782 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:09 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 00:04:54
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 00:04:54.807417   31753 out.go:291] Setting OutFile to fd 1 ...
	I0416 00:04:54.807572   31753 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:04:54.807584   31753 out.go:304] Setting ErrFile to fd 2...
	I0416 00:04:54.807590   31753 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:04:54.807788   31753 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-7542/.minikube/bin
	I0416 00:04:54.808329   31753 out.go:298] Setting JSON to false
	I0416 00:04:54.809228   31753 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2839,"bootTime":1713223056,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0416 00:04:54.809286   31753 start.go:139] virtualization: kvm guest
	I0416 00:04:54.811503   31753 out.go:177] * [ha-694782] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0416 00:04:54.812707   31753 out.go:177]   - MINIKUBE_LOCATION=18647
	I0416 00:04:54.814066   31753 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 00:04:54.812740   31753 notify.go:220] Checking for updates...
	I0416 00:04:54.816456   31753 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0416 00:04:54.817744   31753 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18647-7542/.minikube
	I0416 00:04:54.819176   31753 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0416 00:04:54.820588   31753 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 00:04:54.822324   31753 config.go:182] Loaded profile config "ha-694782": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 00:04:54.822435   31753 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 00:04:54.822850   31753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:04:54.822905   31753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:04:54.837073   31753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44757
	I0416 00:04:54.837563   31753 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:04:54.838143   31753 main.go:141] libmachine: Using API Version  1
	I0416 00:04:54.838164   31753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:04:54.838532   31753 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:04:54.838738   31753 main.go:141] libmachine: (ha-694782) Calling .DriverName
	I0416 00:04:54.871316   31753 out.go:177] * Using the kvm2 driver based on existing profile
	I0416 00:04:54.872574   31753 start.go:297] selected driver: kvm2
	I0416 00:04:54.872587   31753 start.go:901] validating driver "kvm2" against &{Name:ha-694782 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.29.3 ClusterName:ha-694782 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.42 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.107 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk
:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 00:04:54.872730   31753 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 00:04:54.873019   31753 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 00:04:54.873073   31753 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18647-7542/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0416 00:04:54.887722   31753 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0416 00:04:54.888528   31753 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 00:04:54.888598   31753 cni.go:84] Creating CNI manager for ""
	I0416 00:04:54.888613   31753 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0416 00:04:54.888677   31753 start.go:340] cluster config:
	{Name:ha-694782 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-694782 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.42 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.107 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-till
er:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 00:04:54.888808   31753 iso.go:125] acquiring lock: {Name:mk848ef90fbc2a1876645fc8fc16af382c3bcaa9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 00:04:54.891216   31753 out.go:177] * Starting "ha-694782" primary control-plane node in "ha-694782" cluster
	I0416 00:04:54.892323   31753 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 00:04:54.892356   31753 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0416 00:04:54.892368   31753 cache.go:56] Caching tarball of preloaded images
	I0416 00:04:54.892447   31753 preload.go:173] Found /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0416 00:04:54.892460   31753 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0416 00:04:54.892602   31753 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/config.json ...
	I0416 00:04:54.892796   31753 start.go:360] acquireMachinesLock for ha-694782: {Name:mk92bff49461487f8cebf2747ccf61ccb9c772a2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 00:04:54.892848   31753 start.go:364] duration metric: took 34.063µs to acquireMachinesLock for "ha-694782"
	I0416 00:04:54.892869   31753 start.go:96] Skipping create...Using existing machine configuration
	I0416 00:04:54.892886   31753 fix.go:54] fixHost starting: 
	I0416 00:04:54.893223   31753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:04:54.893263   31753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:04:54.906850   31753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34659
	I0416 00:04:54.907187   31753 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:04:54.907627   31753 main.go:141] libmachine: Using API Version  1
	I0416 00:04:54.907647   31753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:04:54.907909   31753 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:04:54.908136   31753 main.go:141] libmachine: (ha-694782) Calling .DriverName
	I0416 00:04:54.908284   31753 main.go:141] libmachine: (ha-694782) Calling .GetState
	I0416 00:04:54.909694   31753 fix.go:112] recreateIfNeeded on ha-694782: state=Running err=<nil>
	W0416 00:04:54.909722   31753 fix.go:138] unexpected machine state, will restart: <nil>
	I0416 00:04:54.911499   31753 out.go:177] * Updating the running kvm2 "ha-694782" VM ...
	I0416 00:04:54.912784   31753 machine.go:94] provisionDockerMachine start ...
	I0416 00:04:54.912804   31753 main.go:141] libmachine: (ha-694782) Calling .DriverName
	I0416 00:04:54.912975   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0416 00:04:54.915096   31753 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:04:54.915496   31753 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0416 00:04:54.915526   31753 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:04:54.915625   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0416 00:04:54.915779   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0416 00:04:54.915909   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0416 00:04:54.916016   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0416 00:04:54.916117   31753 main.go:141] libmachine: Using SSH client type: native
	I0416 00:04:54.916316   31753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I0416 00:04:54.916329   31753 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 00:04:55.034364   31753 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-694782
	
	I0416 00:04:55.034400   31753 main.go:141] libmachine: (ha-694782) Calling .GetMachineName
	I0416 00:04:55.034647   31753 buildroot.go:166] provisioning hostname "ha-694782"
	I0416 00:04:55.034669   31753 main.go:141] libmachine: (ha-694782) Calling .GetMachineName
	I0416 00:04:55.034837   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0416 00:04:55.037483   31753 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:04:55.037867   31753 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0416 00:04:55.037890   31753 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:04:55.038124   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0416 00:04:55.038291   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0416 00:04:55.038478   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0416 00:04:55.038650   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0416 00:04:55.038812   31753 main.go:141] libmachine: Using SSH client type: native
	I0416 00:04:55.039036   31753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I0416 00:04:55.039053   31753 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-694782 && echo "ha-694782" | sudo tee /etc/hostname
	I0416 00:04:55.172006   31753 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-694782
	
	I0416 00:04:55.172032   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0416 00:04:55.174734   31753 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:04:55.175136   31753 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0416 00:04:55.175171   31753 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:04:55.175370   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0416 00:04:55.175560   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0416 00:04:55.175693   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0416 00:04:55.175835   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0416 00:04:55.176006   31753 main.go:141] libmachine: Using SSH client type: native
	I0416 00:04:55.176213   31753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I0416 00:04:55.176232   31753 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-694782' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-694782/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-694782' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 00:04:55.290147   31753 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 00:04:55.290180   31753 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18647-7542/.minikube CaCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18647-7542/.minikube}
	I0416 00:04:55.290211   31753 buildroot.go:174] setting up certificates
	I0416 00:04:55.290219   31753 provision.go:84] configureAuth start
	I0416 00:04:55.290228   31753 main.go:141] libmachine: (ha-694782) Calling .GetMachineName
	I0416 00:04:55.290471   31753 main.go:141] libmachine: (ha-694782) Calling .GetIP
	I0416 00:04:55.293093   31753 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:04:55.293461   31753 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0416 00:04:55.293489   31753 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:04:55.293618   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0416 00:04:55.295806   31753 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:04:55.296142   31753 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0416 00:04:55.296181   31753 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:04:55.296307   31753 provision.go:143] copyHostCerts
	I0416 00:04:55.296337   31753 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem
	I0416 00:04:55.296384   31753 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem, removing ...
	I0416 00:04:55.296397   31753 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem
	I0416 00:04:55.296475   31753 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem (1082 bytes)
	I0416 00:04:55.296586   31753 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem
	I0416 00:04:55.296613   31753 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem, removing ...
	I0416 00:04:55.296622   31753 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem
	I0416 00:04:55.296658   31753 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem (1123 bytes)
	I0416 00:04:55.296731   31753 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem
	I0416 00:04:55.296755   31753 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem, removing ...
	I0416 00:04:55.296764   31753 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem
	I0416 00:04:55.296798   31753 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem (1675 bytes)
	I0416 00:04:55.296878   31753 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem org=jenkins.ha-694782 san=[127.0.0.1 192.168.39.41 ha-694782 localhost minikube]
	I0416 00:04:55.464849   31753 provision.go:177] copyRemoteCerts
	I0416 00:04:55.464903   31753 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 00:04:55.464923   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0416 00:04:55.467721   31753 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:04:55.468085   31753 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0416 00:04:55.468116   31753 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:04:55.468253   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0416 00:04:55.468440   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0416 00:04:55.468566   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0416 00:04:55.468700   31753 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782/id_rsa Username:docker}
	I0416 00:04:55.555647   31753 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0416 00:04:55.555736   31753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0416 00:04:55.583491   31753 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0416 00:04:55.583576   31753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0416 00:04:55.614128   31753 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0416 00:04:55.614209   31753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0416 00:04:55.644199   31753 provision.go:87] duration metric: took 353.966861ms to configureAuth
	I0416 00:04:55.644228   31753 buildroot.go:189] setting minikube options for container-runtime
	I0416 00:04:55.644487   31753 config.go:182] Loaded profile config "ha-694782": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 00:04:55.644561   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0416 00:04:55.647183   31753 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:04:55.647530   31753 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0416 00:04:55.647555   31753 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:04:55.647752   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0416 00:04:55.647980   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0416 00:04:55.648191   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0416 00:04:55.648345   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0416 00:04:55.648522   31753 main.go:141] libmachine: Using SSH client type: native
	I0416 00:04:55.648716   31753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I0416 00:04:55.648737   31753 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0416 00:06:26.451540   31753 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0416 00:06:26.451566   31753 machine.go:97] duration metric: took 1m31.538769373s to provisionDockerMachine
	I0416 00:06:26.451581   31753 start.go:293] postStartSetup for "ha-694782" (driver="kvm2")
	I0416 00:06:26.451593   31753 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 00:06:26.451606   31753 main.go:141] libmachine: (ha-694782) Calling .DriverName
	I0416 00:06:26.451946   31753 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 00:06:26.451973   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0416 00:06:26.455053   31753 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:06:26.455532   31753 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0416 00:06:26.455559   31753 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:06:26.455695   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0416 00:06:26.455841   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0416 00:06:26.456013   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0416 00:06:26.456137   31753 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782/id_rsa Username:docker}
	I0416 00:06:26.544716   31753 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 00:06:26.548847   31753 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 00:06:26.548865   31753 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/addons for local assets ...
	I0416 00:06:26.548914   31753 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/files for local assets ...
	I0416 00:06:26.548997   31753 filesync.go:149] local asset: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem -> 148972.pem in /etc/ssl/certs
	I0416 00:06:26.549008   31753 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem -> /etc/ssl/certs/148972.pem
	I0416 00:06:26.549083   31753 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 00:06:26.558816   31753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /etc/ssl/certs/148972.pem (1708 bytes)
	I0416 00:06:26.584188   31753 start.go:296] duration metric: took 132.595048ms for postStartSetup
	I0416 00:06:26.584234   31753 main.go:141] libmachine: (ha-694782) Calling .DriverName
	I0416 00:06:26.584494   31753 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0416 00:06:26.584515   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0416 00:06:26.586939   31753 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:06:26.587279   31753 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0416 00:06:26.587308   31753 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:06:26.587423   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0416 00:06:26.587588   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0416 00:06:26.587734   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0416 00:06:26.587940   31753 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782/id_rsa Username:docker}
	W0416 00:06:26.674851   31753 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0416 00:06:26.674904   31753 fix.go:56] duration metric: took 1m31.782025943s for fixHost
	I0416 00:06:26.674924   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0416 00:06:26.677495   31753 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:06:26.677807   31753 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0416 00:06:26.677835   31753 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:06:26.677968   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0416 00:06:26.678166   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0416 00:06:26.678355   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0416 00:06:26.678501   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0416 00:06:26.678640   31753 main.go:141] libmachine: Using SSH client type: native
	I0416 00:06:26.678787   31753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I0416 00:06:26.678796   31753 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0416 00:06:26.793908   31753 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713225986.763318056
	
	I0416 00:06:26.793933   31753 fix.go:216] guest clock: 1713225986.763318056
	I0416 00:06:26.793940   31753 fix.go:229] Guest: 2024-04-16 00:06:26.763318056 +0000 UTC Remote: 2024-04-16 00:06:26.674911471 +0000 UTC m=+91.914140071 (delta=88.406585ms)
	I0416 00:06:26.793987   31753 fix.go:200] guest clock delta is within tolerance: 88.406585ms
	I0416 00:06:26.793996   31753 start.go:83] releasing machines lock for "ha-694782", held for 1m31.901135118s
	I0416 00:06:26.794018   31753 main.go:141] libmachine: (ha-694782) Calling .DriverName
	I0416 00:06:26.794225   31753 main.go:141] libmachine: (ha-694782) Calling .GetIP
	I0416 00:06:26.796875   31753 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:06:26.797276   31753 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0416 00:06:26.797294   31753 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:06:26.797482   31753 main.go:141] libmachine: (ha-694782) Calling .DriverName
	I0416 00:06:26.797949   31753 main.go:141] libmachine: (ha-694782) Calling .DriverName
	I0416 00:06:26.798119   31753 main.go:141] libmachine: (ha-694782) Calling .DriverName
	I0416 00:06:26.798206   31753 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 00:06:26.798243   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0416 00:06:26.798301   31753 ssh_runner.go:195] Run: cat /version.json
	I0416 00:06:26.798325   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0416 00:06:26.800720   31753 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:06:26.800973   31753 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:06:26.801088   31753 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0416 00:06:26.801118   31753 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:06:26.801259   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0416 00:06:26.801407   31753 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0416 00:06:26.801425   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0416 00:06:26.801432   31753 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:06:26.801584   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0416 00:06:26.801738   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0416 00:06:26.801740   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0416 00:06:26.801888   31753 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782/id_rsa Username:docker}
	I0416 00:06:26.801955   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0416 00:06:26.802079   31753 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782/id_rsa Username:docker}
	I0416 00:06:26.915967   31753 ssh_runner.go:195] Run: systemctl --version
	I0416 00:06:26.922503   31753 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0416 00:06:27.088263   31753 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 00:06:27.095257   31753 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 00:06:27.095319   31753 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 00:06:27.104578   31753 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0416 00:06:27.104599   31753 start.go:494] detecting cgroup driver to use...
	I0416 00:06:27.104655   31753 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 00:06:27.122854   31753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 00:06:27.136982   31753 docker.go:217] disabling cri-docker service (if available) ...
	I0416 00:06:27.137040   31753 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 00:06:27.150356   31753 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 00:06:27.163503   31753 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 00:06:27.321079   31753 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 00:06:27.474913   31753 docker.go:233] disabling docker service ...
	I0416 00:06:27.474992   31753 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 00:06:27.493116   31753 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 00:06:27.508239   31753 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 00:06:27.660093   31753 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 00:06:27.812356   31753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0416 00:06:27.826319   31753 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 00:06:27.846781   31753 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0416 00:06:27.846841   31753 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:06:27.857571   31753 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 00:06:27.857620   31753 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:06:27.867731   31753 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:06:27.877749   31753 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:06:27.888673   31753 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 00:06:27.899177   31753 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:06:27.909720   31753 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:06:27.922116   31753 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:06:27.932879   31753 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 00:06:27.942369   31753 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 00:06:27.951866   31753 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 00:06:28.095931   31753 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0416 00:06:33.900249   31753 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.804287471s)
	I0416 00:06:33.900288   31753 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0416 00:06:33.900342   31753 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0416 00:06:33.905375   31753 start.go:562] Will wait 60s for crictl version
	I0416 00:06:33.905430   31753 ssh_runner.go:195] Run: which crictl
	I0416 00:06:33.909251   31753 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 00:06:33.948496   31753 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0416 00:06:33.948567   31753 ssh_runner.go:195] Run: crio --version
	I0416 00:06:33.978383   31753 ssh_runner.go:195] Run: crio --version
	I0416 00:06:34.009763   31753 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0416 00:06:34.011090   31753 main.go:141] libmachine: (ha-694782) Calling .GetIP
	I0416 00:06:34.013809   31753 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:06:34.014185   31753 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0416 00:06:34.014212   31753 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:06:34.014428   31753 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0416 00:06:34.019233   31753 kubeadm.go:877] updating cluster {Name:ha-694782 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Cl
usterName:ha-694782 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.42 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.107 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 00:06:34.019358   31753 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 00:06:34.019410   31753 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 00:06:34.065997   31753 crio.go:514] all images are preloaded for cri-o runtime.
	I0416 00:06:34.066016   31753 crio.go:433] Images already preloaded, skipping extraction
	I0416 00:06:34.066068   31753 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 00:06:34.100404   31753 crio.go:514] all images are preloaded for cri-o runtime.
	I0416 00:06:34.100421   31753 cache_images.go:84] Images are preloaded, skipping loading
	I0416 00:06:34.100429   31753 kubeadm.go:928] updating node { 192.168.39.41 8443 v1.29.3 crio true true} ...
	I0416 00:06:34.100516   31753 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-694782 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.41
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-694782 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 00:06:34.100575   31753 ssh_runner.go:195] Run: crio config
	I0416 00:06:34.147863   31753 cni.go:84] Creating CNI manager for ""
	I0416 00:06:34.147893   31753 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0416 00:06:34.147902   31753 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 00:06:34.147921   31753 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.41 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-694782 NodeName:ha-694782 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.41"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.41 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 00:06:34.148047   31753 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.41
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-694782"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.41
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.41"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
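	The block above is the full kubeadm configuration that is later written to /var/tmp/minikube/kubeadm.yaml.new on the node. As a hedged aside (editorial, not part of the test output), a generated config like this can be sanity-checked with a kubeadm dry run before it is applied:
	
	  # illustrative only -- validates the generated config without touching the cluster
	  sudo /var/lib/minikube/binaries/v1.29.3/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run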
	I0416 00:06:34.148069   31753 kube-vip.go:111] generating kube-vip config ...
	I0416 00:06:34.148109   31753 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0416 00:06:34.160356   31753 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0416 00:06:34.160456   31753 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
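	This kube-vip manifest is copied to /etc/kubernetes/manifests/kube-vip.yaml a few lines below, so the kubelet runs it as a static pod on the control-plane node. A minimal, illustrative check (not part of the test run) that the static pod actually came up, assuming crictl is pointed at the CRI-O socket as elsewhere in this log:
	
	  # lists running containers whose name matches kube-vip
	  sudo crictl ps --name kube-vip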
	I0416 00:06:34.160505   31753 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0416 00:06:34.170046   31753 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 00:06:34.170105   31753 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0416 00:06:34.180299   31753 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0416 00:06:34.197054   31753 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 00:06:34.214106   31753 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0416 00:06:34.231822   31753 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0416 00:06:34.250349   31753 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0416 00:06:34.254629   31753 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 00:06:34.401374   31753 ssh_runner.go:195] Run: sudo systemctl start kubelet
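	The 10-kubeadm.conf drop-in copied above carries the ExecStart override shown earlier (the empty ExecStart= line followed by the full kubelet command line); daemon-reload plus start picks it up. A hedged way to confirm which override is in effect on the node:
	
	  # prints the kubelet unit file together with any drop-ins, including 10-kubeadm.conf
	  sudo systemctl cat kubelet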
	I0416 00:06:34.416310   31753 certs.go:68] Setting up /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782 for IP: 192.168.39.41
	I0416 00:06:34.416328   31753 certs.go:194] generating shared ca certs ...
	I0416 00:06:34.416347   31753 certs.go:226] acquiring lock for ca certs: {Name:mkcfa1570e683d94647c63485e1bbb8cf0788316 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 00:06:34.416538   31753 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key
	I0416 00:06:34.416595   31753 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key
	I0416 00:06:34.416609   31753 certs.go:256] generating profile certs ...
	I0416 00:06:34.416685   31753 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/client.key
	I0416 00:06:34.416719   31753 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.key.b4243877
	I0416 00:06:34.416743   31753 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.crt.b4243877 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.41 192.168.39.42 192.168.39.202 192.168.39.254]
	I0416 00:06:34.484701   31753 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.crt.b4243877 ...
	I0416 00:06:34.484739   31753 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.crt.b4243877: {Name:mkb0baec2c01d8c82f7217ea6fcb92d550314c3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 00:06:34.484925   31753 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.key.b4243877 ...
	I0416 00:06:34.484947   31753 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.key.b4243877: {Name:mkfaa52d47977a253daa734467f979e5cee152ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 00:06:34.485040   31753 certs.go:381] copying /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.crt.b4243877 -> /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.crt
	I0416 00:06:34.485231   31753 certs.go:385] copying /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.key.b4243877 -> /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.key
	I0416 00:06:34.485407   31753 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/proxy-client.key
	I0416 00:06:34.485426   31753 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0416 00:06:34.485444   31753 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0416 00:06:34.485469   31753 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0416 00:06:34.485497   31753 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0416 00:06:34.485516   31753 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0416 00:06:34.485534   31753 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0416 00:06:34.485556   31753 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0416 00:06:34.485571   31753 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0416 00:06:34.485634   31753 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem (1338 bytes)
	W0416 00:06:34.485678   31753 certs.go:480] ignoring /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897_empty.pem, impossibly tiny 0 bytes
	I0416 00:06:34.485691   31753 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem (1679 bytes)
	I0416 00:06:34.485722   31753 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem (1082 bytes)
	I0416 00:06:34.485753   31753 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem (1123 bytes)
	I0416 00:06:34.485784   31753 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem (1675 bytes)
	I0416 00:06:34.485846   31753 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem (1708 bytes)
	I0416 00:06:34.485883   31753 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0416 00:06:34.485903   31753 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem -> /usr/share/ca-certificates/14897.pem
	I0416 00:06:34.485921   31753 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem -> /usr/share/ca-certificates/148972.pem
	I0416 00:06:34.486550   31753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 00:06:34.515086   31753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 00:06:34.539901   31753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 00:06:34.564024   31753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0416 00:06:34.588241   31753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0416 00:06:34.614591   31753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0416 00:06:34.641748   31753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 00:06:34.667735   31753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0416 00:06:34.693132   31753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 00:06:34.718199   31753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem --> /usr/share/ca-certificates/14897.pem (1338 bytes)
	I0416 00:06:34.743208   31753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /usr/share/ca-certificates/148972.pem (1708 bytes)
	I0416 00:06:34.769137   31753 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 00:06:34.787125   31753 ssh_runner.go:195] Run: openssl version
	I0416 00:06:34.793338   31753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148972.pem && ln -fs /usr/share/ca-certificates/148972.pem /etc/ssl/certs/148972.pem"
	I0416 00:06:34.804574   31753 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148972.pem
	I0416 00:06:34.809228   31753 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 23:49 /usr/share/ca-certificates/148972.pem
	I0416 00:06:34.810908   31753 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148972.pem
	I0416 00:06:34.816840   31753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148972.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 00:06:34.826627   31753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 00:06:34.837683   31753 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 00:06:34.842293   31753 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0416 00:06:34.842345   31753 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 00:06:34.848075   31753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 00:06:34.861943   31753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14897.pem && ln -fs /usr/share/ca-certificates/14897.pem /etc/ssl/certs/14897.pem"
	I0416 00:06:34.877504   31753 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14897.pem
	I0416 00:06:34.882127   31753 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 23:49 /usr/share/ca-certificates/14897.pem
	I0416 00:06:34.882173   31753 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14897.pem
	I0416 00:06:34.887897   31753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14897.pem /etc/ssl/certs/51391683.0"
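	The three test/ln -fs pairs above all follow the same pattern: OpenSSL's subject hash of each CA certificate is used as the symlink name so that verification code can find the CA under /etc/ssl/certs. A minimal sketch of that step for one of the CAs (illustrative, not taken from the log):
	
	  # the log derives b5213941 as the subject hash for minikubeCA.pem
	  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"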
	I0416 00:06:34.898056   31753 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 00:06:34.902635   31753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0416 00:06:34.908357   31753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0416 00:06:34.914115   31753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0416 00:06:34.919986   31753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0416 00:06:34.925930   31753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0416 00:06:34.931802   31753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
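	Each of the -checkend 86400 runs above asks OpenSSL whether the certificate will still be valid 86400 seconds (24 hours) from now; a zero exit status means it will not expire within that window. An illustrative standalone version of the same check, using the apiserver cert copied earlier as an example:
	
	  # exit 0: valid for at least another 24h; non-zero: expires (or has expired) within 24h
	  openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	    && echo "apiserver cert ok" || echo "apiserver cert expires within 24h"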
	I0416 00:06:34.937571   31753 kubeadm.go:391] StartCluster: {Name:ha-694782 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Clust
erName:ha-694782 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.42 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.107 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 00:06:34.937694   31753 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0416 00:06:34.937756   31753 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 00:06:34.982315   31753 cri.go:89] found id: "ca69ec229237f7c2dc9419d3b53e88f4383563c5ad1fc83604d6f4430c5b603c"
	I0416 00:06:34.982346   31753 cri.go:89] found id: "7eccbdc2e01dcc65a90d218b473d663e277b7a46b0958abdd3ba20c433158705"
	I0416 00:06:34.982350   31753 cri.go:89] found id: "02481da89a7ad84df32e1762276a5ead48145e5ce75080ed1b392d2b2a7d445a"
	I0416 00:06:34.982353   31753 cri.go:89] found id: "b4afee0fee237dddcd3e9aaba26c488c038e8b1cfa73874e245e6dbcdf46e387"
	I0416 00:06:34.982355   31753 cri.go:89] found id: "b3559e8687b295e6fdcb60e5e4050bd11f66bbe0eadc1cb26920a345a5ac4764"
	I0416 00:06:34.982359   31753 cri.go:89] found id: "a62edf63e9633afa138049c4146dcf4b2f5135b1fc485fdc8071c8ee36b07a2d"
	I0416 00:06:34.982361   31753 cri.go:89] found id: "b3a501d70f72c9551b55ad858eaec6232180f6589a34825144a580391cdf53a2"
	I0416 00:06:34.982363   31753 cri.go:89] found id: "b55cb00c20162f1cfd9e72b8001f61983630aeb30b827f36d39067dae5d359d7"
	I0416 00:06:34.982369   31753 cri.go:89] found id: "9f8c32adffdfe920d33a3d2aadb4e5d70c83321d5e0ed04b5e651b3338f8868c"
	I0416 00:06:34.982374   31753 cri.go:89] found id: "9d17ec84664efd04bf01be034fec6b0ffd8f3e561bc06951f63cd95553952cf5"
	I0416 00:06:34.982377   31753 cri.go:89] found id: "7d4ea2215ec6217956d87feb4c68ad8ace3136456a7bc720dcc7c721b87f66f4"
	I0416 00:06:34.982379   31753 cri.go:89] found id: "553d7f07f43e6f068bf41c8f0562f161939b4c2f6b1241c11c0db16309a6cbdf"
	I0416 00:06:34.982382   31753 cri.go:89] found id: "8a682dce5ef12dd6c80bdebd8ef67c034ebd1c88d5e144fc177805ad5eb35efe"
	I0416 00:06:34.982384   31753 cri.go:89] found id: ""
	I0416 00:06:34.982431   31753 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Apr 16 00:09:02 ha-694782 crio[3829]: time="2024-04-16 00:09:02.592256460Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713226142592094198,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aee3a2a7-33da-466d-9850-35ca8b2f385a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 00:09:02 ha-694782 crio[3829]: time="2024-04-16 00:09:02.592697460Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cba4d6f6-2f54-40c6-85e5-55262abc000b name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:09:02 ha-694782 crio[3829]: time="2024-04-16 00:09:02.592784795Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cba4d6f6-2f54-40c6-85e5-55262abc000b name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:09:02 ha-694782 crio[3829]: time="2024-04-16 00:09:02.593229216Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:39ab5ccb1b4f0e42bd80678c58d3330ba78f59d6acdea375ddb8487b94a3e557,PodSandboxId:16a1fcad619fb9858cf95428849d64e4b460a64124801b7c095f3dd616487edf,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713226058599295462,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-99cs7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b3bc7e7-fd85-4dc7-ba53-c74fe0d213e3,},Annotations:map[string]string{io.kubernetes.container.hash: e6fad754,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6b524648067e74a7474e617616cbf07b5cfa3884641ba982cdebdcb006d1b1,PodSandboxId:cdde1b0476e6b45e76f6c17e08a42915034ff2a2f13f84edd1d92298383a2f55,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713226044612150370,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b443ba5c534abe08b64f6dcd05be16a,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc7945e20ade550a66bb998ef22646661865ad45284f188ba6e1587fb32a4e40,PodSandboxId:f1a7ff92a40ef7366733dae31aa2e8b15f0ca073c2394c0594fab15f62dd18c3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713226043600969243,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bea9c166-5f83-473f-8f01-335ea1436dad,},Annotations:map[string]string{io.kubernetes.container.hash: 26b87359,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f77c6b5d4b2208fa6eea38d1d96605adfcd20dd6066bc818ecf7fbfd5ce64a4,PodSandboxId:0e9faf34ef3728d1ac4db2d248157df5089aa59030cac0473d3269344359b179,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713226040598038651,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a733f3b6fc63c6f5e84f944f7d76e1a4,},Annotations:map[string]string{io.kubernetes.container.hash: 6258141c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fbcafa65cad837018bf187403310acf5ab60914a8243b0fd7d1f11db5749bd8,PodSandboxId:405c8eb6fa3ddbc512c8b2a9149eab41316500a745f750fb93e6bce5c1fcf398,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713226035006122203,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-vsvrq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d510538f-3535-428b-8933-e3d6de6777eb,},Annotations:map[string]string{io.kubernetes.container.hash: 83ddc528,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ece67cb0b89e483776ea5bea240e0ad3b4df15e4f3a9e4304627cf09c9fb73e,PodSandboxId:59855c2bfcb6823485252892145180c20c60375407f722d38125b292c621593e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713226018272055529,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11d49cc4234c7987e40c6a010ebfc82b,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:6cbda3b869057546eceda5804afdc7ab4b4ae44d5b847cbab9b96c00cf8783c9,PodSandboxId:f1a7ff92a40ef7366733dae31aa2e8b15f0ca073c2394c0594fab15f62dd18c3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713226001397069475,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bea9c166-5f83-473f-8f01-335ea1436dad,},Annotations:map[string]string{io.kubernetes.container.hash: 26b87359,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:3c5fc74cb4adad4166f2600954b0cdf2132c9dfab6bcebaa4ca82a828c264cc9,PodSandboxId:77394139e635e45b1747011d9ef79e2bfa982d467132852bf67c413000543289,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713226001455371861,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d46v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c92235e6-1639-45c0-a92b-bf0cc32bea22,},Annotations:map[string]string{io.kubernetes.container.hash: f515a84d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1a812f
61cab74eac095d0eaf25f044bcd3c0b40542f648850ed3358c4a9cf07,PodSandboxId:b88cd0e11b61fcb7e273bbae9b9f813e3e9a633a535dab171fd701ed953a2d7e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713226001471393373,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d60a0238d152f42b26bd8630ed822b52,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fb07b1996f5bc461dcd7a9620a8
1b1e6c8ba652def7f66ead633fda77b4af08,PodSandboxId:633279be0125a119026ccc5953e99f34930a10d0a33d756cef37e4121d3a58f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713226001354706944,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d68ab2950732b234de6161a8265b14cc,},Annotations:map[string]string{io.kubernetes.container.hash: 94537991,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cecd296d2e7c995524a52118c632cabb6ea81bed6f1db33f69f2986ef86204d,PodSandboxId:c
dde1b0476e6b45e76f6c17e08a42915034ff2a2f13f84edd1d92298383a2f55,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1713226001256489698,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b443ba5c534abe08b64f6dcd05be16a,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2b6c0ffbd95d979207c1b762faf871abef4bb7cf28b4e6d08021db0172f9168,PodSandbo
xId:0e9faf34ef3728d1ac4db2d248157df5089aa59030cac0473d3269344359b179,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1713226001225156029,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a733f3b6fc63c6f5e84f944f7d76e1a4,},Annotations:map[string]string{io.kubernetes.container.hash: 6258141c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5916fb4a53a136bf90f5630e11363bf461e58fe804cb1e286d54ad6d31f93c96,PodSandboxId:16a1fcad619fb9858c
f95428849d64e4b460a64124801b7c095f3dd616487edf,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713225996694034889,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-99cs7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b3bc7e7-fd85-4dc7-ba53-c74fe0d213e3,},Annotations:map[string]string{io.kubernetes.container.hash: e6fad754,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ca994edd19ee7bf77418af14b91ce946c17574b835c117e41bfd965b7c92ac5,PodSandboxId:0ec9ed06a0fb4e24a1215d14da18e280badfceceb20bcbcb8b905
ed30afe614a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713225996450208724,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-zdc8q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a7e1a29-8c75-4d1f-978b-471ac0adb888,},Annotations:map[string]string{io.kubernetes.container.hash: e9e68e98,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a64f8280e0c729e39a8b27c9117de8b73b1b7d2999de6a4769f1c577176f16f8,PodSandboxId:a371dbf543077fc460f104739a29dc30a26b86c08db4838ebda547bf3d5b5d72,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713225996397214193,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-4sgv4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c1f65c0-37b2-4c88-879b-68297e989d44,},Annotations:map[string]string{io.kubernetes.container.hash: 2558243c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPor
t\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10abaa8fc3a416f4f6e6af525fcc65e0613ea769d731660a81e4e6a425fa4d6c,PodSandboxId:df7bc8cc3af912521d7dab8c802c0b04f7447ccb3d192040071875ff6a6ed89d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713225502012625031,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-vsvrq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d510538f-3535-428b-8933-e3d6de6777eb,},Annotations:map[string]string{io.kuber
netes.container.hash: 83ddc528,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a62edf63e9633afa138049c4146dcf4b2f5135b1fc485fdc8071c8ee36b07a2d,PodSandboxId:773aba8a13222bacf0c0e79c78ec31764b5af16b9bc416140f303b36465cce2b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713225349861459156,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-zdc8q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a7e1a29-8c75-4d1f-978b-471ac0adb888,},Annotations:map[string]string{io.kubernetes.container.hash: e9e68e98,i
o.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3a501d70f72c9551b55ad858eaec6232180f6589a34825144a580391cdf53a2,PodSandboxId:cc571f90808ddcdef413b709640e27f67d9d861628a9d232886db9a496a57712,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713225349825077174,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: core
dns-76f75df574-4sgv4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c1f65c0-37b2-4c88-879b-68297e989d44,},Annotations:map[string]string{io.kubernetes.container.hash: 2558243c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b55cb00c20162f1cfd9e72b8001f61983630aeb30b827f36d39067dae5d359d7,PodSandboxId:f34915e87e4008b765d7b34d6619b29c22eddc157e2e96893518ff9709538560,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b
0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1713225346210711538,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d46v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c92235e6-1639-45c0-a92b-bf0cc32bea22,},Annotations:map[string]string{io.kubernetes.container.hash: f515a84d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d17ec84664efd04bf01be034fec6b0ffd8f3e561bc06951f63cd95553952cf5,PodSandboxId:41e04a0d8a0ba492c448f0c8d919cb86eb887cc0a8198d99815e7f7eed50b944,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83
d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713225326796884514,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d68ab2950732b234de6161a8265b14cc,},Annotations:map[string]string{io.kubernetes.container.hash: 94537991,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:553d7f07f43e6f068bf41c8f0562f161939b4c2f6b1241c11c0db16309a6cbdf,PodSandboxId:cc8f87bd6e0dc433462a51cd028d6d774aa14a6c762f3f6a79999daea3870547,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,Create
dAt:1713225326704620344,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d60a0238d152f42b26bd8630ed822b52,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cba4d6f6-2f54-40c6-85e5-55262abc000b name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:09:02 ha-694782 crio[3829]: time="2024-04-16 00:09:02.646451959Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=34e64785-48f9-4473-9559-70edf9c1e5c3 name=/runtime.v1.RuntimeService/Version
	Apr 16 00:09:02 ha-694782 crio[3829]: time="2024-04-16 00:09:02.646543760Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=34e64785-48f9-4473-9559-70edf9c1e5c3 name=/runtime.v1.RuntimeService/Version
	Apr 16 00:09:02 ha-694782 crio[3829]: time="2024-04-16 00:09:02.648194366Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=413ba732-639e-42b4-92d9-46942a37eb20 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 00:09:02 ha-694782 crio[3829]: time="2024-04-16 00:09:02.648800215Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713226142648761647,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=413ba732-639e-42b4-92d9-46942a37eb20 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 00:09:02 ha-694782 crio[3829]: time="2024-04-16 00:09:02.650366704Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=740c1664-92c7-44fd-bae2-bdfb09dddcfa name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:09:02 ha-694782 crio[3829]: time="2024-04-16 00:09:02.650441371Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=740c1664-92c7-44fd-bae2-bdfb09dddcfa name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:09:02 ha-694782 crio[3829]: time="2024-04-16 00:09:02.651137797Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:39ab5ccb1b4f0e42bd80678c58d3330ba78f59d6acdea375ddb8487b94a3e557,PodSandboxId:16a1fcad619fb9858cf95428849d64e4b460a64124801b7c095f3dd616487edf,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713226058599295462,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-99cs7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b3bc7e7-fd85-4dc7-ba53-c74fe0d213e3,},Annotations:map[string]string{io.kubernetes.container.hash: e6fad754,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6b524648067e74a7474e617616cbf07b5cfa3884641ba982cdebdcb006d1b1,PodSandboxId:cdde1b0476e6b45e76f6c17e08a42915034ff2a2f13f84edd1d92298383a2f55,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713226044612150370,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b443ba5c534abe08b64f6dcd05be16a,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc7945e20ade550a66bb998ef22646661865ad45284f188ba6e1587fb32a4e40,PodSandboxId:f1a7ff92a40ef7366733dae31aa2e8b15f0ca073c2394c0594fab15f62dd18c3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713226043600969243,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bea9c166-5f83-473f-8f01-335ea1436dad,},Annotations:map[string]string{io.kubernetes.container.hash: 26b87359,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f77c6b5d4b2208fa6eea38d1d96605adfcd20dd6066bc818ecf7fbfd5ce64a4,PodSandboxId:0e9faf34ef3728d1ac4db2d248157df5089aa59030cac0473d3269344359b179,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713226040598038651,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a733f3b6fc63c6f5e84f944f7d76e1a4,},Annotations:map[string]string{io.kubernetes.container.hash: 6258141c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fbcafa65cad837018bf187403310acf5ab60914a8243b0fd7d1f11db5749bd8,PodSandboxId:405c8eb6fa3ddbc512c8b2a9149eab41316500a745f750fb93e6bce5c1fcf398,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713226035006122203,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-vsvrq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d510538f-3535-428b-8933-e3d6de6777eb,},Annotations:map[string]string{io.kubernetes.container.hash: 83ddc528,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ece67cb0b89e483776ea5bea240e0ad3b4df15e4f3a9e4304627cf09c9fb73e,PodSandboxId:59855c2bfcb6823485252892145180c20c60375407f722d38125b292c621593e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713226018272055529,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11d49cc4234c7987e40c6a010ebfc82b,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:6cbda3b869057546eceda5804afdc7ab4b4ae44d5b847cbab9b96c00cf8783c9,PodSandboxId:f1a7ff92a40ef7366733dae31aa2e8b15f0ca073c2394c0594fab15f62dd18c3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713226001397069475,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bea9c166-5f83-473f-8f01-335ea1436dad,},Annotations:map[string]string{io.kubernetes.container.hash: 26b87359,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:3c5fc74cb4adad4166f2600954b0cdf2132c9dfab6bcebaa4ca82a828c264cc9,PodSandboxId:77394139e635e45b1747011d9ef79e2bfa982d467132852bf67c413000543289,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713226001455371861,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d46v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c92235e6-1639-45c0-a92b-bf0cc32bea22,},Annotations:map[string]string{io.kubernetes.container.hash: f515a84d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1a812f
61cab74eac095d0eaf25f044bcd3c0b40542f648850ed3358c4a9cf07,PodSandboxId:b88cd0e11b61fcb7e273bbae9b9f813e3e9a633a535dab171fd701ed953a2d7e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713226001471393373,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d60a0238d152f42b26bd8630ed822b52,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fb07b1996f5bc461dcd7a9620a8
1b1e6c8ba652def7f66ead633fda77b4af08,PodSandboxId:633279be0125a119026ccc5953e99f34930a10d0a33d756cef37e4121d3a58f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713226001354706944,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d68ab2950732b234de6161a8265b14cc,},Annotations:map[string]string{io.kubernetes.container.hash: 94537991,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cecd296d2e7c995524a52118c632cabb6ea81bed6f1db33f69f2986ef86204d,PodSandboxId:c
dde1b0476e6b45e76f6c17e08a42915034ff2a2f13f84edd1d92298383a2f55,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1713226001256489698,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b443ba5c534abe08b64f6dcd05be16a,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2b6c0ffbd95d979207c1b762faf871abef4bb7cf28b4e6d08021db0172f9168,PodSandbo
xId:0e9faf34ef3728d1ac4db2d248157df5089aa59030cac0473d3269344359b179,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1713226001225156029,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a733f3b6fc63c6f5e84f944f7d76e1a4,},Annotations:map[string]string{io.kubernetes.container.hash: 6258141c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5916fb4a53a136bf90f5630e11363bf461e58fe804cb1e286d54ad6d31f93c96,PodSandboxId:16a1fcad619fb9858c
f95428849d64e4b460a64124801b7c095f3dd616487edf,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713225996694034889,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-99cs7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b3bc7e7-fd85-4dc7-ba53-c74fe0d213e3,},Annotations:map[string]string{io.kubernetes.container.hash: e6fad754,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ca994edd19ee7bf77418af14b91ce946c17574b835c117e41bfd965b7c92ac5,PodSandboxId:0ec9ed06a0fb4e24a1215d14da18e280badfceceb20bcbcb8b905
ed30afe614a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713225996450208724,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-zdc8q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a7e1a29-8c75-4d1f-978b-471ac0adb888,},Annotations:map[string]string{io.kubernetes.container.hash: e9e68e98,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a64f8280e0c729e39a8b27c9117de8b73b1b7d2999de6a4769f1c577176f16f8,PodSandboxId:a371dbf543077fc460f104739a29dc30a26b86c08db4838ebda547bf3d5b5d72,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713225996397214193,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-4sgv4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c1f65c0-37b2-4c88-879b-68297e989d44,},Annotations:map[string]string{io.kubernetes.container.hash: 2558243c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPor
t\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10abaa8fc3a416f4f6e6af525fcc65e0613ea769d731660a81e4e6a425fa4d6c,PodSandboxId:df7bc8cc3af912521d7dab8c802c0b04f7447ccb3d192040071875ff6a6ed89d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713225502012625031,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-vsvrq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d510538f-3535-428b-8933-e3d6de6777eb,},Annotations:map[string]string{io.kuber
netes.container.hash: 83ddc528,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a62edf63e9633afa138049c4146dcf4b2f5135b1fc485fdc8071c8ee36b07a2d,PodSandboxId:773aba8a13222bacf0c0e79c78ec31764b5af16b9bc416140f303b36465cce2b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713225349861459156,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-zdc8q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a7e1a29-8c75-4d1f-978b-471ac0adb888,},Annotations:map[string]string{io.kubernetes.container.hash: e9e68e98,i
o.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3a501d70f72c9551b55ad858eaec6232180f6589a34825144a580391cdf53a2,PodSandboxId:cc571f90808ddcdef413b709640e27f67d9d861628a9d232886db9a496a57712,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713225349825077174,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: core
dns-76f75df574-4sgv4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c1f65c0-37b2-4c88-879b-68297e989d44,},Annotations:map[string]string{io.kubernetes.container.hash: 2558243c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b55cb00c20162f1cfd9e72b8001f61983630aeb30b827f36d39067dae5d359d7,PodSandboxId:f34915e87e4008b765d7b34d6619b29c22eddc157e2e96893518ff9709538560,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b
0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1713225346210711538,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d46v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c92235e6-1639-45c0-a92b-bf0cc32bea22,},Annotations:map[string]string{io.kubernetes.container.hash: f515a84d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d17ec84664efd04bf01be034fec6b0ffd8f3e561bc06951f63cd95553952cf5,PodSandboxId:41e04a0d8a0ba492c448f0c8d919cb86eb887cc0a8198d99815e7f7eed50b944,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83
d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713225326796884514,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d68ab2950732b234de6161a8265b14cc,},Annotations:map[string]string{io.kubernetes.container.hash: 94537991,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:553d7f07f43e6f068bf41c8f0562f161939b4c2f6b1241c11c0db16309a6cbdf,PodSandboxId:cc8f87bd6e0dc433462a51cd028d6d774aa14a6c762f3f6a79999daea3870547,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,Create
dAt:1713225326704620344,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d60a0238d152f42b26bd8630ed822b52,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=740c1664-92c7-44fd-bae2-bdfb09dddcfa name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:09:02 ha-694782 crio[3829]: time="2024-04-16 00:09:02.694909346Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=31f861f9-42f2-422e-ba3b-1986abe5f31b name=/runtime.v1.RuntimeService/Version
	Apr 16 00:09:02 ha-694782 crio[3829]: time="2024-04-16 00:09:02.695052449Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=31f861f9-42f2-422e-ba3b-1986abe5f31b name=/runtime.v1.RuntimeService/Version
	Apr 16 00:09:02 ha-694782 crio[3829]: time="2024-04-16 00:09:02.696666201Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=29550867-e3a8-4794-9a67-62fc9fad10ec name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 00:09:02 ha-694782 crio[3829]: time="2024-04-16 00:09:02.697150927Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713226142697123065,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=29550867-e3a8-4794-9a67-62fc9fad10ec name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 00:09:02 ha-694782 crio[3829]: time="2024-04-16 00:09:02.698055927Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1d32863c-0b99-49ce-bfbd-cad9fddc2ff4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:09:02 ha-694782 crio[3829]: time="2024-04-16 00:09:02.698160478Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1d32863c-0b99-49ce-bfbd-cad9fddc2ff4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:09:02 ha-694782 crio[3829]: time="2024-04-16 00:09:02.698579762Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:39ab5ccb1b4f0e42bd80678c58d3330ba78f59d6acdea375ddb8487b94a3e557,PodSandboxId:16a1fcad619fb9858cf95428849d64e4b460a64124801b7c095f3dd616487edf,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713226058599295462,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-99cs7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b3bc7e7-fd85-4dc7-ba53-c74fe0d213e3,},Annotations:map[string]string{io.kubernetes.container.hash: e6fad754,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6b524648067e74a7474e617616cbf07b5cfa3884641ba982cdebdcb006d1b1,PodSandboxId:cdde1b0476e6b45e76f6c17e08a42915034ff2a2f13f84edd1d92298383a2f55,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713226044612150370,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b443ba5c534abe08b64f6dcd05be16a,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc7945e20ade550a66bb998ef22646661865ad45284f188ba6e1587fb32a4e40,PodSandboxId:f1a7ff92a40ef7366733dae31aa2e8b15f0ca073c2394c0594fab15f62dd18c3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713226043600969243,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bea9c166-5f83-473f-8f01-335ea1436dad,},Annotations:map[string]string{io.kubernetes.container.hash: 26b87359,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f77c6b5d4b2208fa6eea38d1d96605adfcd20dd6066bc818ecf7fbfd5ce64a4,PodSandboxId:0e9faf34ef3728d1ac4db2d248157df5089aa59030cac0473d3269344359b179,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713226040598038651,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a733f3b6fc63c6f5e84f944f7d76e1a4,},Annotations:map[string]string{io.kubernetes.container.hash: 6258141c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fbcafa65cad837018bf187403310acf5ab60914a8243b0fd7d1f11db5749bd8,PodSandboxId:405c8eb6fa3ddbc512c8b2a9149eab41316500a745f750fb93e6bce5c1fcf398,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713226035006122203,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-vsvrq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d510538f-3535-428b-8933-e3d6de6777eb,},Annotations:map[string]string{io.kubernetes.container.hash: 83ddc528,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ece67cb0b89e483776ea5bea240e0ad3b4df15e4f3a9e4304627cf09c9fb73e,PodSandboxId:59855c2bfcb6823485252892145180c20c60375407f722d38125b292c621593e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713226018272055529,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11d49cc4234c7987e40c6a010ebfc82b,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:6cbda3b869057546eceda5804afdc7ab4b4ae44d5b847cbab9b96c00cf8783c9,PodSandboxId:f1a7ff92a40ef7366733dae31aa2e8b15f0ca073c2394c0594fab15f62dd18c3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713226001397069475,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bea9c166-5f83-473f-8f01-335ea1436dad,},Annotations:map[string]string{io.kubernetes.container.hash: 26b87359,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:3c5fc74cb4adad4166f2600954b0cdf2132c9dfab6bcebaa4ca82a828c264cc9,PodSandboxId:77394139e635e45b1747011d9ef79e2bfa982d467132852bf67c413000543289,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713226001455371861,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d46v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c92235e6-1639-45c0-a92b-bf0cc32bea22,},Annotations:map[string]string{io.kubernetes.container.hash: f515a84d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1a812f
61cab74eac095d0eaf25f044bcd3c0b40542f648850ed3358c4a9cf07,PodSandboxId:b88cd0e11b61fcb7e273bbae9b9f813e3e9a633a535dab171fd701ed953a2d7e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713226001471393373,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d60a0238d152f42b26bd8630ed822b52,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fb07b1996f5bc461dcd7a9620a8
1b1e6c8ba652def7f66ead633fda77b4af08,PodSandboxId:633279be0125a119026ccc5953e99f34930a10d0a33d756cef37e4121d3a58f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713226001354706944,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d68ab2950732b234de6161a8265b14cc,},Annotations:map[string]string{io.kubernetes.container.hash: 94537991,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cecd296d2e7c995524a52118c632cabb6ea81bed6f1db33f69f2986ef86204d,PodSandboxId:c
dde1b0476e6b45e76f6c17e08a42915034ff2a2f13f84edd1d92298383a2f55,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1713226001256489698,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b443ba5c534abe08b64f6dcd05be16a,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2b6c0ffbd95d979207c1b762faf871abef4bb7cf28b4e6d08021db0172f9168,PodSandbo
xId:0e9faf34ef3728d1ac4db2d248157df5089aa59030cac0473d3269344359b179,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1713226001225156029,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a733f3b6fc63c6f5e84f944f7d76e1a4,},Annotations:map[string]string{io.kubernetes.container.hash: 6258141c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5916fb4a53a136bf90f5630e11363bf461e58fe804cb1e286d54ad6d31f93c96,PodSandboxId:16a1fcad619fb9858c
f95428849d64e4b460a64124801b7c095f3dd616487edf,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713225996694034889,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-99cs7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b3bc7e7-fd85-4dc7-ba53-c74fe0d213e3,},Annotations:map[string]string{io.kubernetes.container.hash: e6fad754,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ca994edd19ee7bf77418af14b91ce946c17574b835c117e41bfd965b7c92ac5,PodSandboxId:0ec9ed06a0fb4e24a1215d14da18e280badfceceb20bcbcb8b905
ed30afe614a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713225996450208724,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-zdc8q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a7e1a29-8c75-4d1f-978b-471ac0adb888,},Annotations:map[string]string{io.kubernetes.container.hash: e9e68e98,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a64f8280e0c729e39a8b27c9117de8b73b1b7d2999de6a4769f1c577176f16f8,PodSandboxId:a371dbf543077fc460f104739a29dc30a26b86c08db4838ebda547bf3d5b5d72,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713225996397214193,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-4sgv4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c1f65c0-37b2-4c88-879b-68297e989d44,},Annotations:map[string]string{io.kubernetes.container.hash: 2558243c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPor
t\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10abaa8fc3a416f4f6e6af525fcc65e0613ea769d731660a81e4e6a425fa4d6c,PodSandboxId:df7bc8cc3af912521d7dab8c802c0b04f7447ccb3d192040071875ff6a6ed89d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713225502012625031,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-vsvrq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d510538f-3535-428b-8933-e3d6de6777eb,},Annotations:map[string]string{io.kuber
netes.container.hash: 83ddc528,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a62edf63e9633afa138049c4146dcf4b2f5135b1fc485fdc8071c8ee36b07a2d,PodSandboxId:773aba8a13222bacf0c0e79c78ec31764b5af16b9bc416140f303b36465cce2b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713225349861459156,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-zdc8q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a7e1a29-8c75-4d1f-978b-471ac0adb888,},Annotations:map[string]string{io.kubernetes.container.hash: e9e68e98,i
o.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3a501d70f72c9551b55ad858eaec6232180f6589a34825144a580391cdf53a2,PodSandboxId:cc571f90808ddcdef413b709640e27f67d9d861628a9d232886db9a496a57712,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713225349825077174,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: core
dns-76f75df574-4sgv4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c1f65c0-37b2-4c88-879b-68297e989d44,},Annotations:map[string]string{io.kubernetes.container.hash: 2558243c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b55cb00c20162f1cfd9e72b8001f61983630aeb30b827f36d39067dae5d359d7,PodSandboxId:f34915e87e4008b765d7b34d6619b29c22eddc157e2e96893518ff9709538560,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b
0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1713225346210711538,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d46v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c92235e6-1639-45c0-a92b-bf0cc32bea22,},Annotations:map[string]string{io.kubernetes.container.hash: f515a84d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d17ec84664efd04bf01be034fec6b0ffd8f3e561bc06951f63cd95553952cf5,PodSandboxId:41e04a0d8a0ba492c448f0c8d919cb86eb887cc0a8198d99815e7f7eed50b944,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83
d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713225326796884514,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d68ab2950732b234de6161a8265b14cc,},Annotations:map[string]string{io.kubernetes.container.hash: 94537991,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:553d7f07f43e6f068bf41c8f0562f161939b4c2f6b1241c11c0db16309a6cbdf,PodSandboxId:cc8f87bd6e0dc433462a51cd028d6d774aa14a6c762f3f6a79999daea3870547,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,Create
dAt:1713225326704620344,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d60a0238d152f42b26bd8630ed822b52,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1d32863c-0b99-49ce-bfbd-cad9fddc2ff4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:09:02 ha-694782 crio[3829]: time="2024-04-16 00:09:02.746271357Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7d54c249-98b5-4ac7-b22d-bf7202b36fe0 name=/runtime.v1.RuntimeService/Version
	Apr 16 00:09:02 ha-694782 crio[3829]: time="2024-04-16 00:09:02.746369667Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7d54c249-98b5-4ac7-b22d-bf7202b36fe0 name=/runtime.v1.RuntimeService/Version
	Apr 16 00:09:02 ha-694782 crio[3829]: time="2024-04-16 00:09:02.747833596Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1573075d-6f7d-4299-a5bf-d7fb07a55a93 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 00:09:02 ha-694782 crio[3829]: time="2024-04-16 00:09:02.748858149Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713226142748820589,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1573075d-6f7d-4299-a5bf-d7fb07a55a93 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 00:09:02 ha-694782 crio[3829]: time="2024-04-16 00:09:02.749482539Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=966c431a-82e6-4c23-a3a4-b3c9922ecd92 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:09:02 ha-694782 crio[3829]: time="2024-04-16 00:09:02.749541541Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=966c431a-82e6-4c23-a3a4-b3c9922ecd92 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:09:02 ha-694782 crio[3829]: time="2024-04-16 00:09:02.749946366Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:39ab5ccb1b4f0e42bd80678c58d3330ba78f59d6acdea375ddb8487b94a3e557,PodSandboxId:16a1fcad619fb9858cf95428849d64e4b460a64124801b7c095f3dd616487edf,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713226058599295462,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-99cs7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b3bc7e7-fd85-4dc7-ba53-c74fe0d213e3,},Annotations:map[string]string{io.kubernetes.container.hash: e6fad754,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6b524648067e74a7474e617616cbf07b5cfa3884641ba982cdebdcb006d1b1,PodSandboxId:cdde1b0476e6b45e76f6c17e08a42915034ff2a2f13f84edd1d92298383a2f55,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713226044612150370,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b443ba5c534abe08b64f6dcd05be16a,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc7945e20ade550a66bb998ef22646661865ad45284f188ba6e1587fb32a4e40,PodSandboxId:f1a7ff92a40ef7366733dae31aa2e8b15f0ca073c2394c0594fab15f62dd18c3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713226043600969243,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bea9c166-5f83-473f-8f01-335ea1436dad,},Annotations:map[string]string{io.kubernetes.container.hash: 26b87359,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f77c6b5d4b2208fa6eea38d1d96605adfcd20dd6066bc818ecf7fbfd5ce64a4,PodSandboxId:0e9faf34ef3728d1ac4db2d248157df5089aa59030cac0473d3269344359b179,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713226040598038651,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a733f3b6fc63c6f5e84f944f7d76e1a4,},Annotations:map[string]string{io.kubernetes.container.hash: 6258141c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fbcafa65cad837018bf187403310acf5ab60914a8243b0fd7d1f11db5749bd8,PodSandboxId:405c8eb6fa3ddbc512c8b2a9149eab41316500a745f750fb93e6bce5c1fcf398,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713226035006122203,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-vsvrq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d510538f-3535-428b-8933-e3d6de6777eb,},Annotations:map[string]string{io.kubernetes.container.hash: 83ddc528,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ece67cb0b89e483776ea5bea240e0ad3b4df15e4f3a9e4304627cf09c9fb73e,PodSandboxId:59855c2bfcb6823485252892145180c20c60375407f722d38125b292c621593e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713226018272055529,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11d49cc4234c7987e40c6a010ebfc82b,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:6cbda3b869057546eceda5804afdc7ab4b4ae44d5b847cbab9b96c00cf8783c9,PodSandboxId:f1a7ff92a40ef7366733dae31aa2e8b15f0ca073c2394c0594fab15f62dd18c3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713226001397069475,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bea9c166-5f83-473f-8f01-335ea1436dad,},Annotations:map[string]string{io.kubernetes.container.hash: 26b87359,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:3c5fc74cb4adad4166f2600954b0cdf2132c9dfab6bcebaa4ca82a828c264cc9,PodSandboxId:77394139e635e45b1747011d9ef79e2bfa982d467132852bf67c413000543289,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713226001455371861,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d46v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c92235e6-1639-45c0-a92b-bf0cc32bea22,},Annotations:map[string]string{io.kubernetes.container.hash: f515a84d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1a812f
61cab74eac095d0eaf25f044bcd3c0b40542f648850ed3358c4a9cf07,PodSandboxId:b88cd0e11b61fcb7e273bbae9b9f813e3e9a633a535dab171fd701ed953a2d7e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713226001471393373,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d60a0238d152f42b26bd8630ed822b52,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fb07b1996f5bc461dcd7a9620a8
1b1e6c8ba652def7f66ead633fda77b4af08,PodSandboxId:633279be0125a119026ccc5953e99f34930a10d0a33d756cef37e4121d3a58f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713226001354706944,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d68ab2950732b234de6161a8265b14cc,},Annotations:map[string]string{io.kubernetes.container.hash: 94537991,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cecd296d2e7c995524a52118c632cabb6ea81bed6f1db33f69f2986ef86204d,PodSandboxId:c
dde1b0476e6b45e76f6c17e08a42915034ff2a2f13f84edd1d92298383a2f55,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1713226001256489698,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b443ba5c534abe08b64f6dcd05be16a,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2b6c0ffbd95d979207c1b762faf871abef4bb7cf28b4e6d08021db0172f9168,PodSandbo
xId:0e9faf34ef3728d1ac4db2d248157df5089aa59030cac0473d3269344359b179,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1713226001225156029,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a733f3b6fc63c6f5e84f944f7d76e1a4,},Annotations:map[string]string{io.kubernetes.container.hash: 6258141c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5916fb4a53a136bf90f5630e11363bf461e58fe804cb1e286d54ad6d31f93c96,PodSandboxId:16a1fcad619fb9858c
f95428849d64e4b460a64124801b7c095f3dd616487edf,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713225996694034889,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-99cs7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b3bc7e7-fd85-4dc7-ba53-c74fe0d213e3,},Annotations:map[string]string{io.kubernetes.container.hash: e6fad754,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ca994edd19ee7bf77418af14b91ce946c17574b835c117e41bfd965b7c92ac5,PodSandboxId:0ec9ed06a0fb4e24a1215d14da18e280badfceceb20bcbcb8b905
ed30afe614a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713225996450208724,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-zdc8q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a7e1a29-8c75-4d1f-978b-471ac0adb888,},Annotations:map[string]string{io.kubernetes.container.hash: e9e68e98,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a64f8280e0c729e39a8b27c9117de8b73b1b7d2999de6a4769f1c577176f16f8,PodSandboxId:a371dbf543077fc460f104739a29dc30a26b86c08db4838ebda547bf3d5b5d72,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713225996397214193,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-4sgv4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c1f65c0-37b2-4c88-879b-68297e989d44,},Annotations:map[string]string{io.kubernetes.container.hash: 2558243c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPor
t\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10abaa8fc3a416f4f6e6af525fcc65e0613ea769d731660a81e4e6a425fa4d6c,PodSandboxId:df7bc8cc3af912521d7dab8c802c0b04f7447ccb3d192040071875ff6a6ed89d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713225502012625031,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-vsvrq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d510538f-3535-428b-8933-e3d6de6777eb,},Annotations:map[string]string{io.kuber
netes.container.hash: 83ddc528,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a62edf63e9633afa138049c4146dcf4b2f5135b1fc485fdc8071c8ee36b07a2d,PodSandboxId:773aba8a13222bacf0c0e79c78ec31764b5af16b9bc416140f303b36465cce2b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713225349861459156,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-zdc8q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a7e1a29-8c75-4d1f-978b-471ac0adb888,},Annotations:map[string]string{io.kubernetes.container.hash: e9e68e98,i
o.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3a501d70f72c9551b55ad858eaec6232180f6589a34825144a580391cdf53a2,PodSandboxId:cc571f90808ddcdef413b709640e27f67d9d861628a9d232886db9a496a57712,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713225349825077174,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: core
dns-76f75df574-4sgv4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c1f65c0-37b2-4c88-879b-68297e989d44,},Annotations:map[string]string{io.kubernetes.container.hash: 2558243c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b55cb00c20162f1cfd9e72b8001f61983630aeb30b827f36d39067dae5d359d7,PodSandboxId:f34915e87e4008b765d7b34d6619b29c22eddc157e2e96893518ff9709538560,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b
0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1713225346210711538,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d46v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c92235e6-1639-45c0-a92b-bf0cc32bea22,},Annotations:map[string]string{io.kubernetes.container.hash: f515a84d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d17ec84664efd04bf01be034fec6b0ffd8f3e561bc06951f63cd95553952cf5,PodSandboxId:41e04a0d8a0ba492c448f0c8d919cb86eb887cc0a8198d99815e7f7eed50b944,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83
d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713225326796884514,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d68ab2950732b234de6161a8265b14cc,},Annotations:map[string]string{io.kubernetes.container.hash: 94537991,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:553d7f07f43e6f068bf41c8f0562f161939b4c2f6b1241c11c0db16309a6cbdf,PodSandboxId:cc8f87bd6e0dc433462a51cd028d6d774aa14a6c762f3f6a79999daea3870547,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,Create
dAt:1713225326704620344,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d60a0238d152f42b26bd8630ed822b52,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=966c431a-82e6-4c23-a3a4-b3c9922ecd92 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	39ab5ccb1b4f0       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               3                   16a1fcad619fb       kindnet-99cs7
	ad6b524648067       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      About a minute ago   Running             kube-controller-manager   2                   cdde1b0476e6b       kube-controller-manager-ha-694782
	fc7945e20ade5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   f1a7ff92a40ef       storage-provisioner
	5f77c6b5d4b22       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      About a minute ago   Running             kube-apiserver            3                   0e9faf34ef372       kube-apiserver-ha-694782
	6fbcafa65cad8       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   405c8eb6fa3dd       busybox-7fdf7869d9-vsvrq
	0ece67cb0b89e       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      2 minutes ago        Running             kube-vip                  0                   59855c2bfcb68       kube-vip-ha-694782
	f1a812f61cab7       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      2 minutes ago        Running             kube-scheduler            1                   b88cd0e11b61f       kube-scheduler-ha-694782
	3c5fc74cb4ada       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      2 minutes ago        Running             kube-proxy                1                   77394139e635e       kube-proxy-d46v5
	6cbda3b869057       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       3                   f1a7ff92a40ef       storage-provisioner
	8fb07b1996f5b       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      2 minutes ago        Running             etcd                      1                   633279be0125a       etcd-ha-694782
	7cecd296d2e7c       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      2 minutes ago        Exited              kube-controller-manager   1                   cdde1b0476e6b       kube-controller-manager-ha-694782
	e2b6c0ffbd95d       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      2 minutes ago        Exited              kube-apiserver            2                   0e9faf34ef372       kube-apiserver-ha-694782
	5916fb4a53a13       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      2 minutes ago        Exited              kindnet-cni               2                   16a1fcad619fb       kindnet-99cs7
	1ca994edd19ee       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   0ec9ed06a0fb4       coredns-76f75df574-zdc8q
	a64f8280e0c72       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   a371dbf543077       coredns-76f75df574-4sgv4
	10abaa8fc3a41       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   df7bc8cc3af91       busybox-7fdf7869d9-vsvrq
	a62edf63e9633       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   773aba8a13222       coredns-76f75df574-zdc8q
	b3a501d70f72c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   cc571f90808dd       coredns-76f75df574-4sgv4
	b55cb00c20162       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      13 minutes ago       Exited              kube-proxy                0                   f34915e87e400       kube-proxy-d46v5
	9d17ec84664ef       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      13 minutes ago       Exited              etcd                      0                   41e04a0d8a0ba       etcd-ha-694782
	553d7f07f43e6       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      13 minutes ago       Exited              kube-scheduler            0                   cc8f87bd6e0dc       kube-scheduler-ha-694782
	
	
	==> coredns [1ca994edd19ee7bf77418af14b91ce946c17574b835c117e41bfd965b7c92ac5] <==
	[INFO] plugin/kubernetes: Trace[1816213601]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Apr-2024 00:06:44.306) (total time: 10000ms):
	Trace[1816213601]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (00:06:54.306)
	Trace[1816213601]: [10.000861027s] [10.000861027s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1551518901]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Apr-2024 00:06:44.929) (total time: 10001ms):
	Trace[1551518901]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:06:54.931)
	Trace[1551518901]: [10.001896549s] [10.001896549s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [a62edf63e9633afa138049c4146dcf4b2f5135b1fc485fdc8071c8ee36b07a2d] <==
	[INFO] 10.244.0.4:58655 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000057875s
	[INFO] 10.244.1.2:57138 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000292175s
	[INFO] 10.244.1.2:42990 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.007959183s
	[INFO] 10.244.1.2:53242 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000142606s
	[INFO] 10.244.1.2:53591 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000169859s
	[INFO] 10.244.2.2:56926 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001802242s
	[INFO] 10.244.2.2:55053 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000174333s
	[INFO] 10.244.2.2:56210 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000166019s
	[INFO] 10.244.2.2:36533 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001257882s
	[INFO] 10.244.0.4:39112 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000127586s
	[INFO] 10.244.0.4:33597 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001242421s
	[INFO] 10.244.0.4:37595 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000130691s
	[INFO] 10.244.0.4:36939 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000030566s
	[INFO] 10.244.0.4:36468 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000043404s
	[INFO] 10.244.1.2:46854 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000237116s
	[INFO] 10.244.1.2:35618 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000139683s
	[INFO] 10.244.2.2:54137 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000211246s
	[INFO] 10.244.2.2:57833 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097841s
	[INFO] 10.244.0.4:45317 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099201s
	[INFO] 10.244.1.2:46870 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000160521s
	[INFO] 10.244.1.2:49971 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000118112s
	[INFO] 10.244.2.2:60977 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000163482s
	[INFO] 10.244.0.4:57367 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000078337s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a64f8280e0c729e39a8b27c9117de8b73b1b7d2999de6a4769f1c577176f16f8] <==
	Trace[1441084192]: [10.001191606s] [10.001191606s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[416207084]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Apr-2024 00:06:46.533) (total time: 10001ms):
	Trace[416207084]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:06:56.534)
	Trace[416207084]: [10.001555902s] [10.001555902s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [b3a501d70f72c9551b55ad858eaec6232180f6589a34825144a580391cdf53a2] <==
	[INFO] 10.244.2.2:55011 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111047s
	[INFO] 10.244.2.2:60878 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000096803s
	[INFO] 10.244.2.2:40329 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000153524s
	[INFO] 10.244.2.2:43908 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109424s
	[INFO] 10.244.0.4:40588 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117575s
	[INFO] 10.244.0.4:34558 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001805219s
	[INFO] 10.244.0.4:44168 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000194119s
	[INFO] 10.244.1.2:54750 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000108471s
	[INFO] 10.244.1.2:46261 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008003s
	[INFO] 10.244.2.2:53899 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130847s
	[INFO] 10.244.2.2:52030 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000082631s
	[INFO] 10.244.0.4:39295 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000069381s
	[INFO] 10.244.0.4:38441 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000054252s
	[INFO] 10.244.0.4:40273 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000054634s
	[INFO] 10.244.1.2:56481 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000181468s
	[INFO] 10.244.1.2:34800 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000244392s
	[INFO] 10.244.2.2:40684 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136775s
	[INFO] 10.244.2.2:50964 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000154855s
	[INFO] 10.244.2.2:46132 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000089888s
	[INFO] 10.244.0.4:34246 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124283s
	[INFO] 10.244.0.4:53924 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000125381s
	[INFO] 10.244.0.4:36636 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000079286s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1858&timeout=6m16s&timeoutSeconds=376&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> describe nodes <==
	Name:               ha-694782
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-694782
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e98a202b00bb48daab8bbee2f0c7bcf1556f8388
	                    minikube.k8s.io/name=ha-694782
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_15T23_55_34_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Apr 2024 23:55:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-694782
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 00:08:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 00:07:25 +0000   Mon, 15 Apr 2024 23:55:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 00:07:25 +0000   Mon, 15 Apr 2024 23:55:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 00:07:25 +0000   Mon, 15 Apr 2024 23:55:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 00:07:25 +0000   Mon, 15 Apr 2024 23:55:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.41
	  Hostname:    ha-694782
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e3887d262ea345b0b06d0cfe81d3c704
	  System UUID:                e3887d26-2ea3-45b0-b06d-0cfe81d3c704
	  Boot ID:                    db04bec2-a6d7-4f51-8173-a431f51db6a3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-vsvrq             0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-76f75df574-4sgv4             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-76f75df574-zdc8q             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-694782                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-99cs7                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-694782             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-694782    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-d46v5                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-694782             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-694782                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 97s                    kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  13m (x8 over 13m)      kubelet          Node ha-694782 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     13m (x7 over 13m)      kubelet          Node ha-694782 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    13m (x8 over 13m)      kubelet          Node ha-694782 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  13m                    kubelet          Node ha-694782 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m                    kubelet          Node ha-694782 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m                    kubelet          Node ha-694782 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m                    node-controller  Node ha-694782 event: Registered Node ha-694782 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-694782 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-694782 event: Registered Node ha-694782 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-694782 event: Registered Node ha-694782 in Controller
	  Warning  ContainerGCFailed        2m30s (x2 over 3m30s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           92s                    node-controller  Node ha-694782 event: Registered Node ha-694782 in Controller
	  Normal   RegisteredNode           88s                    node-controller  Node ha-694782 event: Registered Node ha-694782 in Controller
	  Normal   RegisteredNode           33s                    node-controller  Node ha-694782 event: Registered Node ha-694782 in Controller
	
	
	Name:               ha-694782-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-694782-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e98a202b00bb48daab8bbee2f0c7bcf1556f8388
	                    minikube.k8s.io/name=ha-694782
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_15T23_56_49_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Apr 2024 23:56:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-694782-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 00:08:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 00:08:05 +0000   Tue, 16 Apr 2024 00:07:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 00:08:05 +0000   Tue, 16 Apr 2024 00:07:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 00:08:05 +0000   Tue, 16 Apr 2024 00:07:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 00:08:05 +0000   Tue, 16 Apr 2024 00:07:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.42
	  Hostname:    ha-694782-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f33e7ca96e8a461196cc015dc9cdb390
	  System UUID:                f33e7ca9-6e8a-4611-96cc-015dc9cdb390
	  Boot ID:                    cb7bc6ac-1cdb-414c-9a20-0ca4dbfea336
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-bwtdm                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-694782-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-qvp8b                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-694782-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-694782-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-vbfhn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-694782-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-694782-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 80s                  kube-proxy       
	  Normal  Starting                 12m                  kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)    kubelet          Node ha-694782-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)    kubelet          Node ha-694782-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)    kubelet          Node ha-694782-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                  node-controller  Node ha-694782-m02 event: Registered Node ha-694782-m02 in Controller
	  Normal  RegisteredNode           12m                  node-controller  Node ha-694782-m02 event: Registered Node ha-694782-m02 in Controller
	  Normal  RegisteredNode           10m                  node-controller  Node ha-694782-m02 event: Registered Node ha-694782-m02 in Controller
	  Normal  NodeNotReady             9m3s                 node-controller  Node ha-694782-m02 status is now: NodeNotReady
	  Normal  Starting                 2m6s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m6s (x8 over 2m6s)  kubelet          Node ha-694782-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m6s (x8 over 2m6s)  kubelet          Node ha-694782-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m6s (x7 over 2m6s)  kubelet          Node ha-694782-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           92s                  node-controller  Node ha-694782-m02 event: Registered Node ha-694782-m02 in Controller
	  Normal  RegisteredNode           88s                  node-controller  Node ha-694782-m02 event: Registered Node ha-694782-m02 in Controller
	  Normal  RegisteredNode           33s                  node-controller  Node ha-694782-m02 event: Registered Node ha-694782-m02 in Controller
	
	
	Name:               ha-694782-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-694782-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e98a202b00bb48daab8bbee2f0c7bcf1556f8388
	                    minikube.k8s.io/name=ha-694782
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_15T23_57_59_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Apr 2024 23:57:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-694782-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 00:08:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 00:08:33 +0000   Mon, 15 Apr 2024 23:57:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 00:08:33 +0000   Mon, 15 Apr 2024 23:57:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 00:08:33 +0000   Mon, 15 Apr 2024 23:57:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 00:08:33 +0000   Mon, 15 Apr 2024 23:58:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.202
	  Hostname:    ha-694782-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f7c9ed2323d6414d9d68e5da836956f9
	  System UUID:                f7c9ed23-23d6-414d-9d68-e5da836956f9
	  Boot ID:                    4494123a-8787-47d2-9eb0-ca11c0b8b659
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-mxz6n                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-694782-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-hln6n                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-694782-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-694782-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-45tb9                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-694782-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-694782-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 41s                kube-proxy       
	  Normal   RegisteredNode           11m                node-controller  Node ha-694782-m03 event: Registered Node ha-694782-m03 in Controller
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-694782-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-694782-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-694782-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-694782-m03 event: Registered Node ha-694782-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-694782-m03 event: Registered Node ha-694782-m03 in Controller
	  Normal   RegisteredNode           91s                node-controller  Node ha-694782-m03 event: Registered Node ha-694782-m03 in Controller
	  Normal   RegisteredNode           88s                node-controller  Node ha-694782-m03 event: Registered Node ha-694782-m03 in Controller
	  Normal   Starting                 61s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  61s (x2 over 61s)  kubelet          Node ha-694782-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s (x2 over 61s)  kubelet          Node ha-694782-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s (x2 over 61s)  kubelet          Node ha-694782-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  61s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 61s                kubelet          Node ha-694782-m03 has been rebooted, boot id: 4494123a-8787-47d2-9eb0-ca11c0b8b659
	  Normal   RegisteredNode           33s                node-controller  Node ha-694782-m03 event: Registered Node ha-694782-m03 in Controller
	
	
	Name:               ha-694782-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-694782-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e98a202b00bb48daab8bbee2f0c7bcf1556f8388
	                    minikube.k8s.io/name=ha-694782
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_15T23_58_56_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Apr 2024 23:58:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-694782-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 00:08:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 00:08:55 +0000   Tue, 16 Apr 2024 00:08:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 00:08:55 +0000   Tue, 16 Apr 2024 00:08:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 00:08:55 +0000   Tue, 16 Apr 2024 00:08:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 00:08:55 +0000   Tue, 16 Apr 2024 00:08:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.107
	  Hostname:    ha-694782-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 aceb25acc5a84fcca647b3b66273edbd
	  System UUID:                aceb25ac-c5a8-4fcc-a647-b3b66273edbd
	  Boot ID:                    881b78ba-6d0b-4ba4-9e4e-14adbfa06532
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-k6vbr       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-mgwnv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-694782-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-694782-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-694782-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node ha-694782-m04 event: Registered Node ha-694782-m04 in Controller
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-694782-m04 event: Registered Node ha-694782-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-694782-m04 event: Registered Node ha-694782-m04 in Controller
	  Normal   NodeReady                9m57s              kubelet          Node ha-694782-m04 status is now: NodeReady
	  Normal   RegisteredNode           91s                node-controller  Node ha-694782-m04 event: Registered Node ha-694782-m04 in Controller
	  Normal   RegisteredNode           88s                node-controller  Node ha-694782-m04 event: Registered Node ha-694782-m04 in Controller
	  Normal   NodeNotReady             51s                node-controller  Node ha-694782-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           33s                node-controller  Node ha-694782-m04 event: Registered Node ha-694782-m04 in Controller
	  Normal   Starting                 8s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8s (x2 over 8s)    kubelet          Node ha-694782-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x2 over 8s)    kubelet          Node ha-694782-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x2 over 8s)    kubelet          Node ha-694782-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 8s                 kubelet          Node ha-694782-m04 has been rebooted, boot id: 881b78ba-6d0b-4ba4-9e4e-14adbfa06532
	  Normal   NodeReady                8s                 kubelet          Node ha-694782-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.056422] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063543] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.160170] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.142019] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.294658] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +4.386960] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +0.057175] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.857368] systemd-fstab-generator[945]: Ignoring "noauto" option for root device
	[  +1.229656] kauditd_printk_skb: 57 callbacks suppressed
	[  +6.643467] systemd-fstab-generator[1362]: Ignoring "noauto" option for root device
	[  +0.095801] kauditd_printk_skb: 40 callbacks suppressed
	[ +12.797173] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.821758] kauditd_printk_skb: 72 callbacks suppressed
	[Apr16 00:03] kauditd_printk_skb: 1 callbacks suppressed
	[Apr16 00:06] systemd-fstab-generator[3747]: Ignoring "noauto" option for root device
	[  +0.154687] systemd-fstab-generator[3759]: Ignoring "noauto" option for root device
	[  +0.183230] systemd-fstab-generator[3773]: Ignoring "noauto" option for root device
	[  +0.154031] systemd-fstab-generator[3785]: Ignoring "noauto" option for root device
	[  +0.284896] systemd-fstab-generator[3813]: Ignoring "noauto" option for root device
	[  +6.302743] systemd-fstab-generator[3919]: Ignoring "noauto" option for root device
	[  +0.087090] kauditd_printk_skb: 100 callbacks suppressed
	[  +6.604373] kauditd_printk_skb: 42 callbacks suppressed
	[ +17.255875] kauditd_printk_skb: 56 callbacks suppressed
	[Apr16 00:07] kauditd_printk_skb: 2 callbacks suppressed
	[ +22.629454] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [8fb07b1996f5bc461dcd7a9620a81b1e6c8ba652def7f66ead633fda77b4af08] <==
	{"level":"warn","ts":"2024-04-16T00:07:56.558053Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"ee1f4cd48e860d39","error":"Get \"https://192.168.39.202:2380/version\": dial tcp 192.168.39.202:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-16T00:07:57.097388Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"ee1f4cd48e860d39","rtt":"0s","error":"dial tcp 192.168.39.202:2380: i/o timeout"}
	{"level":"warn","ts":"2024-04-16T00:07:57.097448Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"ee1f4cd48e860d39","rtt":"0s","error":"dial tcp 192.168.39.202:2380: i/o timeout"}
	{"level":"warn","ts":"2024-04-16T00:08:00.56017Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.202:2380/version","remote-member-id":"ee1f4cd48e860d39","error":"Get \"https://192.168.39.202:2380/version\": dial tcp 192.168.39.202:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-16T00:08:00.560248Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"ee1f4cd48e860d39","error":"Get \"https://192.168.39.202:2380/version\": dial tcp 192.168.39.202:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-16T00:08:02.098101Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"ee1f4cd48e860d39","rtt":"0s","error":"dial tcp 192.168.39.202:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-16T00:08:02.098303Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"ee1f4cd48e860d39","rtt":"0s","error":"dial tcp 192.168.39.202:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-16T00:08:04.562194Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.202:2380/version","remote-member-id":"ee1f4cd48e860d39","error":"Get \"https://192.168.39.202:2380/version\": dial tcp 192.168.39.202:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-16T00:08:04.562262Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"ee1f4cd48e860d39","error":"Get \"https://192.168.39.202:2380/version\": dial tcp 192.168.39.202:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-16T00:08:07.099397Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"ee1f4cd48e860d39","rtt":"0s","error":"dial tcp 192.168.39.202:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-16T00:08:07.099564Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"ee1f4cd48e860d39","rtt":"0s","error":"dial tcp 192.168.39.202:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-16T00:08:08.56456Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.202:2380/version","remote-member-id":"ee1f4cd48e860d39","error":"Get \"https://192.168.39.202:2380/version\": dial tcp 192.168.39.202:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-16T00:08:08.5648Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"ee1f4cd48e860d39","error":"Get \"https://192.168.39.202:2380/version\": dial tcp 192.168.39.202:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-16T00:08:12.099946Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"ee1f4cd48e860d39","rtt":"0s","error":"dial tcp 192.168.39.202:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-16T00:08:12.100099Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"ee1f4cd48e860d39","rtt":"0s","error":"dial tcp 192.168.39.202:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-16T00:08:12.566783Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.202:2380/version","remote-member-id":"ee1f4cd48e860d39","error":"Get \"https://192.168.39.202:2380/version\": dial tcp 192.168.39.202:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-04-16T00:08:12.566879Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"ee1f4cd48e860d39","error":"Get \"https://192.168.39.202:2380/version\": dial tcp 192.168.39.202:2380: connect: connection refused"}
	{"level":"info","ts":"2024-04-16T00:08:12.827794Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"ee1f4cd48e860d39"}
	{"level":"info","ts":"2024-04-16T00:08:12.829809Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"903e0dada8362847","remote-peer-id":"ee1f4cd48e860d39"}
	{"level":"info","ts":"2024-04-16T00:08:12.832487Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"903e0dada8362847","remote-peer-id":"ee1f4cd48e860d39"}
	{"level":"info","ts":"2024-04-16T00:08:12.853834Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"903e0dada8362847","to":"ee1f4cd48e860d39","stream-type":"stream Message"}
	{"level":"info","ts":"2024-04-16T00:08:12.853883Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"903e0dada8362847","remote-peer-id":"ee1f4cd48e860d39"}
	{"level":"info","ts":"2024-04-16T00:08:12.854891Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"903e0dada8362847","to":"ee1f4cd48e860d39","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-04-16T00:08:12.854947Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"903e0dada8362847","remote-peer-id":"ee1f4cd48e860d39"}
	{"level":"info","ts":"2024-04-16T00:08:17.575347Z","caller":"traceutil/trace.go:171","msg":"trace[398373057] transaction","detail":"{read_only:false; response_revision:2402; number_of_response:1; }","duration":"110.234723ms","start":"2024-04-16T00:08:17.4651Z","end":"2024-04-16T00:08:17.575335Z","steps":["trace[398373057] 'process raft request'  (duration: 104.094338ms)"],"step_count":1}
	
	
	==> etcd [9d17ec84664efd04bf01be034fec6b0ffd8f3e561bc06951f63cd95553952cf5] <==
	2024/04/16 00:04:55 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/04/16 00:04:55 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/04/16 00:04:55 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/04/16 00:04:55 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/04/16 00:04:55 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-04-16T00:04:55.866376Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.41:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-16T00:04:55.866653Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.41:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-16T00:04:55.866776Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"903e0dada8362847","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-04-16T00:04:55.867165Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"dc6497bb1dcd7fb3"}
	{"level":"info","ts":"2024-04-16T00:04:55.867227Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"dc6497bb1dcd7fb3"}
	{"level":"info","ts":"2024-04-16T00:04:55.867277Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"dc6497bb1dcd7fb3"}
	{"level":"info","ts":"2024-04-16T00:04:55.867426Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"903e0dada8362847","remote-peer-id":"dc6497bb1dcd7fb3"}
	{"level":"info","ts":"2024-04-16T00:04:55.867479Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"903e0dada8362847","remote-peer-id":"dc6497bb1dcd7fb3"}
	{"level":"info","ts":"2024-04-16T00:04:55.867535Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"903e0dada8362847","remote-peer-id":"dc6497bb1dcd7fb3"}
	{"level":"info","ts":"2024-04-16T00:04:55.867563Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"dc6497bb1dcd7fb3"}
	{"level":"info","ts":"2024-04-16T00:04:55.867586Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"ee1f4cd48e860d39"}
	{"level":"info","ts":"2024-04-16T00:04:55.867612Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"ee1f4cd48e860d39"}
	{"level":"info","ts":"2024-04-16T00:04:55.867656Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"ee1f4cd48e860d39"}
	{"level":"info","ts":"2024-04-16T00:04:55.867788Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"903e0dada8362847","remote-peer-id":"ee1f4cd48e860d39"}
	{"level":"info","ts":"2024-04-16T00:04:55.867952Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"903e0dada8362847","remote-peer-id":"ee1f4cd48e860d39"}
	{"level":"info","ts":"2024-04-16T00:04:55.868092Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"903e0dada8362847","remote-peer-id":"ee1f4cd48e860d39"}
	{"level":"info","ts":"2024-04-16T00:04:55.86815Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"ee1f4cd48e860d39"}
	{"level":"info","ts":"2024-04-16T00:04:55.871091Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.41:2380"}
	{"level":"info","ts":"2024-04-16T00:04:55.871257Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.41:2380"}
	{"level":"info","ts":"2024-04-16T00:04:55.871302Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-694782","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.41:2380"],"advertise-client-urls":["https://192.168.39.41:2379"]}
	
	
	==> kernel <==
	 00:09:03 up 14 min,  0 users,  load average: 0.65, 0.49, 0.32
	Linux ha-694782 5.10.207 #1 SMP Mon Apr 15 15:01:07 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [39ab5ccb1b4f0e42bd80678c58d3330ba78f59d6acdea375ddb8487b94a3e557] <==
	I0416 00:08:29.528874       1 main.go:250] Node ha-694782-m04 has CIDR [10.244.3.0/24] 
	I0416 00:08:39.554591       1 main.go:223] Handling node with IPs: map[192.168.39.41:{}]
	I0416 00:08:39.554689       1 main.go:227] handling current node
	I0416 00:08:39.554718       1 main.go:223] Handling node with IPs: map[192.168.39.42:{}]
	I0416 00:08:39.554740       1 main.go:250] Node ha-694782-m02 has CIDR [10.244.1.0/24] 
	I0416 00:08:39.554905       1 main.go:223] Handling node with IPs: map[192.168.39.202:{}]
	I0416 00:08:39.554933       1 main.go:250] Node ha-694782-m03 has CIDR [10.244.2.0/24] 
	I0416 00:08:39.555161       1 main.go:223] Handling node with IPs: map[192.168.39.107:{}]
	I0416 00:08:39.555205       1 main.go:250] Node ha-694782-m04 has CIDR [10.244.3.0/24] 
	I0416 00:08:49.573956       1 main.go:223] Handling node with IPs: map[192.168.39.41:{}]
	I0416 00:08:49.574132       1 main.go:227] handling current node
	I0416 00:08:49.574164       1 main.go:223] Handling node with IPs: map[192.168.39.42:{}]
	I0416 00:08:49.574172       1 main.go:250] Node ha-694782-m02 has CIDR [10.244.1.0/24] 
	I0416 00:08:49.574386       1 main.go:223] Handling node with IPs: map[192.168.39.202:{}]
	I0416 00:08:49.574472       1 main.go:250] Node ha-694782-m03 has CIDR [10.244.2.0/24] 
	I0416 00:08:49.574599       1 main.go:223] Handling node with IPs: map[192.168.39.107:{}]
	I0416 00:08:49.574696       1 main.go:250] Node ha-694782-m04 has CIDR [10.244.3.0/24] 
	I0416 00:08:59.595030       1 main.go:223] Handling node with IPs: map[192.168.39.41:{}]
	I0416 00:08:59.595195       1 main.go:227] handling current node
	I0416 00:08:59.595286       1 main.go:223] Handling node with IPs: map[192.168.39.42:{}]
	I0416 00:08:59.595319       1 main.go:250] Node ha-694782-m02 has CIDR [10.244.1.0/24] 
	I0416 00:08:59.595423       1 main.go:223] Handling node with IPs: map[192.168.39.202:{}]
	I0416 00:08:59.595443       1 main.go:250] Node ha-694782-m03 has CIDR [10.244.2.0/24] 
	I0416 00:08:59.595506       1 main.go:223] Handling node with IPs: map[192.168.39.107:{}]
	I0416 00:08:59.595524       1 main.go:250] Node ha-694782-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [5916fb4a53a136bf90f5630e11363bf461e58fe804cb1e286d54ad6d31f93c96] <==
	I0416 00:06:37.254246       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0416 00:06:37.254344       1 main.go:107] hostIP = 192.168.39.41
	podIP = 192.168.39.41
	I0416 00:06:37.254570       1 main.go:116] setting mtu 1500 for CNI 
	I0416 00:06:37.254597       1 main.go:146] kindnetd IP family: "ipv4"
	I0416 00:06:37.254620       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0416 00:06:37.557191       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0416 00:06:37.557624       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0416 00:06:38.560303       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0416 00:06:40.561133       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0416 00:06:53.569370       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
	
	
	==> kube-apiserver [5f77c6b5d4b2208fa6eea38d1d96605adfcd20dd6066bc818ecf7fbfd5ce64a4] <==
	I0416 00:07:22.613731       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0416 00:07:22.613762       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0416 00:07:22.613796       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0416 00:07:22.613950       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0416 00:07:22.619154       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0416 00:07:22.703830       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0416 00:07:22.710710       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0416 00:07:22.762554       1 shared_informer.go:318] Caches are synced for configmaps
	I0416 00:07:22.763782       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0416 00:07:22.768501       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0416 00:07:22.768568       1 aggregator.go:165] initial CRD sync complete...
	I0416 00:07:22.768604       1 autoregister_controller.go:141] Starting autoregister controller
	I0416 00:07:22.768626       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0416 00:07:22.768667       1 cache.go:39] Caches are synced for autoregister controller
	I0416 00:07:22.770164       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0416 00:07:22.770252       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0416 00:07:22.777094       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0416 00:07:22.779545       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	W0416 00:07:22.788572       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.202 192.168.39.42]
	I0416 00:07:22.789914       1 controller.go:624] quota admission added evaluator for: endpoints
	I0416 00:07:22.797232       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0416 00:07:22.801159       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0416 00:07:23.566521       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0416 00:07:24.017955       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.202 192.168.39.41 192.168.39.42]
	W0416 00:07:34.033455       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.41 192.168.39.42]
	
	
	==> kube-apiserver [e2b6c0ffbd95d979207c1b762faf871abef4bb7cf28b4e6d08021db0172f9168] <==
	I0416 00:06:41.762492       1 options.go:222] external host was not specified, using 192.168.39.41
	I0416 00:06:41.766786       1 server.go:148] Version: v1.29.3
	I0416 00:06:41.766835       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 00:06:42.411116       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I0416 00:06:42.418839       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0416 00:06:42.418928       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0416 00:06:42.419248       1 instance.go:297] Using reconciler: lease
	W0416 00:07:02.409306       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0416 00:07:02.410718       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0416 00:07:02.420373       1 instance.go:290] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [7cecd296d2e7c995524a52118c632cabb6ea81bed6f1db33f69f2986ef86204d] <==
	I0416 00:06:42.650659       1 serving.go:380] Generated self-signed cert in-memory
	I0416 00:06:43.060651       1 controllermanager.go:187] "Starting" version="v1.29.3"
	I0416 00:06:43.060748       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 00:06:43.062884       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0416 00:06:43.063218       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0416 00:06:43.064054       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0416 00:06:43.064153       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	E0416 00:07:03.426338       1 controllermanager.go:232] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.41:8443/healthz\": dial tcp 192.168.39.41:8443: connect: connection refused"
	
	
	==> kube-controller-manager [ad6b524648067e74a7474e617616cbf07b5cfa3884641ba982cdebdcb006d1b1] <==
	I0416 00:07:35.880904       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0416 00:07:35.914727       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0416 00:07:35.964881       1 shared_informer.go:318] Caches are synced for resource quota
	I0416 00:07:35.992024       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0416 00:07:35.995719       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0416 00:07:36.019483       1 shared_informer.go:318] Caches are synced for resource quota
	I0416 00:07:36.032321       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0416 00:07:36.396422       1 shared_informer.go:318] Caches are synced for garbage collector
	I0416 00:07:36.441754       1 shared_informer.go:318] Caches are synced for garbage collector
	I0416 00:07:36.441888       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0416 00:07:39.705438       1 endpointslice_controller.go:310] "Error syncing endpoint slices for service, retrying" key="kube-system/kube-dns" err="failed to update kube-dns-s7hp9 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-s7hp9\": the object has been modified; please apply your changes to the latest version and try again"
	I0416 00:07:39.705799       1 event.go:364] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"f98ecc37-b26a-4ca7-b0ad-54d689a454f0", APIVersion:"v1", ResourceVersion:"288", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-s7hp9 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-s7hp9": the object has been modified; please apply your changes to the latest version and try again
	I0416 00:07:39.714498       1 event.go:376] "Event occurred" object="kube-system/kube-dns" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpoint" message="Failed to update endpoint kube-system/kube-dns: Operation cannot be fulfilled on endpoints \"kube-dns\": the object has been modified; please apply your changes to the latest version and try again"
	I0416 00:07:39.732678       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="76.802777ms"
	I0416 00:07:39.732910       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="148.689µs"
	I0416 00:07:41.636604       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="17.091824ms"
	I0416 00:07:41.636746       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="51.748µs"
	I0416 00:08:03.587963       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="12.159755ms"
	I0416 00:08:03.592133       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="71.624µs"
	I0416 00:08:09.692256       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="29.914994ms"
	I0416 00:08:09.692486       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="100.314µs"
	I0416 00:08:09.692605       1 event.go:376] "Event occurred" object="kube-system/kube-dns" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpoint" message="Failed to update endpoint kube-system/kube-dns: Operation cannot be fulfilled on endpoints \"kube-dns\": the object has been modified; please apply your changes to the latest version and try again"
	I0416 00:08:20.903772       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="36.260497ms"
	I0416 00:08:20.903956       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="95.971µs"
	I0416 00:08:55.208651       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-694782-m04"
	
	
	==> kube-proxy [3c5fc74cb4adad4166f2600954b0cdf2132c9dfab6bcebaa4ca82a828c264cc9] <==
	I0416 00:06:42.858915       1 server_others.go:72] "Using iptables proxy"
	E0416 00:06:45.864857       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-694782\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0416 00:06:48.938054       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-694782\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0416 00:06:52.010357       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-694782\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0416 00:06:58.153144       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-694782\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0416 00:07:07.368849       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-694782\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0416 00:07:25.802672       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-694782\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0416 00:07:25.803209       1 server.go:1020] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	I0416 00:07:25.941476       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0416 00:07:25.941643       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0416 00:07:25.942453       1 server_others.go:168] "Using iptables Proxier"
	I0416 00:07:25.947634       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0416 00:07:25.948301       1 server.go:865] "Version info" version="v1.29.3"
	I0416 00:07:25.948413       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 00:07:25.950904       1 config.go:188] "Starting service config controller"
	I0416 00:07:25.952412       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0416 00:07:25.952676       1 config.go:97] "Starting endpoint slice config controller"
	I0416 00:07:25.952710       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0416 00:07:25.954092       1 config.go:315] "Starting node config controller"
	I0416 00:07:25.954153       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0416 00:07:26.053241       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0416 00:07:26.053349       1 shared_informer.go:318] Caches are synced for service config
	I0416 00:07:26.056248       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [b55cb00c20162f1cfd9e72b8001f61983630aeb30b827f36d39067dae5d359d7] <==
	E0416 00:03:38.472655       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1805": dial tcp 192.168.39.254:8443: connect: no route to host
	W0416 00:03:41.544468       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1896": dial tcp 192.168.39.254:8443: connect: no route to host
	E0416 00:03:41.544600       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1896": dial tcp 192.168.39.254:8443: connect: no route to host
	W0416 00:03:44.616420       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1805": dial tcp 192.168.39.254:8443: connect: no route to host
	E0416 00:03:44.616530       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1805": dial tcp 192.168.39.254:8443: connect: no route to host
	W0416 00:03:44.616699       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-694782&resourceVersion=1824": dial tcp 192.168.39.254:8443: connect: no route to host
	E0416 00:03:44.616762       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-694782&resourceVersion=1824": dial tcp 192.168.39.254:8443: connect: no route to host
	W0416 00:03:47.688819       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1896": dial tcp 192.168.39.254:8443: connect: no route to host
	E0416 00:03:47.688931       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1896": dial tcp 192.168.39.254:8443: connect: no route to host
	W0416 00:03:53.833870       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-694782&resourceVersion=1824": dial tcp 192.168.39.254:8443: connect: no route to host
	E0416 00:03:53.834157       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-694782&resourceVersion=1824": dial tcp 192.168.39.254:8443: connect: no route to host
	W0416 00:03:56.905187       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1805": dial tcp 192.168.39.254:8443: connect: no route to host
	E0416 00:03:56.905285       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1805": dial tcp 192.168.39.254:8443: connect: no route to host
	W0416 00:03:59.980235       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1896": dial tcp 192.168.39.254:8443: connect: no route to host
	E0416 00:03:59.980295       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1896": dial tcp 192.168.39.254:8443: connect: no route to host
	W0416 00:04:12.266432       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1805": dial tcp 192.168.39.254:8443: connect: no route to host
	E0416 00:04:12.266502       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1805": dial tcp 192.168.39.254:8443: connect: no route to host
	W0416 00:04:15.338197       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1896": dial tcp 192.168.39.254:8443: connect: no route to host
	E0416 00:04:15.338258       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1896": dial tcp 192.168.39.254:8443: connect: no route to host
	W0416 00:04:15.338403       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-694782&resourceVersion=1824": dial tcp 192.168.39.254:8443: connect: no route to host
	E0416 00:04:15.338459       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-694782&resourceVersion=1824": dial tcp 192.168.39.254:8443: connect: no route to host
	W0416 00:04:52.201502       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-694782&resourceVersion=1824": dial tcp 192.168.39.254:8443: connect: no route to host
	E0416 00:04:52.201909       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-694782&resourceVersion=1824": dial tcp 192.168.39.254:8443: connect: no route to host
	W0416 00:04:52.201865       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1805": dial tcp 192.168.39.254:8443: connect: no route to host
	E0416 00:04:52.201962       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1805": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [553d7f07f43e6f068bf41c8f0562f161939b4c2f6b1241c11c0db16309a6cbdf] <==
	E0416 00:04:51.481372       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0416 00:04:51.592406       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0416 00:04:51.592476       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0416 00:04:51.593272       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0416 00:04:51.593322       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0416 00:04:51.777335       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0416 00:04:51.777441       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0416 00:04:51.982388       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0416 00:04:51.982484       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0416 00:04:51.996801       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0416 00:04:51.996893       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0416 00:04:52.294230       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0416 00:04:52.294281       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0416 00:04:52.370669       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0416 00:04:52.370760       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0416 00:04:52.460441       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0416 00:04:52.460546       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0416 00:04:54.931842       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0416 00:04:54.931867       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0416 00:04:55.360249       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0416 00:04:55.360343       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0416 00:04:55.788958       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0416 00:04:55.794313       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0416 00:04:55.795184       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0416 00:04:55.802469       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [f1a812f61cab74eac095d0eaf25f044bcd3c0b40542f648850ed3358c4a9cf07] <==
	W0416 00:07:13.024088       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: Get "https://192.168.39.41:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.41:8443: connect: connection refused
	E0416 00:07:13.024231       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.41:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.41:8443: connect: connection refused
	W0416 00:07:18.012955       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.39.41:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.41:8443: connect: connection refused
	E0416 00:07:18.013101       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.41:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.41:8443: connect: connection refused
	W0416 00:07:18.590806       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.41:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.41:8443: connect: connection refused
	E0416 00:07:18.590882       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.41:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.41:8443: connect: connection refused
	W0416 00:07:18.647067       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://192.168.39.41:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.41:8443: connect: connection refused
	E0416 00:07:18.647151       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.41:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.41:8443: connect: connection refused
	W0416 00:07:19.286748       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: Get "https://192.168.39.41:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.41:8443: connect: connection refused
	E0416 00:07:19.286866       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.41:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.41:8443: connect: connection refused
	W0416 00:07:19.972668       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: Get "https://192.168.39.41:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.41:8443: connect: connection refused
	E0416 00:07:19.972737       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.41:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.41:8443: connect: connection refused
	W0416 00:07:20.547074       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: Get "https://192.168.39.41:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.41:8443: connect: connection refused
	E0416 00:07:20.547138       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.41:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.41:8443: connect: connection refused
	W0416 00:07:22.640073       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0416 00:07:22.641024       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0416 00:07:22.640608       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0416 00:07:22.641264       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0416 00:07:22.640817       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0416 00:07:22.643294       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0416 00:07:22.640937       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0416 00:07:22.643420       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0416 00:07:22.643478       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0416 00:07:22.643422       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0416 00:07:38.034632       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 16 00:07:22 ha-694782 kubelet[1369]: W0416 00:07:22.729446    1369 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-694782&resourceVersion=1868": dial tcp 192.168.39.254:8443: connect: no route to host
	Apr 16 00:07:22 ha-694782 kubelet[1369]: E0416 00:07:22.729535    1369 event.go:355] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/events/kube-apiserver-ha-694782.17c699c4f662de4d\": dial tcp 192.168.39.254:8443: connect: no route to host" event="&Event{ObjectMeta:{kube-apiserver-ha-694782.17c699c4f662de4d  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ha-694782,UID:a733f3b6fc63c6f5e84f944f7d76e1a4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ha-694782,},FirstTimestamp:2024-04-16 00:02:59.487366733 +0000 UTC m=+446.135860573,LastTimestamp:2024-04-16 00:03:03.498844709 +0000 UTC m=+450.147338567,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-694782,}"
	Apr 16 00:07:22 ha-694782 kubelet[1369]: E0416 00:07:22.729612    1369 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-694782\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-694782?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Apr 16 00:07:22 ha-694782 kubelet[1369]: E0416 00:07:22.730346    1369 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-694782&resourceVersion=1868": dial tcp 192.168.39.254:8443: connect: no route to host
	Apr 16 00:07:22 ha-694782 kubelet[1369]: I0416 00:07:22.729667    1369 status_manager.go:853] "Failed to get status for pod" podUID="6a7e1a29-8c75-4d1f-978b-471ac0adb888" pod="kube-system/coredns-76f75df574-zdc8q" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-zdc8q\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Apr 16 00:07:23 ha-694782 kubelet[1369]: I0416 00:07:23.588162    1369 scope.go:117] "RemoveContainer" containerID="5916fb4a53a136bf90f5630e11363bf461e58fe804cb1e286d54ad6d31f93c96"
	Apr 16 00:07:23 ha-694782 kubelet[1369]: E0416 00:07:23.588492    1369 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-99cs7_kube-system(5b3bc7e7-fd85-4dc7-ba53-c74fe0d213e3)\"" pod="kube-system/kindnet-99cs7" podUID="5b3bc7e7-fd85-4dc7-ba53-c74fe0d213e3"
	Apr 16 00:07:23 ha-694782 kubelet[1369]: I0416 00:07:23.588810    1369 scope.go:117] "RemoveContainer" containerID="6cbda3b869057546eceda5804afdc7ab4b4ae44d5b847cbab9b96c00cf8783c9"
	Apr 16 00:07:24 ha-694782 kubelet[1369]: I0416 00:07:24.587776    1369 scope.go:117] "RemoveContainer" containerID="7cecd296d2e7c995524a52118c632cabb6ea81bed6f1db33f69f2986ef86204d"
	Apr 16 00:07:25 ha-694782 kubelet[1369]: I0416 00:07:25.800578    1369 status_manager.go:853] "Failed to get status for pod" podUID="3c1f65c0-37b2-4c88-879b-68297e989d44" pod="kube-system/coredns-76f75df574-4sgv4" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/coredns-76f75df574-4sgv4\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Apr 16 00:07:25 ha-694782 kubelet[1369]: E0416 00:07:25.800822    1369 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-694782\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-694782?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Apr 16 00:07:33 ha-694782 kubelet[1369]: I0416 00:07:33.627085    1369 scope.go:117] "RemoveContainer" containerID="b3559e8687b295e6fdcb60e5e4050bd11f66bbe0eadc1cb26920a345a5ac4764"
	Apr 16 00:07:33 ha-694782 kubelet[1369]: E0416 00:07:33.670827    1369 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 00:07:33 ha-694782 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 00:07:33 ha-694782 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 00:07:33 ha-694782 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 00:07:33 ha-694782 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 00:07:38 ha-694782 kubelet[1369]: I0416 00:07:38.588141    1369 scope.go:117] "RemoveContainer" containerID="5916fb4a53a136bf90f5630e11363bf461e58fe804cb1e286d54ad6d31f93c96"
	Apr 16 00:08:11 ha-694782 kubelet[1369]: I0416 00:08:11.587557    1369 kubelet.go:1903] "Trying to delete pod" pod="kube-system/kube-vip-ha-694782" podUID="a8ffb1b9-f55e-4efe-b9a1-7e58a341a2f0"
	Apr 16 00:08:11 ha-694782 kubelet[1369]: I0416 00:08:11.613949    1369 kubelet.go:1908] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-694782"
	Apr 16 00:08:33 ha-694782 kubelet[1369]: E0416 00:08:33.658907    1369 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 00:08:33 ha-694782 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 00:08:33 ha-694782 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 00:08:33 ha-694782 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 00:08:33 ha-694782 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0416 00:09:02.267725   33135 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18647-7542/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-694782 -n ha-694782
helpers_test.go:261: (dbg) Run:  kubectl --context ha-694782 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (371.88s)
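
The "bufio.Scanner: token too long" error in the stderr above comes from reading lastStart.txt line by line with bufio.Scanner, whose default per-line token limit is 64 KiB; any longer log line aborts the scan. Below is a minimal sketch of how such a reader can raise that limit with Scanner.Buffer. The helper name readLongLines, the file name, and the buffer sizes are illustrative assumptions, not minikube's actual logs.go implementation.

package main

import (
	"bufio"
	"fmt"
	"os"
)

// readLongLines reads a file line by line, allowing lines up to maxToken bytes.
// bufio.Scanner defaults to a 64 KiB limit per token, so very long log lines
// trigger "bufio.Scanner: token too long" unless the buffer is enlarged.
func readLongLines(path string, maxToken int) ([]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// Start with a 64 KiB buffer but allow it to grow up to maxToken bytes.
	sc.Buffer(make([]byte, 64*1024), maxToken)

	var lines []string
	for sc.Scan() {
		lines = append(lines, sc.Text())
	}
	return lines, sc.Err()
}

func main() {
	// Illustrative path and limit only; the report's actual file is lastStart.txt.
	lines, err := readLongLines("lastStart.txt", 10*1024*1024)
	if err != nil {
		fmt.Fprintln(os.Stderr, "read failed:", err)
		return
	}
	fmt.Printf("read %d lines\n", len(lines))
}
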

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (141.95s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-694782 stop -v=7 --alsologtostderr: exit status 82 (2m0.480028131s)

                                                
                                                
-- stdout --
	* Stopping node "ha-694782-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0416 00:09:22.372538   33543 out.go:291] Setting OutFile to fd 1 ...
	I0416 00:09:22.372661   33543 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:09:22.372671   33543 out.go:304] Setting ErrFile to fd 2...
	I0416 00:09:22.372677   33543 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:09:22.372893   33543 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-7542/.minikube/bin
	I0416 00:09:22.373123   33543 out.go:298] Setting JSON to false
	I0416 00:09:22.373229   33543 mustload.go:65] Loading cluster: ha-694782
	I0416 00:09:22.373605   33543 config.go:182] Loaded profile config "ha-694782": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 00:09:22.373711   33543 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/config.json ...
	I0416 00:09:22.373921   33543 mustload.go:65] Loading cluster: ha-694782
	I0416 00:09:22.374098   33543 config.go:182] Loaded profile config "ha-694782": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 00:09:22.374133   33543 stop.go:39] StopHost: ha-694782-m04
	I0416 00:09:22.374514   33543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:09:22.374562   33543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:09:22.388797   33543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43147
	I0416 00:09:22.389205   33543 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:09:22.389774   33543 main.go:141] libmachine: Using API Version  1
	I0416 00:09:22.389820   33543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:09:22.390139   33543 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:09:22.392655   33543 out.go:177] * Stopping node "ha-694782-m04"  ...
	I0416 00:09:22.394611   33543 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0416 00:09:22.394645   33543 main.go:141] libmachine: (ha-694782-m04) Calling .DriverName
	I0416 00:09:22.394860   33543 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0416 00:09:22.394883   33543 main.go:141] libmachine: (ha-694782-m04) Calling .GetSSHHostname
	I0416 00:09:22.397757   33543 main.go:141] libmachine: (ha-694782-m04) DBG | domain ha-694782-m04 has defined MAC address 52:54:00:18:7d:b0 in network mk-ha-694782
	I0416 00:09:22.398235   33543 main.go:141] libmachine: (ha-694782-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:7d:b0", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 01:08:46 +0000 UTC Type:0 Mac:52:54:00:18:7d:b0 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:ha-694782-m04 Clientid:01:52:54:00:18:7d:b0}
	I0416 00:09:22.398264   33543 main.go:141] libmachine: (ha-694782-m04) DBG | domain ha-694782-m04 has defined IP address 192.168.39.107 and MAC address 52:54:00:18:7d:b0 in network mk-ha-694782
	I0416 00:09:22.398498   33543 main.go:141] libmachine: (ha-694782-m04) Calling .GetSSHPort
	I0416 00:09:22.398660   33543 main.go:141] libmachine: (ha-694782-m04) Calling .GetSSHKeyPath
	I0416 00:09:22.398817   33543 main.go:141] libmachine: (ha-694782-m04) Calling .GetSSHUsername
	I0416 00:09:22.398939   33543 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m04/id_rsa Username:docker}
	I0416 00:09:22.484464   33543 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0416 00:09:22.540514   33543 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0416 00:09:22.595808   33543 main.go:141] libmachine: Stopping "ha-694782-m04"...
	I0416 00:09:22.595863   33543 main.go:141] libmachine: (ha-694782-m04) Calling .GetState
	I0416 00:09:22.597294   33543 main.go:141] libmachine: (ha-694782-m04) Calling .Stop
	I0416 00:09:22.601143   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 0/120
	I0416 00:09:23.602844   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 1/120
	I0416 00:09:24.605176   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 2/120
	I0416 00:09:25.606541   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 3/120
	I0416 00:09:26.607870   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 4/120
	I0416 00:09:27.609875   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 5/120
	I0416 00:09:28.611476   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 6/120
	I0416 00:09:29.612786   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 7/120
	I0416 00:09:30.614238   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 8/120
	I0416 00:09:31.615577   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 9/120
	I0416 00:09:32.618008   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 10/120
	I0416 00:09:33.619708   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 11/120
	I0416 00:09:34.621217   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 12/120
	I0416 00:09:35.622813   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 13/120
	I0416 00:09:36.624116   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 14/120
	I0416 00:09:37.625973   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 15/120
	I0416 00:09:38.627607   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 16/120
	I0416 00:09:39.629084   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 17/120
	I0416 00:09:40.630553   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 18/120
	I0416 00:09:41.631901   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 19/120
	I0416 00:09:42.633917   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 20/120
	I0416 00:09:43.635675   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 21/120
	I0416 00:09:44.636984   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 22/120
	I0416 00:09:45.638392   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 23/120
	I0416 00:09:46.639578   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 24/120
	I0416 00:09:47.641770   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 25/120
	I0416 00:09:48.643957   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 26/120
	I0416 00:09:49.645201   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 27/120
	I0416 00:09:50.646744   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 28/120
	I0416 00:09:51.647896   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 29/120
	I0416 00:09:52.649986   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 30/120
	I0416 00:09:53.651744   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 31/120
	I0416 00:09:54.653297   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 32/120
	I0416 00:09:55.654735   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 33/120
	I0416 00:09:56.656229   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 34/120
	I0416 00:09:57.658126   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 35/120
	I0416 00:09:58.659429   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 36/120
	I0416 00:09:59.660852   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 37/120
	I0416 00:10:00.662144   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 38/120
	I0416 00:10:01.663489   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 39/120
	I0416 00:10:02.664977   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 40/120
	I0416 00:10:03.667066   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 41/120
	I0416 00:10:04.668390   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 42/120
	I0416 00:10:05.670002   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 43/120
	I0416 00:10:06.671181   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 44/120
	I0416 00:10:07.673046   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 45/120
	I0416 00:10:08.674480   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 46/120
	I0416 00:10:09.675920   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 47/120
	I0416 00:10:10.677300   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 48/120
	I0416 00:10:11.678562   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 49/120
	I0416 00:10:12.680832   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 50/120
	I0416 00:10:13.682099   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 51/120
	I0416 00:10:14.683394   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 52/120
	I0416 00:10:15.684765   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 53/120
	I0416 00:10:16.686474   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 54/120
	I0416 00:10:17.688508   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 55/120
	I0416 00:10:18.689714   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 56/120
	I0416 00:10:19.691060   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 57/120
	I0416 00:10:20.692904   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 58/120
	I0416 00:10:21.694382   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 59/120
	I0416 00:10:22.696406   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 60/120
	I0416 00:10:23.697717   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 61/120
	I0416 00:10:24.698935   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 62/120
	I0416 00:10:25.700261   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 63/120
	I0416 00:10:26.701605   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 64/120
	I0416 00:10:27.703712   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 65/120
	I0416 00:10:28.704901   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 66/120
	I0416 00:10:29.706293   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 67/120
	I0416 00:10:30.708145   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 68/120
	I0416 00:10:31.709592   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 69/120
	I0416 00:10:32.711552   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 70/120
	I0416 00:10:33.713046   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 71/120
	I0416 00:10:34.714469   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 72/120
	I0416 00:10:35.715677   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 73/120
	I0416 00:10:36.717755   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 74/120
	I0416 00:10:37.719734   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 75/120
	I0416 00:10:38.722100   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 76/120
	I0416 00:10:39.723424   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 77/120
	I0416 00:10:40.724770   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 78/120
	I0416 00:10:41.726217   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 79/120
	I0416 00:10:42.728265   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 80/120
	I0416 00:10:43.729912   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 81/120
	I0416 00:10:44.731237   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 82/120
	I0416 00:10:45.732664   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 83/120
	I0416 00:10:46.733950   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 84/120
	I0416 00:10:47.735714   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 85/120
	I0416 00:10:48.737477   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 86/120
	I0416 00:10:49.739523   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 87/120
	I0416 00:10:50.741584   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 88/120
	I0416 00:10:51.742842   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 89/120
	I0416 00:10:52.744724   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 90/120
	I0416 00:10:53.746624   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 91/120
	I0416 00:10:54.748002   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 92/120
	I0416 00:10:55.749136   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 93/120
	I0416 00:10:56.750333   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 94/120
	I0416 00:10:57.751947   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 95/120
	I0416 00:10:58.753292   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 96/120
	I0416 00:10:59.755536   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 97/120
	I0416 00:11:00.756697   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 98/120
	I0416 00:11:01.758008   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 99/120
	I0416 00:11:02.760136   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 100/120
	I0416 00:11:03.761427   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 101/120
	I0416 00:11:04.763549   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 102/120
	I0416 00:11:05.764846   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 103/120
	I0416 00:11:06.766353   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 104/120
	I0416 00:11:07.767898   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 105/120
	I0416 00:11:08.769402   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 106/120
	I0416 00:11:09.771808   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 107/120
	I0416 00:11:10.773294   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 108/120
	I0416 00:11:11.774593   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 109/120
	I0416 00:11:12.776059   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 110/120
	I0416 00:11:13.777310   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 111/120
	I0416 00:11:14.779644   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 112/120
	I0416 00:11:15.781095   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 113/120
	I0416 00:11:16.782354   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 114/120
	I0416 00:11:17.784440   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 115/120
	I0416 00:11:18.785663   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 116/120
	I0416 00:11:19.787657   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 117/120
	I0416 00:11:20.788973   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 118/120
	I0416 00:11:21.790276   33543 main.go:141] libmachine: (ha-694782-m04) Waiting for machine to stop 119/120
	I0416 00:11:22.790799   33543 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0416 00:11:22.790876   33543 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0416 00:11:22.792924   33543 out.go:177] 
	W0416 00:11:22.794381   33543 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0416 00:11:22.794400   33543 out.go:239] * 
	* 
	W0416 00:11:22.796515   33543 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0416 00:11:22.797931   33543 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-694782 stop -v=7 --alsologtostderr": exit status 82
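
The stderr above shows the libmachine stop path polling the node's state once per second for 120 iterations ("Waiting for machine to stop 0/120" through "119/120") and then returning the stop error that minikube surfaces as GUEST_STOP_TIMEOUT with exit status 82. The following is a minimal sketch of that stop-and-poll pattern; the names stopAndWait and vmState and the shortened budget are assumptions for illustration, not minikube's actual stop code.

package main

import (
	"errors"
	"fmt"
	"time"
)

// vmState is a stand-in for the driver's reported machine state.
type vmState string

const (
	stateRunning vmState = "Running"
	stateStopped vmState = "Stopped"
)

var errStopTimeout = errors.New(`unable to stop vm, current state "Running"`)

// stopAndWait asks the driver to stop the VM, then polls its state once per
// second for up to maxWait seconds, mirroring the "Waiting for machine to
// stop i/120" loop in the log above.
func stopAndWait(stop func() error, state func() vmState, maxWait int) error {
	if err := stop(); err != nil {
		return err
	}
	for i := 0; i < maxWait; i++ {
		if state() == stateStopped {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxWait)
		time.Sleep(1 * time.Second)
	}
	// Once the budget is exhausted, the caller maps this error to GUEST_STOP_TIMEOUT.
	return errStopTimeout
}

func main() {
	// Simulate a guest that never reaches the stopped state.
	err := stopAndWait(
		func() error { return nil },            // stop request accepted
		func() vmState { return stateRunning }, // state never changes
		5, // shortened budget for the example; the log above uses 120
	)
	fmt.Println("result:", err)
}
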
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-694782 status -v=7 --alsologtostderr: exit status 3 (18.98595863s)

                                                
                                                
-- stdout --
	ha-694782
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-694782-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-694782-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0416 00:11:22.861513   33971 out.go:291] Setting OutFile to fd 1 ...
	I0416 00:11:22.861656   33971 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:11:22.861666   33971 out.go:304] Setting ErrFile to fd 2...
	I0416 00:11:22.861673   33971 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:11:22.861897   33971 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-7542/.minikube/bin
	I0416 00:11:22.862281   33971 out.go:298] Setting JSON to false
	I0416 00:11:22.862319   33971 mustload.go:65] Loading cluster: ha-694782
	I0416 00:11:22.863250   33971 notify.go:220] Checking for updates...
	I0416 00:11:22.863423   33971 config.go:182] Loaded profile config "ha-694782": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 00:11:22.863448   33971 status.go:255] checking status of ha-694782 ...
	I0416 00:11:22.864357   33971 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:11:22.864412   33971 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:11:22.880174   33971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46409
	I0416 00:11:22.880505   33971 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:11:22.881084   33971 main.go:141] libmachine: Using API Version  1
	I0416 00:11:22.881112   33971 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:11:22.881447   33971 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:11:22.881623   33971 main.go:141] libmachine: (ha-694782) Calling .GetState
	I0416 00:11:22.882986   33971 status.go:330] ha-694782 host status = "Running" (err=<nil>)
	I0416 00:11:22.883002   33971 host.go:66] Checking if "ha-694782" exists ...
	I0416 00:11:22.883320   33971 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:11:22.883356   33971 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:11:22.897685   33971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46721
	I0416 00:11:22.898024   33971 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:11:22.898514   33971 main.go:141] libmachine: Using API Version  1
	I0416 00:11:22.898564   33971 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:11:22.898872   33971 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:11:22.899077   33971 main.go:141] libmachine: (ha-694782) Calling .GetIP
	I0416 00:11:22.901658   33971 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:11:22.902041   33971 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0416 00:11:22.902074   33971 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:11:22.902211   33971 host.go:66] Checking if "ha-694782" exists ...
	I0416 00:11:22.902495   33971 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:11:22.902533   33971 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:11:22.916209   33971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34541
	I0416 00:11:22.916604   33971 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:11:22.917025   33971 main.go:141] libmachine: Using API Version  1
	I0416 00:11:22.917046   33971 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:11:22.917372   33971 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:11:22.917546   33971 main.go:141] libmachine: (ha-694782) Calling .DriverName
	I0416 00:11:22.917750   33971 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 00:11:22.917777   33971 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0416 00:11:22.920473   33971 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:11:22.920953   33971 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0416 00:11:22.920985   33971 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:11:22.921068   33971 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0416 00:11:22.921266   33971 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0416 00:11:22.921426   33971 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0416 00:11:22.921582   33971 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782/id_rsa Username:docker}
	I0416 00:11:23.010807   33971 ssh_runner.go:195] Run: systemctl --version
	I0416 00:11:23.018321   33971 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 00:11:23.039297   33971 kubeconfig.go:125] found "ha-694782" server: "https://192.168.39.254:8443"
	I0416 00:11:23.039329   33971 api_server.go:166] Checking apiserver status ...
	I0416 00:11:23.039360   33971 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 00:11:23.056381   33971 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5053/cgroup
	W0416 00:11:23.066863   33971 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5053/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0416 00:11:23.066910   33971 ssh_runner.go:195] Run: ls
	I0416 00:11:23.072627   33971 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0416 00:11:23.077153   33971 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0416 00:11:23.077187   33971 status.go:422] ha-694782 apiserver status = Running (err=<nil>)
	I0416 00:11:23.077199   33971 status.go:257] ha-694782 status: &{Name:ha-694782 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 00:11:23.077228   33971 status.go:255] checking status of ha-694782-m02 ...
	I0416 00:11:23.077589   33971 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:11:23.077636   33971 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:11:23.092122   33971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35541
	I0416 00:11:23.092542   33971 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:11:23.093064   33971 main.go:141] libmachine: Using API Version  1
	I0416 00:11:23.093085   33971 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:11:23.093385   33971 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:11:23.093563   33971 main.go:141] libmachine: (ha-694782-m02) Calling .GetState
	I0416 00:11:23.095168   33971 status.go:330] ha-694782-m02 host status = "Running" (err=<nil>)
	I0416 00:11:23.095186   33971 host.go:66] Checking if "ha-694782-m02" exists ...
	I0416 00:11:23.095464   33971 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:11:23.095503   33971 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:11:23.110264   33971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37965
	I0416 00:11:23.110656   33971 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:11:23.111132   33971 main.go:141] libmachine: Using API Version  1
	I0416 00:11:23.111165   33971 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:11:23.111501   33971 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:11:23.111699   33971 main.go:141] libmachine: (ha-694782-m02) Calling .GetIP
	I0416 00:11:23.114528   33971 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0416 00:11:23.114917   33971 main.go:141] libmachine: (ha-694782-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:e2:c3", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 01:06:46 +0000 UTC Type:0 Mac:52:54:00:70:e2:c3 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ha-694782-m02 Clientid:01:52:54:00:70:e2:c3}
	I0416 00:11:23.114957   33971 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined IP address 192.168.39.42 and MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0416 00:11:23.115045   33971 host.go:66] Checking if "ha-694782-m02" exists ...
	I0416 00:11:23.115333   33971 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:11:23.115366   33971 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:11:23.130319   33971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34061
	I0416 00:11:23.130797   33971 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:11:23.131241   33971 main.go:141] libmachine: Using API Version  1
	I0416 00:11:23.131261   33971 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:11:23.131589   33971 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:11:23.131799   33971 main.go:141] libmachine: (ha-694782-m02) Calling .DriverName
	I0416 00:11:23.131999   33971 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 00:11:23.132023   33971 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHHostname
	I0416 00:11:23.135072   33971 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0416 00:11:23.135471   33971 main.go:141] libmachine: (ha-694782-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:70:e2:c3", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 01:06:46 +0000 UTC Type:0 Mac:52:54:00:70:e2:c3 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ha-694782-m02 Clientid:01:52:54:00:70:e2:c3}
	I0416 00:11:23.135497   33971 main.go:141] libmachine: (ha-694782-m02) DBG | domain ha-694782-m02 has defined IP address 192.168.39.42 and MAC address 52:54:00:70:e2:c3 in network mk-ha-694782
	I0416 00:11:23.135613   33971 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHPort
	I0416 00:11:23.135729   33971 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHKeyPath
	I0416 00:11:23.135877   33971 main.go:141] libmachine: (ha-694782-m02) Calling .GetSSHUsername
	I0416 00:11:23.136023   33971 sshutil.go:53] new ssh client: &{IP:192.168.39.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m02/id_rsa Username:docker}
	I0416 00:11:23.222880   33971 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 00:11:23.242550   33971 kubeconfig.go:125] found "ha-694782" server: "https://192.168.39.254:8443"
	I0416 00:11:23.242581   33971 api_server.go:166] Checking apiserver status ...
	I0416 00:11:23.242620   33971 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 00:11:23.258417   33971 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1363/cgroup
	W0416 00:11:23.268537   33971 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1363/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0416 00:11:23.268605   33971 ssh_runner.go:195] Run: ls
	I0416 00:11:23.273630   33971 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0416 00:11:23.278214   33971 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0416 00:11:23.278239   33971 status.go:422] ha-694782-m02 apiserver status = Running (err=<nil>)
	I0416 00:11:23.278248   33971 status.go:257] ha-694782-m02 status: &{Name:ha-694782-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 00:11:23.278267   33971 status.go:255] checking status of ha-694782-m04 ...
	I0416 00:11:23.278641   33971 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:11:23.278679   33971 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:11:23.293404   33971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42819
	I0416 00:11:23.293790   33971 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:11:23.294307   33971 main.go:141] libmachine: Using API Version  1
	I0416 00:11:23.294328   33971 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:11:23.294623   33971 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:11:23.294806   33971 main.go:141] libmachine: (ha-694782-m04) Calling .GetState
	I0416 00:11:23.296364   33971 status.go:330] ha-694782-m04 host status = "Running" (err=<nil>)
	I0416 00:11:23.296379   33971 host.go:66] Checking if "ha-694782-m04" exists ...
	I0416 00:11:23.296642   33971 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:11:23.296670   33971 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:11:23.311318   33971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46677
	I0416 00:11:23.311669   33971 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:11:23.312070   33971 main.go:141] libmachine: Using API Version  1
	I0416 00:11:23.312090   33971 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:11:23.312381   33971 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:11:23.312577   33971 main.go:141] libmachine: (ha-694782-m04) Calling .GetIP
	I0416 00:11:23.315169   33971 main.go:141] libmachine: (ha-694782-m04) DBG | domain ha-694782-m04 has defined MAC address 52:54:00:18:7d:b0 in network mk-ha-694782
	I0416 00:11:23.315592   33971 main.go:141] libmachine: (ha-694782-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:7d:b0", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 01:08:46 +0000 UTC Type:0 Mac:52:54:00:18:7d:b0 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:ha-694782-m04 Clientid:01:52:54:00:18:7d:b0}
	I0416 00:11:23.315613   33971 main.go:141] libmachine: (ha-694782-m04) DBG | domain ha-694782-m04 has defined IP address 192.168.39.107 and MAC address 52:54:00:18:7d:b0 in network mk-ha-694782
	I0416 00:11:23.315737   33971 host.go:66] Checking if "ha-694782-m04" exists ...
	I0416 00:11:23.316043   33971 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:11:23.316088   33971 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:11:23.330932   33971 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42619
	I0416 00:11:23.331460   33971 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:11:23.331925   33971 main.go:141] libmachine: Using API Version  1
	I0416 00:11:23.331943   33971 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:11:23.332256   33971 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:11:23.332437   33971 main.go:141] libmachine: (ha-694782-m04) Calling .DriverName
	I0416 00:11:23.332656   33971 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 00:11:23.332685   33971 main.go:141] libmachine: (ha-694782-m04) Calling .GetSSHHostname
	I0416 00:11:23.335252   33971 main.go:141] libmachine: (ha-694782-m04) DBG | domain ha-694782-m04 has defined MAC address 52:54:00:18:7d:b0 in network mk-ha-694782
	I0416 00:11:23.335654   33971 main.go:141] libmachine: (ha-694782-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:18:7d:b0", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 01:08:46 +0000 UTC Type:0 Mac:52:54:00:18:7d:b0 Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:ha-694782-m04 Clientid:01:52:54:00:18:7d:b0}
	I0416 00:11:23.335689   33971 main.go:141] libmachine: (ha-694782-m04) DBG | domain ha-694782-m04 has defined IP address 192.168.39.107 and MAC address 52:54:00:18:7d:b0 in network mk-ha-694782
	I0416 00:11:23.335808   33971 main.go:141] libmachine: (ha-694782-m04) Calling .GetSSHPort
	I0416 00:11:23.335975   33971 main.go:141] libmachine: (ha-694782-m04) Calling .GetSSHKeyPath
	I0416 00:11:23.336105   33971 main.go:141] libmachine: (ha-694782-m04) Calling .GetSSHUsername
	I0416 00:11:23.336243   33971 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782-m04/id_rsa Username:docker}
	W0416 00:11:41.785373   33971 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.107:22: connect: no route to host
	W0416 00:11:41.785466   33971 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.107:22: connect: no route to host
	E0416 00:11:41.785482   33971 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.107:22: connect: no route to host
	I0416 00:11:41.785492   33971 status.go:257] ha-694782-m04 status: &{Name:ha-694782-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0416 00:11:41.785510   33971 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.107:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-694782 status -v=7 --alsologtostderr" : exit status 3
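
The status stderr above shows how each control-plane node is probed: kubelet is checked with "systemctl is-active", the apiserver process is located with pgrep, and the node is reported as "apiserver: Running" only when an HTTPS GET to https://192.168.39.254:8443/healthz returns 200. Below is a minimal sketch of that final healthz probe; the timeout and the InsecureSkipVerify transport are illustrative choices, not minikube's exact client configuration.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// apiserverHealthy reports whether the Kubernetes apiserver behind endpoint
// answers /healthz with HTTP 200, the same signal the status log above uses
// to mark "apiserver: Running".
func apiserverHealthy(endpoint string) bool {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a self-signed certificate in this setup, so the
		// probe skips verification; a production client would pin the cluster CA.
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	// Address taken from the log above; adjust for your own cluster.
	if apiserverHealthy("https://192.168.39.254:8443") {
		fmt.Println("apiserver: Running")
	} else {
		fmt.Println("apiserver: Stopped or unreachable")
	}
}
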
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-694782 -n ha-694782
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-694782 logs -n 25: (1.80393428s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   |    Version     |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	| ssh     | ha-694782 ssh -n ha-694782-m02 sudo cat                                          | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | /home/docker/cp-test_ha-694782-m03_ha-694782-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-694782 cp ha-694782-m03:/home/docker/cp-test.txt                              | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | ha-694782-m04:/home/docker/cp-test_ha-694782-m03_ha-694782-m04.txt               |           |         |                |                     |                     |
	| ssh     | ha-694782 ssh -n                                                                 | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | ha-694782-m03 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-694782 ssh -n ha-694782-m04 sudo cat                                          | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | /home/docker/cp-test_ha-694782-m03_ha-694782-m04.txt                             |           |         |                |                     |                     |
	| cp      | ha-694782 cp testdata/cp-test.txt                                                | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | ha-694782-m04:/home/docker/cp-test.txt                                           |           |         |                |                     |                     |
	| ssh     | ha-694782 ssh -n                                                                 | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | ha-694782-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-694782 cp ha-694782-m04:/home/docker/cp-test.txt                              | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile4178900617/001/cp-test_ha-694782-m04.txt |           |         |                |                     |                     |
	| ssh     | ha-694782 ssh -n                                                                 | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | ha-694782-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| cp      | ha-694782 cp ha-694782-m04:/home/docker/cp-test.txt                              | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | ha-694782:/home/docker/cp-test_ha-694782-m04_ha-694782.txt                       |           |         |                |                     |                     |
	| ssh     | ha-694782 ssh -n                                                                 | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | ha-694782-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-694782 ssh -n ha-694782 sudo cat                                              | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | /home/docker/cp-test_ha-694782-m04_ha-694782.txt                                 |           |         |                |                     |                     |
	| cp      | ha-694782 cp ha-694782-m04:/home/docker/cp-test.txt                              | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | ha-694782-m02:/home/docker/cp-test_ha-694782-m04_ha-694782-m02.txt               |           |         |                |                     |                     |
	| ssh     | ha-694782 ssh -n                                                                 | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | ha-694782-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-694782 ssh -n ha-694782-m02 sudo cat                                          | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | /home/docker/cp-test_ha-694782-m04_ha-694782-m02.txt                             |           |         |                |                     |                     |
	| cp      | ha-694782 cp ha-694782-m04:/home/docker/cp-test.txt                              | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | ha-694782-m03:/home/docker/cp-test_ha-694782-m04_ha-694782-m03.txt               |           |         |                |                     |                     |
	| ssh     | ha-694782 ssh -n                                                                 | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | ha-694782-m04 sudo cat                                                           |           |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |                |                     |                     |
	| ssh     | ha-694782 ssh -n ha-694782-m03 sudo cat                                          | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC | 15 Apr 24 23:59 UTC |
	|         | /home/docker/cp-test_ha-694782-m04_ha-694782-m03.txt                             |           |         |                |                     |                     |
	| node    | ha-694782 node stop m02 -v=7                                                     | ha-694782 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:59 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| node    | ha-694782 node start m02 -v=7                                                    | ha-694782 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:01 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| node    | list -p ha-694782 -v=7                                                           | ha-694782 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:02 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| stop    | -p ha-694782 -v=7                                                                | ha-694782 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:02 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| start   | -p ha-694782 --wait=true -v=7                                                    | ha-694782 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:04 UTC | 16 Apr 24 00:09 UTC |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| node    | list -p ha-694782                                                                | ha-694782 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:09 UTC |                     |
	| node    | ha-694782 node delete m03 -v=7                                                   | ha-694782 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:09 UTC | 16 Apr 24 00:09 UTC |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	| stop    | ha-694782 stop -v=7                                                              | ha-694782 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:09 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |                |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 00:04:54
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 00:04:54.807417   31753 out.go:291] Setting OutFile to fd 1 ...
	I0416 00:04:54.807572   31753 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:04:54.807584   31753 out.go:304] Setting ErrFile to fd 2...
	I0416 00:04:54.807590   31753 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:04:54.807788   31753 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-7542/.minikube/bin
	I0416 00:04:54.808329   31753 out.go:298] Setting JSON to false
	I0416 00:04:54.809228   31753 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2839,"bootTime":1713223056,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0416 00:04:54.809286   31753 start.go:139] virtualization: kvm guest
	I0416 00:04:54.811503   31753 out.go:177] * [ha-694782] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0416 00:04:54.812707   31753 out.go:177]   - MINIKUBE_LOCATION=18647
	I0416 00:04:54.814066   31753 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 00:04:54.812740   31753 notify.go:220] Checking for updates...
	I0416 00:04:54.816456   31753 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0416 00:04:54.817744   31753 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18647-7542/.minikube
	I0416 00:04:54.819176   31753 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0416 00:04:54.820588   31753 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 00:04:54.822324   31753 config.go:182] Loaded profile config "ha-694782": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 00:04:54.822435   31753 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 00:04:54.822850   31753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:04:54.822905   31753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:04:54.837073   31753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44757
	I0416 00:04:54.837563   31753 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:04:54.838143   31753 main.go:141] libmachine: Using API Version  1
	I0416 00:04:54.838164   31753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:04:54.838532   31753 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:04:54.838738   31753 main.go:141] libmachine: (ha-694782) Calling .DriverName
	I0416 00:04:54.871316   31753 out.go:177] * Using the kvm2 driver based on existing profile
	I0416 00:04:54.872574   31753 start.go:297] selected driver: kvm2
	I0416 00:04:54.872587   31753 start.go:901] validating driver "kvm2" against &{Name:ha-694782 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.29.3 ClusterName:ha-694782 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.42 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.107 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk
:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 00:04:54.872730   31753 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 00:04:54.873019   31753 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 00:04:54.873073   31753 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18647-7542/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0416 00:04:54.887722   31753 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0416 00:04:54.888528   31753 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 00:04:54.888598   31753 cni.go:84] Creating CNI manager for ""
	I0416 00:04:54.888613   31753 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0416 00:04:54.888677   31753 start.go:340] cluster config:
	{Name:ha-694782 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-694782 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.42 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.107 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-till
er:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 00:04:54.888808   31753 iso.go:125] acquiring lock: {Name:mk848ef90fbc2a1876645fc8fc16af382c3bcaa9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 00:04:54.891216   31753 out.go:177] * Starting "ha-694782" primary control-plane node in "ha-694782" cluster
	I0416 00:04:54.892323   31753 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 00:04:54.892356   31753 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0416 00:04:54.892368   31753 cache.go:56] Caching tarball of preloaded images
	I0416 00:04:54.892447   31753 preload.go:173] Found /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0416 00:04:54.892460   31753 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0416 00:04:54.892602   31753 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/config.json ...
	I0416 00:04:54.892796   31753 start.go:360] acquireMachinesLock for ha-694782: {Name:mk92bff49461487f8cebf2747ccf61ccb9c772a2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 00:04:54.892848   31753 start.go:364] duration metric: took 34.063µs to acquireMachinesLock for "ha-694782"
	I0416 00:04:54.892869   31753 start.go:96] Skipping create...Using existing machine configuration
	I0416 00:04:54.892886   31753 fix.go:54] fixHost starting: 
	I0416 00:04:54.893223   31753 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:04:54.893263   31753 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:04:54.906850   31753 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34659
	I0416 00:04:54.907187   31753 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:04:54.907627   31753 main.go:141] libmachine: Using API Version  1
	I0416 00:04:54.907647   31753 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:04:54.907909   31753 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:04:54.908136   31753 main.go:141] libmachine: (ha-694782) Calling .DriverName
	I0416 00:04:54.908284   31753 main.go:141] libmachine: (ha-694782) Calling .GetState
	I0416 00:04:54.909694   31753 fix.go:112] recreateIfNeeded on ha-694782: state=Running err=<nil>
	W0416 00:04:54.909722   31753 fix.go:138] unexpected machine state, will restart: <nil>
	I0416 00:04:54.911499   31753 out.go:177] * Updating the running kvm2 "ha-694782" VM ...
	I0416 00:04:54.912784   31753 machine.go:94] provisionDockerMachine start ...
	I0416 00:04:54.912804   31753 main.go:141] libmachine: (ha-694782) Calling .DriverName
	I0416 00:04:54.912975   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0416 00:04:54.915096   31753 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:04:54.915496   31753 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0416 00:04:54.915526   31753 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:04:54.915625   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0416 00:04:54.915779   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0416 00:04:54.915909   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0416 00:04:54.916016   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0416 00:04:54.916117   31753 main.go:141] libmachine: Using SSH client type: native
	I0416 00:04:54.916316   31753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I0416 00:04:54.916329   31753 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 00:04:55.034364   31753 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-694782
	
	I0416 00:04:55.034400   31753 main.go:141] libmachine: (ha-694782) Calling .GetMachineName
	I0416 00:04:55.034647   31753 buildroot.go:166] provisioning hostname "ha-694782"
	I0416 00:04:55.034669   31753 main.go:141] libmachine: (ha-694782) Calling .GetMachineName
	I0416 00:04:55.034837   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0416 00:04:55.037483   31753 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:04:55.037867   31753 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0416 00:04:55.037890   31753 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:04:55.038124   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0416 00:04:55.038291   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0416 00:04:55.038478   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0416 00:04:55.038650   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0416 00:04:55.038812   31753 main.go:141] libmachine: Using SSH client type: native
	I0416 00:04:55.039036   31753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I0416 00:04:55.039053   31753 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-694782 && echo "ha-694782" | sudo tee /etc/hostname
	I0416 00:04:55.172006   31753 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-694782
	
	I0416 00:04:55.172032   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0416 00:04:55.174734   31753 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:04:55.175136   31753 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0416 00:04:55.175171   31753 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:04:55.175370   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0416 00:04:55.175560   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0416 00:04:55.175693   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0416 00:04:55.175835   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0416 00:04:55.176006   31753 main.go:141] libmachine: Using SSH client type: native
	I0416 00:04:55.176213   31753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I0416 00:04:55.176232   31753 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-694782' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-694782/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-694782' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 00:04:55.290147   31753 main.go:141] libmachine: SSH cmd err, output: <nil>: 
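The hostname step above rewrites /etc/hosts inside the guest so that 127.0.1.1 maps to ha-694782. If that mapping ever needs to be checked by hand, the invocation below is a sketch that reuses the profile name and binary path from this run:

    out/minikube-linux-amd64 -p ha-694782 ssh "hostname && grep ha-694782 /etc/hosts"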
	I0416 00:04:55.290180   31753 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18647-7542/.minikube CaCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18647-7542/.minikube}
	I0416 00:04:55.290211   31753 buildroot.go:174] setting up certificates
	I0416 00:04:55.290219   31753 provision.go:84] configureAuth start
	I0416 00:04:55.290228   31753 main.go:141] libmachine: (ha-694782) Calling .GetMachineName
	I0416 00:04:55.290471   31753 main.go:141] libmachine: (ha-694782) Calling .GetIP
	I0416 00:04:55.293093   31753 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:04:55.293461   31753 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0416 00:04:55.293489   31753 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:04:55.293618   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0416 00:04:55.295806   31753 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:04:55.296142   31753 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0416 00:04:55.296181   31753 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:04:55.296307   31753 provision.go:143] copyHostCerts
	I0416 00:04:55.296337   31753 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem
	I0416 00:04:55.296384   31753 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem, removing ...
	I0416 00:04:55.296397   31753 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem
	I0416 00:04:55.296475   31753 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem (1082 bytes)
	I0416 00:04:55.296586   31753 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem
	I0416 00:04:55.296613   31753 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem, removing ...
	I0416 00:04:55.296622   31753 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem
	I0416 00:04:55.296658   31753 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem (1123 bytes)
	I0416 00:04:55.296731   31753 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem
	I0416 00:04:55.296755   31753 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem, removing ...
	I0416 00:04:55.296764   31753 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem
	I0416 00:04:55.296798   31753 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem (1675 bytes)
	I0416 00:04:55.296878   31753 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem org=jenkins.ha-694782 san=[127.0.0.1 192.168.39.41 ha-694782 localhost minikube]
	I0416 00:04:55.464849   31753 provision.go:177] copyRemoteCerts
	I0416 00:04:55.464903   31753 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 00:04:55.464923   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0416 00:04:55.467721   31753 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:04:55.468085   31753 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0416 00:04:55.468116   31753 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:04:55.468253   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0416 00:04:55.468440   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0416 00:04:55.468566   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0416 00:04:55.468700   31753 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782/id_rsa Username:docker}
	I0416 00:04:55.555647   31753 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0416 00:04:55.555736   31753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0416 00:04:55.583491   31753 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0416 00:04:55.583576   31753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0416 00:04:55.614128   31753 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0416 00:04:55.614209   31753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0416 00:04:55.644199   31753 provision.go:87] duration metric: took 353.966861ms to configureAuth
	I0416 00:04:55.644228   31753 buildroot.go:189] setting minikube options for container-runtime
	I0416 00:04:55.644487   31753 config.go:182] Loaded profile config "ha-694782": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 00:04:55.644561   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0416 00:04:55.647183   31753 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:04:55.647530   31753 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0416 00:04:55.647555   31753 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:04:55.647752   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0416 00:04:55.647980   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0416 00:04:55.648191   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0416 00:04:55.648345   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0416 00:04:55.648522   31753 main.go:141] libmachine: Using SSH client type: native
	I0416 00:04:55.648716   31753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I0416 00:04:55.648737   31753 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0416 00:06:26.451540   31753 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0416 00:06:26.451566   31753 machine.go:97] duration metric: took 1m31.538769373s to provisionDockerMachine
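Nearly all of the 1m31.5s reported here is the single SSH command issued at 00:04:55.648 above, which writes the CRIO_MINIKUBE_OPTIONS drop-in and then runs sudo systemctl restart crio; it only returns at 00:06:26.451. A quick way to confirm the drop-in landed (a sketch; profile name taken from this run):

    out/minikube-linux-amd64 -p ha-694782 ssh "cat /etc/sysconfig/crio.minikube && systemctl is-active crio"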
	I0416 00:06:26.451581   31753 start.go:293] postStartSetup for "ha-694782" (driver="kvm2")
	I0416 00:06:26.451593   31753 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 00:06:26.451606   31753 main.go:141] libmachine: (ha-694782) Calling .DriverName
	I0416 00:06:26.451946   31753 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 00:06:26.451973   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0416 00:06:26.455053   31753 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:06:26.455532   31753 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0416 00:06:26.455559   31753 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:06:26.455695   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0416 00:06:26.455841   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0416 00:06:26.456013   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0416 00:06:26.456137   31753 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782/id_rsa Username:docker}
	I0416 00:06:26.544716   31753 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 00:06:26.548847   31753 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 00:06:26.548865   31753 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/addons for local assets ...
	I0416 00:06:26.548914   31753 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/files for local assets ...
	I0416 00:06:26.548997   31753 filesync.go:149] local asset: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem -> 148972.pem in /etc/ssl/certs
	I0416 00:06:26.549008   31753 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem -> /etc/ssl/certs/148972.pem
	I0416 00:06:26.549083   31753 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 00:06:26.558816   31753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /etc/ssl/certs/148972.pem (1708 bytes)
	I0416 00:06:26.584188   31753 start.go:296] duration metric: took 132.595048ms for postStartSetup
	I0416 00:06:26.584234   31753 main.go:141] libmachine: (ha-694782) Calling .DriverName
	I0416 00:06:26.584494   31753 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0416 00:06:26.584515   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0416 00:06:26.586939   31753 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:06:26.587279   31753 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0416 00:06:26.587308   31753 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:06:26.587423   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0416 00:06:26.587588   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0416 00:06:26.587734   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0416 00:06:26.587940   31753 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782/id_rsa Username:docker}
	W0416 00:06:26.674851   31753 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0416 00:06:26.674904   31753 fix.go:56] duration metric: took 1m31.782025943s for fixHost
	I0416 00:06:26.674924   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0416 00:06:26.677495   31753 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:06:26.677807   31753 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0416 00:06:26.677835   31753 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:06:26.677968   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0416 00:06:26.678166   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0416 00:06:26.678355   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0416 00:06:26.678501   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0416 00:06:26.678640   31753 main.go:141] libmachine: Using SSH client type: native
	I0416 00:06:26.678787   31753 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I0416 00:06:26.678796   31753 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 00:06:26.793908   31753 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713225986.763318056
	
	I0416 00:06:26.793933   31753 fix.go:216] guest clock: 1713225986.763318056
	I0416 00:06:26.793940   31753 fix.go:229] Guest: 2024-04-16 00:06:26.763318056 +0000 UTC Remote: 2024-04-16 00:06:26.674911471 +0000 UTC m=+91.914140071 (delta=88.406585ms)
	I0416 00:06:26.793987   31753 fix.go:200] guest clock delta is within tolerance: 88.406585ms
	I0416 00:06:26.793996   31753 start.go:83] releasing machines lock for "ha-694782", held for 1m31.901135118s
	I0416 00:06:26.794018   31753 main.go:141] libmachine: (ha-694782) Calling .DriverName
	I0416 00:06:26.794225   31753 main.go:141] libmachine: (ha-694782) Calling .GetIP
	I0416 00:06:26.796875   31753 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:06:26.797276   31753 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0416 00:06:26.797294   31753 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:06:26.797482   31753 main.go:141] libmachine: (ha-694782) Calling .DriverName
	I0416 00:06:26.797949   31753 main.go:141] libmachine: (ha-694782) Calling .DriverName
	I0416 00:06:26.798119   31753 main.go:141] libmachine: (ha-694782) Calling .DriverName
	I0416 00:06:26.798206   31753 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 00:06:26.798243   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0416 00:06:26.798301   31753 ssh_runner.go:195] Run: cat /version.json
	I0416 00:06:26.798325   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHHostname
	I0416 00:06:26.800720   31753 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:06:26.800973   31753 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:06:26.801088   31753 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0416 00:06:26.801118   31753 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:06:26.801259   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0416 00:06:26.801407   31753 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0416 00:06:26.801425   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0416 00:06:26.801432   31753 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:06:26.801584   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHPort
	I0416 00:06:26.801738   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0416 00:06:26.801740   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHKeyPath
	I0416 00:06:26.801888   31753 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782/id_rsa Username:docker}
	I0416 00:06:26.801955   31753 main.go:141] libmachine: (ha-694782) Calling .GetSSHUsername
	I0416 00:06:26.802079   31753 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/ha-694782/id_rsa Username:docker}
	I0416 00:06:26.915967   31753 ssh_runner.go:195] Run: systemctl --version
	I0416 00:06:26.922503   31753 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0416 00:06:27.088263   31753 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 00:06:27.095257   31753 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 00:06:27.095319   31753 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 00:06:27.104578   31753 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0416 00:06:27.104599   31753 start.go:494] detecting cgroup driver to use...
	I0416 00:06:27.104655   31753 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 00:06:27.122854   31753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 00:06:27.136982   31753 docker.go:217] disabling cri-docker service (if available) ...
	I0416 00:06:27.137040   31753 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 00:06:27.150356   31753 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 00:06:27.163503   31753 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 00:06:27.321079   31753 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 00:06:27.474913   31753 docker.go:233] disabling docker service ...
	I0416 00:06:27.474992   31753 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 00:06:27.493116   31753 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 00:06:27.508239   31753 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 00:06:27.660093   31753 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 00:06:27.812356   31753 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0416 00:06:27.826319   31753 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 00:06:27.846781   31753 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0416 00:06:27.846841   31753 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:06:27.857571   31753 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 00:06:27.857620   31753 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:06:27.867731   31753 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:06:27.877749   31753 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:06:27.888673   31753 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 00:06:27.899177   31753 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:06:27.909720   31753 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:06:27.922116   31753 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:06:27.932879   31753 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 00:06:27.942369   31753 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 00:06:27.951866   31753 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 00:06:28.095931   31753 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0416 00:06:33.900249   31753 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.804287471s)
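The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, conmon_cgroup, and the unprivileged-port sysctl) before CRI-O is restarted. A hedged way to spot-check the resulting file on the node, assuming the same profile:

    out/minikube-linux-amd64 -p ha-694782 ssh \
      "grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"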
	I0416 00:06:33.900288   31753 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0416 00:06:33.900342   31753 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0416 00:06:33.905375   31753 start.go:562] Will wait 60s for crictl version
	I0416 00:06:33.905430   31753 ssh_runner.go:195] Run: which crictl
	I0416 00:06:33.909251   31753 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 00:06:33.948496   31753 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0416 00:06:33.948567   31753 ssh_runner.go:195] Run: crio --version
	I0416 00:06:33.978383   31753 ssh_runner.go:195] Run: crio --version
	I0416 00:06:34.009763   31753 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0416 00:06:34.011090   31753 main.go:141] libmachine: (ha-694782) Calling .GetIP
	I0416 00:06:34.013809   31753 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:06:34.014185   31753 main.go:141] libmachine: (ha-694782) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b4:cb:f8", ip: ""} in network mk-ha-694782: {Iface:virbr1 ExpiryTime:2024-04-16 00:55:05 +0000 UTC Type:0 Mac:52:54:00:b4:cb:f8 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:ha-694782 Clientid:01:52:54:00:b4:cb:f8}
	I0416 00:06:34.014212   31753 main.go:141] libmachine: (ha-694782) DBG | domain ha-694782 has defined IP address 192.168.39.41 and MAC address 52:54:00:b4:cb:f8 in network mk-ha-694782
	I0416 00:06:34.014428   31753 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0416 00:06:34.019233   31753 kubeadm.go:877] updating cluster {Name:ha-694782 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Cl
usterName:ha-694782 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.42 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.107 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 00:06:34.019358   31753 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 00:06:34.019410   31753 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 00:06:34.065997   31753 crio.go:514] all images are preloaded for cri-o runtime.
	I0416 00:06:34.066016   31753 crio.go:433] Images already preloaded, skipping extraction
	I0416 00:06:34.066068   31753 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 00:06:34.100404   31753 crio.go:514] all images are preloaded for cri-o runtime.
	I0416 00:06:34.100421   31753 cache_images.go:84] Images are preloaded, skipping loading
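Both crictl invocations above report the full preloaded image set, so minikube skips extraction and image loading. Listing the images interactively looks like this (a sketch, reusing this run's profile and binary):

    out/minikube-linux-amd64 -p ha-694782 ssh "sudo crictl images"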
	I0416 00:06:34.100429   31753 kubeadm.go:928] updating node { 192.168.39.41 8443 v1.29.3 crio true true} ...
	I0416 00:06:34.100516   31753 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-694782 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.41
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:ha-694782 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 00:06:34.100575   31753 ssh_runner.go:195] Run: crio config
	I0416 00:06:34.147863   31753 cni.go:84] Creating CNI manager for ""
	I0416 00:06:34.147893   31753 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0416 00:06:34.147902   31753 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 00:06:34.147921   31753 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.41 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-694782 NodeName:ha-694782 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.41"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.41 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 00:06:34.148047   31753 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.41
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-694782"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.41
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.41"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
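The rendered kubeadm configuration above is four YAML documents in one file: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. It is copied to /var/tmp/minikube/kubeadm.yaml.new further down in this log; once it is there, it can be sanity-checked with kubeadm itself. This is a sketch: the binary path is an assumption based on the binaries directory shown later, and the config validate subcommand requires a reasonably recent kubeadm.

    out/minikube-linux-amd64 -p ha-694782 ssh \
      "sudo /var/lib/minikube/binaries/v1.29.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new"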
	
	I0416 00:06:34.148069   31753 kube-vip.go:111] generating kube-vip config ...
	I0416 00:06:34.148109   31753 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0416 00:06:34.160356   31753 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0416 00:06:34.160456   31753 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
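The kube-vip static pod rendered above advertises the control-plane VIP 192.168.39.254 on port 8443 (the APIServerHAVIP from the cluster config). Once the control plane is serving again, the VIP can be probed through the unauthenticated health endpoint; this is a sketch, and curl being present in the guest image is an assumption:

    out/minikube-linux-amd64 -p ha-694782 ssh "curl -sk https://192.168.39.254:8443/healthz"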
	I0416 00:06:34.160505   31753 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0416 00:06:34.170046   31753 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 00:06:34.170105   31753 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0416 00:06:34.180299   31753 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0416 00:06:34.197054   31753 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 00:06:34.214106   31753 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0416 00:06:34.231822   31753 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0416 00:06:34.250349   31753 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0416 00:06:34.254629   31753 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 00:06:34.401374   31753 ssh_runner.go:195] Run: sudo systemctl start kubelet
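The scp and systemctl steps above install the kubelet drop-in, the kubelet unit, the rendered kubeadm config and the kube-vip manifest, then start kubelet. A quick check that the files landed where expected (a sketch reusing this run's profile):

    out/minikube-linux-amd64 -p ha-694782 ssh \
      "cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf && ls /etc/kubernetes/manifests/"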
	I0416 00:06:34.416310   31753 certs.go:68] Setting up /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782 for IP: 192.168.39.41
	I0416 00:06:34.416328   31753 certs.go:194] generating shared ca certs ...
	I0416 00:06:34.416347   31753 certs.go:226] acquiring lock for ca certs: {Name:mkcfa1570e683d94647c63485e1bbb8cf0788316 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 00:06:34.416538   31753 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key
	I0416 00:06:34.416595   31753 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key
	I0416 00:06:34.416609   31753 certs.go:256] generating profile certs ...
	I0416 00:06:34.416685   31753 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/client.key
	I0416 00:06:34.416719   31753 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.key.b4243877
	I0416 00:06:34.416743   31753 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.crt.b4243877 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.41 192.168.39.42 192.168.39.202 192.168.39.254]
	I0416 00:06:34.484701   31753 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.crt.b4243877 ...
	I0416 00:06:34.484739   31753 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.crt.b4243877: {Name:mkb0baec2c01d8c82f7217ea6fcb92d550314c3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 00:06:34.484925   31753 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.key.b4243877 ...
	I0416 00:06:34.484947   31753 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.key.b4243877: {Name:mkfaa52d47977a253daa734467f979e5cee152ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 00:06:34.485040   31753 certs.go:381] copying /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.crt.b4243877 -> /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.crt
	I0416 00:06:34.485231   31753 certs.go:385] copying /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.key.b4243877 -> /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.key
	I0416 00:06:34.485407   31753 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/proxy-client.key
	I0416 00:06:34.485426   31753 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0416 00:06:34.485444   31753 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0416 00:06:34.485469   31753 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0416 00:06:34.485497   31753 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0416 00:06:34.485516   31753 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0416 00:06:34.485534   31753 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0416 00:06:34.485556   31753 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0416 00:06:34.485571   31753 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0416 00:06:34.485634   31753 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem (1338 bytes)
	W0416 00:06:34.485678   31753 certs.go:480] ignoring /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897_empty.pem, impossibly tiny 0 bytes
	I0416 00:06:34.485691   31753 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem (1679 bytes)
	I0416 00:06:34.485722   31753 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem (1082 bytes)
	I0416 00:06:34.485753   31753 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem (1123 bytes)
	I0416 00:06:34.485784   31753 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem (1675 bytes)
	I0416 00:06:34.485846   31753 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem (1708 bytes)
	I0416 00:06:34.485883   31753 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0416 00:06:34.485903   31753 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem -> /usr/share/ca-certificates/14897.pem
	I0416 00:06:34.485921   31753 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem -> /usr/share/ca-certificates/148972.pem
	I0416 00:06:34.486550   31753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 00:06:34.515086   31753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 00:06:34.539901   31753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 00:06:34.564024   31753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0416 00:06:34.588241   31753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0416 00:06:34.614591   31753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0416 00:06:34.641748   31753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 00:06:34.667735   31753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/ha-694782/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0416 00:06:34.693132   31753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 00:06:34.718199   31753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem --> /usr/share/ca-certificates/14897.pem (1338 bytes)
	I0416 00:06:34.743208   31753 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /usr/share/ca-certificates/148972.pem (1708 bytes)
	I0416 00:06:34.769137   31753 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 00:06:34.787125   31753 ssh_runner.go:195] Run: openssl version
	I0416 00:06:34.793338   31753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148972.pem && ln -fs /usr/share/ca-certificates/148972.pem /etc/ssl/certs/148972.pem"
	I0416 00:06:34.804574   31753 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148972.pem
	I0416 00:06:34.809228   31753 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 23:49 /usr/share/ca-certificates/148972.pem
	I0416 00:06:34.810908   31753 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148972.pem
	I0416 00:06:34.816840   31753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148972.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 00:06:34.826627   31753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 00:06:34.837683   31753 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 00:06:34.842293   31753 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0416 00:06:34.842345   31753 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 00:06:34.848075   31753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 00:06:34.861943   31753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14897.pem && ln -fs /usr/share/ca-certificates/14897.pem /etc/ssl/certs/14897.pem"
	I0416 00:06:34.877504   31753 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14897.pem
	I0416 00:06:34.882127   31753 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 23:49 /usr/share/ca-certificates/14897.pem
	I0416 00:06:34.882173   31753 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14897.pem
	I0416 00:06:34.887897   31753 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14897.pem /etc/ssl/certs/51391683.0"
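The three blocks above repeat the same pattern for each CA bundle: place the PEM under /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink it as /etc/ssl/certs/&lt;hash&gt;.0 so OpenSSL-based tools can find it. A small sketch of that pattern, assuming the minikubeCA.pem path from the log (this is not minikube's actual implementation):

```go
// certlink.go — illustrative sketch: hash a CA certificate with
// `openssl x509 -hash -noout` and link it into /etc/ssl/certs/<hash>.0.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem" // example path from the log above
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// The log guards this step with `test -L`; mirror that by only creating the
	// symlink when it does not already exist.
	if _, err := os.Lstat(link); os.IsNotExist(err) {
		if err := os.Symlink(pemPath, link); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
	fmt.Println(link, "->", pemPath)
}
```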
	I0416 00:06:34.898056   31753 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 00:06:34.902635   31753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0416 00:06:34.908357   31753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0416 00:06:34.914115   31753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0416 00:06:34.919986   31753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0416 00:06:34.925930   31753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0416 00:06:34.931802   31753 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
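Each `openssl x509 -noout -checkend 86400` run above asserts that the named control-plane certificate will not expire within the next 24 hours. A rough Go equivalent of that check (the certificate path is illustrative, taken from the first check in the log):

```go
// checkend.go — illustrative equivalent of `openssl x509 -noout -checkend 86400`:
// exit non-zero if the certificate expires within the next 86400 seconds.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 86400 seconds")
		os.Exit(1)
	}
	fmt.Println("certificate will not expire within 86400 seconds")
}
```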
	I0416 00:06:34.937571   31753 kubeadm.go:391] StartCluster: {Name:ha-694782 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 Clust
erName:ha-694782 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.42 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.202 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.107 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 00:06:34.937694   31753 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0416 00:06:34.937756   31753 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 00:06:34.982315   31753 cri.go:89] found id: "ca69ec229237f7c2dc9419d3b53e88f4383563c5ad1fc83604d6f4430c5b603c"
	I0416 00:06:34.982346   31753 cri.go:89] found id: "7eccbdc2e01dcc65a90d218b473d663e277b7a46b0958abdd3ba20c433158705"
	I0416 00:06:34.982350   31753 cri.go:89] found id: "02481da89a7ad84df32e1762276a5ead48145e5ce75080ed1b392d2b2a7d445a"
	I0416 00:06:34.982353   31753 cri.go:89] found id: "b4afee0fee237dddcd3e9aaba26c488c038e8b1cfa73874e245e6dbcdf46e387"
	I0416 00:06:34.982355   31753 cri.go:89] found id: "b3559e8687b295e6fdcb60e5e4050bd11f66bbe0eadc1cb26920a345a5ac4764"
	I0416 00:06:34.982359   31753 cri.go:89] found id: "a62edf63e9633afa138049c4146dcf4b2f5135b1fc485fdc8071c8ee36b07a2d"
	I0416 00:06:34.982361   31753 cri.go:89] found id: "b3a501d70f72c9551b55ad858eaec6232180f6589a34825144a580391cdf53a2"
	I0416 00:06:34.982363   31753 cri.go:89] found id: "b55cb00c20162f1cfd9e72b8001f61983630aeb30b827f36d39067dae5d359d7"
	I0416 00:06:34.982369   31753 cri.go:89] found id: "9f8c32adffdfe920d33a3d2aadb4e5d70c83321d5e0ed04b5e651b3338f8868c"
	I0416 00:06:34.982374   31753 cri.go:89] found id: "9d17ec84664efd04bf01be034fec6b0ffd8f3e561bc06951f63cd95553952cf5"
	I0416 00:06:34.982377   31753 cri.go:89] found id: "7d4ea2215ec6217956d87feb4c68ad8ace3136456a7bc720dcc7c721b87f66f4"
	I0416 00:06:34.982379   31753 cri.go:89] found id: "553d7f07f43e6f068bf41c8f0562f161939b4c2f6b1241c11c0db16309a6cbdf"
	I0416 00:06:34.982382   31753 cri.go:89] found id: "8a682dce5ef12dd6c80bdebd8ef67c034ebd1c88d5e144fc177805ad5eb35efe"
	I0416 00:06:34.982384   31753 cri.go:89] found id: ""
	I0416 00:06:34.982431   31753 ssh_runner.go:195] Run: sudo runc list -f json
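The `found id:` entries above come from running `crictl ps -a --quiet` filtered on the `io.kubernetes.pod.namespace=kube-system` label, as shown in the ssh_runner line that precedes them. A small sketch that reproduces the same listing (assumes crictl is on PATH and sudo is available; not minikube's own code):

```go
// crictl_ids.go — illustrative sketch: list kube-system container IDs the
// same way the log does, by label-filtering `crictl ps -a --quiet`.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	for _, id := range strings.Fields(string(out)) {
		fmt.Println("found id:", id)
	}
}
```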
	
	
	==> CRI-O <==
	Apr 16 00:11:42 ha-694782 crio[3829]: time="2024-04-16 00:11:42.457301822Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713226302457276978,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=91368801-ebd2-4cc4-ab6e-5ed44483b9c7 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 00:11:42 ha-694782 crio[3829]: time="2024-04-16 00:11:42.457833979Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=abc56327-00ad-49b2-a0ff-eaf807c504c0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:11:42 ha-694782 crio[3829]: time="2024-04-16 00:11:42.457914796Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=abc56327-00ad-49b2-a0ff-eaf807c504c0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:11:42 ha-694782 crio[3829]: time="2024-04-16 00:11:42.459430592Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:39ab5ccb1b4f0e42bd80678c58d3330ba78f59d6acdea375ddb8487b94a3e557,PodSandboxId:16a1fcad619fb9858cf95428849d64e4b460a64124801b7c095f3dd616487edf,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713226058599295462,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-99cs7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b3bc7e7-fd85-4dc7-ba53-c74fe0d213e3,},Annotations:map[string]string{io.kubernetes.container.hash: e6fad754,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6b524648067e74a7474e617616cbf07b5cfa3884641ba982cdebdcb006d1b1,PodSandboxId:cdde1b0476e6b45e76f6c17e08a42915034ff2a2f13f84edd1d92298383a2f55,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713226044612150370,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b443ba5c534abe08b64f6dcd05be16a,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc7945e20ade550a66bb998ef22646661865ad45284f188ba6e1587fb32a4e40,PodSandboxId:f1a7ff92a40ef7366733dae31aa2e8b15f0ca073c2394c0594fab15f62dd18c3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713226043600969243,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bea9c166-5f83-473f-8f01-335ea1436dad,},Annotations:map[string]string{io.kubernetes.container.hash: 26b87359,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f77c6b5d4b2208fa6eea38d1d96605adfcd20dd6066bc818ecf7fbfd5ce64a4,PodSandboxId:0e9faf34ef3728d1ac4db2d248157df5089aa59030cac0473d3269344359b179,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713226040598038651,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a733f3b6fc63c6f5e84f944f7d76e1a4,},Annotations:map[string]string{io.kubernetes.container.hash: 6258141c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fbcafa65cad837018bf187403310acf5ab60914a8243b0fd7d1f11db5749bd8,PodSandboxId:405c8eb6fa3ddbc512c8b2a9149eab41316500a745f750fb93e6bce5c1fcf398,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713226035006122203,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-vsvrq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d510538f-3535-428b-8933-e3d6de6777eb,},Annotations:map[string]string{io.kubernetes.container.hash: 83ddc528,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ece67cb0b89e483776ea5bea240e0ad3b4df15e4f3a9e4304627cf09c9fb73e,PodSandboxId:59855c2bfcb6823485252892145180c20c60375407f722d38125b292c621593e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713226018272055529,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11d49cc4234c7987e40c6a010ebfc82b,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:6cbda3b869057546eceda5804afdc7ab4b4ae44d5b847cbab9b96c00cf8783c9,PodSandboxId:f1a7ff92a40ef7366733dae31aa2e8b15f0ca073c2394c0594fab15f62dd18c3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713226001397069475,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bea9c166-5f83-473f-8f01-335ea1436dad,},Annotations:map[string]string{io.kubernetes.container.hash: 26b87359,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:3c5fc74cb4adad4166f2600954b0cdf2132c9dfab6bcebaa4ca82a828c264cc9,PodSandboxId:77394139e635e45b1747011d9ef79e2bfa982d467132852bf67c413000543289,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713226001455371861,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d46v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c92235e6-1639-45c0-a92b-bf0cc32bea22,},Annotations:map[string]string{io.kubernetes.container.hash: f515a84d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1a812f
61cab74eac095d0eaf25f044bcd3c0b40542f648850ed3358c4a9cf07,PodSandboxId:b88cd0e11b61fcb7e273bbae9b9f813e3e9a633a535dab171fd701ed953a2d7e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713226001471393373,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d60a0238d152f42b26bd8630ed822b52,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fb07b1996f5bc461dcd7a9620a8
1b1e6c8ba652def7f66ead633fda77b4af08,PodSandboxId:633279be0125a119026ccc5953e99f34930a10d0a33d756cef37e4121d3a58f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713226001354706944,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d68ab2950732b234de6161a8265b14cc,},Annotations:map[string]string{io.kubernetes.container.hash: 94537991,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cecd296d2e7c995524a52118c632cabb6ea81bed6f1db33f69f2986ef86204d,PodSandboxId:c
dde1b0476e6b45e76f6c17e08a42915034ff2a2f13f84edd1d92298383a2f55,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1713226001256489698,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b443ba5c534abe08b64f6dcd05be16a,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2b6c0ffbd95d979207c1b762faf871abef4bb7cf28b4e6d08021db0172f9168,PodSandbo
xId:0e9faf34ef3728d1ac4db2d248157df5089aa59030cac0473d3269344359b179,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1713226001225156029,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a733f3b6fc63c6f5e84f944f7d76e1a4,},Annotations:map[string]string{io.kubernetes.container.hash: 6258141c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5916fb4a53a136bf90f5630e11363bf461e58fe804cb1e286d54ad6d31f93c96,PodSandboxId:16a1fcad619fb9858c
f95428849d64e4b460a64124801b7c095f3dd616487edf,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713225996694034889,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-99cs7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b3bc7e7-fd85-4dc7-ba53-c74fe0d213e3,},Annotations:map[string]string{io.kubernetes.container.hash: e6fad754,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ca994edd19ee7bf77418af14b91ce946c17574b835c117e41bfd965b7c92ac5,PodSandboxId:0ec9ed06a0fb4e24a1215d14da18e280badfceceb20bcbcb8b905
ed30afe614a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713225996450208724,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-zdc8q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a7e1a29-8c75-4d1f-978b-471ac0adb888,},Annotations:map[string]string{io.kubernetes.container.hash: e9e68e98,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a64f8280e0c729e39a8b27c9117de8b73b1b7d2999de6a4769f1c577176f16f8,PodSandboxId:a371dbf543077fc460f104739a29dc30a26b86c08db4838ebda547bf3d5b5d72,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713225996397214193,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-4sgv4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c1f65c0-37b2-4c88-879b-68297e989d44,},Annotations:map[string]string{io.kubernetes.container.hash: 2558243c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPor
t\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10abaa8fc3a416f4f6e6af525fcc65e0613ea769d731660a81e4e6a425fa4d6c,PodSandboxId:df7bc8cc3af912521d7dab8c802c0b04f7447ccb3d192040071875ff6a6ed89d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713225502012625031,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-vsvrq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d510538f-3535-428b-8933-e3d6de6777eb,},Annotations:map[string]string{io.kuber
netes.container.hash: 83ddc528,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a62edf63e9633afa138049c4146dcf4b2f5135b1fc485fdc8071c8ee36b07a2d,PodSandboxId:773aba8a13222bacf0c0e79c78ec31764b5af16b9bc416140f303b36465cce2b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713225349861459156,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-zdc8q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a7e1a29-8c75-4d1f-978b-471ac0adb888,},Annotations:map[string]string{io.kubernetes.container.hash: e9e68e98,i
o.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3a501d70f72c9551b55ad858eaec6232180f6589a34825144a580391cdf53a2,PodSandboxId:cc571f90808ddcdef413b709640e27f67d9d861628a9d232886db9a496a57712,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713225349825077174,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: core
dns-76f75df574-4sgv4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c1f65c0-37b2-4c88-879b-68297e989d44,},Annotations:map[string]string{io.kubernetes.container.hash: 2558243c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b55cb00c20162f1cfd9e72b8001f61983630aeb30b827f36d39067dae5d359d7,PodSandboxId:f34915e87e4008b765d7b34d6619b29c22eddc157e2e96893518ff9709538560,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b
0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1713225346210711538,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d46v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c92235e6-1639-45c0-a92b-bf0cc32bea22,},Annotations:map[string]string{io.kubernetes.container.hash: f515a84d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d17ec84664efd04bf01be034fec6b0ffd8f3e561bc06951f63cd95553952cf5,PodSandboxId:41e04a0d8a0ba492c448f0c8d919cb86eb887cc0a8198d99815e7f7eed50b944,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83
d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713225326796884514,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d68ab2950732b234de6161a8265b14cc,},Annotations:map[string]string{io.kubernetes.container.hash: 94537991,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:553d7f07f43e6f068bf41c8f0562f161939b4c2f6b1241c11c0db16309a6cbdf,PodSandboxId:cc8f87bd6e0dc433462a51cd028d6d774aa14a6c762f3f6a79999daea3870547,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,Create
dAt:1713225326704620344,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d60a0238d152f42b26bd8630ed822b52,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=abc56327-00ad-49b2-a0ff-eaf807c504c0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:11:42 ha-694782 crio[3829]: time="2024-04-16 00:11:42.514251003Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b406b8d9-c5cd-47e1-b2e6-e043c9071b77 name=/runtime.v1.RuntimeService/Version
	Apr 16 00:11:42 ha-694782 crio[3829]: time="2024-04-16 00:11:42.514366920Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b406b8d9-c5cd-47e1-b2e6-e043c9071b77 name=/runtime.v1.RuntimeService/Version
	Apr 16 00:11:42 ha-694782 crio[3829]: time="2024-04-16 00:11:42.515510246Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fdc2fed6-1df2-405e-a317-f86a5cfe0ff4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 00:11:42 ha-694782 crio[3829]: time="2024-04-16 00:11:42.515937204Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713226302515912555,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fdc2fed6-1df2-405e-a317-f86a5cfe0ff4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 00:11:42 ha-694782 crio[3829]: time="2024-04-16 00:11:42.516687161Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8b7b22f0-27c4-4243-92cb-fb55d70ba708 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:11:42 ha-694782 crio[3829]: time="2024-04-16 00:11:42.516761622Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8b7b22f0-27c4-4243-92cb-fb55d70ba708 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:11:42 ha-694782 crio[3829]: time="2024-04-16 00:11:42.517219335Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:39ab5ccb1b4f0e42bd80678c58d3330ba78f59d6acdea375ddb8487b94a3e557,PodSandboxId:16a1fcad619fb9858cf95428849d64e4b460a64124801b7c095f3dd616487edf,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713226058599295462,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-99cs7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b3bc7e7-fd85-4dc7-ba53-c74fe0d213e3,},Annotations:map[string]string{io.kubernetes.container.hash: e6fad754,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6b524648067e74a7474e617616cbf07b5cfa3884641ba982cdebdcb006d1b1,PodSandboxId:cdde1b0476e6b45e76f6c17e08a42915034ff2a2f13f84edd1d92298383a2f55,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713226044612150370,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b443ba5c534abe08b64f6dcd05be16a,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc7945e20ade550a66bb998ef22646661865ad45284f188ba6e1587fb32a4e40,PodSandboxId:f1a7ff92a40ef7366733dae31aa2e8b15f0ca073c2394c0594fab15f62dd18c3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713226043600969243,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bea9c166-5f83-473f-8f01-335ea1436dad,},Annotations:map[string]string{io.kubernetes.container.hash: 26b87359,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f77c6b5d4b2208fa6eea38d1d96605adfcd20dd6066bc818ecf7fbfd5ce64a4,PodSandboxId:0e9faf34ef3728d1ac4db2d248157df5089aa59030cac0473d3269344359b179,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713226040598038651,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a733f3b6fc63c6f5e84f944f7d76e1a4,},Annotations:map[string]string{io.kubernetes.container.hash: 6258141c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fbcafa65cad837018bf187403310acf5ab60914a8243b0fd7d1f11db5749bd8,PodSandboxId:405c8eb6fa3ddbc512c8b2a9149eab41316500a745f750fb93e6bce5c1fcf398,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713226035006122203,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-vsvrq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d510538f-3535-428b-8933-e3d6de6777eb,},Annotations:map[string]string{io.kubernetes.container.hash: 83ddc528,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ece67cb0b89e483776ea5bea240e0ad3b4df15e4f3a9e4304627cf09c9fb73e,PodSandboxId:59855c2bfcb6823485252892145180c20c60375407f722d38125b292c621593e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713226018272055529,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11d49cc4234c7987e40c6a010ebfc82b,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:6cbda3b869057546eceda5804afdc7ab4b4ae44d5b847cbab9b96c00cf8783c9,PodSandboxId:f1a7ff92a40ef7366733dae31aa2e8b15f0ca073c2394c0594fab15f62dd18c3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713226001397069475,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bea9c166-5f83-473f-8f01-335ea1436dad,},Annotations:map[string]string{io.kubernetes.container.hash: 26b87359,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:3c5fc74cb4adad4166f2600954b0cdf2132c9dfab6bcebaa4ca82a828c264cc9,PodSandboxId:77394139e635e45b1747011d9ef79e2bfa982d467132852bf67c413000543289,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713226001455371861,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d46v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c92235e6-1639-45c0-a92b-bf0cc32bea22,},Annotations:map[string]string{io.kubernetes.container.hash: f515a84d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1a812f
61cab74eac095d0eaf25f044bcd3c0b40542f648850ed3358c4a9cf07,PodSandboxId:b88cd0e11b61fcb7e273bbae9b9f813e3e9a633a535dab171fd701ed953a2d7e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713226001471393373,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d60a0238d152f42b26bd8630ed822b52,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fb07b1996f5bc461dcd7a9620a8
1b1e6c8ba652def7f66ead633fda77b4af08,PodSandboxId:633279be0125a119026ccc5953e99f34930a10d0a33d756cef37e4121d3a58f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713226001354706944,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d68ab2950732b234de6161a8265b14cc,},Annotations:map[string]string{io.kubernetes.container.hash: 94537991,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cecd296d2e7c995524a52118c632cabb6ea81bed6f1db33f69f2986ef86204d,PodSandboxId:c
dde1b0476e6b45e76f6c17e08a42915034ff2a2f13f84edd1d92298383a2f55,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1713226001256489698,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b443ba5c534abe08b64f6dcd05be16a,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2b6c0ffbd95d979207c1b762faf871abef4bb7cf28b4e6d08021db0172f9168,PodSandbo
xId:0e9faf34ef3728d1ac4db2d248157df5089aa59030cac0473d3269344359b179,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1713226001225156029,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a733f3b6fc63c6f5e84f944f7d76e1a4,},Annotations:map[string]string{io.kubernetes.container.hash: 6258141c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5916fb4a53a136bf90f5630e11363bf461e58fe804cb1e286d54ad6d31f93c96,PodSandboxId:16a1fcad619fb9858c
f95428849d64e4b460a64124801b7c095f3dd616487edf,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713225996694034889,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-99cs7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b3bc7e7-fd85-4dc7-ba53-c74fe0d213e3,},Annotations:map[string]string{io.kubernetes.container.hash: e6fad754,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ca994edd19ee7bf77418af14b91ce946c17574b835c117e41bfd965b7c92ac5,PodSandboxId:0ec9ed06a0fb4e24a1215d14da18e280badfceceb20bcbcb8b905
ed30afe614a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713225996450208724,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-zdc8q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a7e1a29-8c75-4d1f-978b-471ac0adb888,},Annotations:map[string]string{io.kubernetes.container.hash: e9e68e98,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a64f8280e0c729e39a8b27c9117de8b73b1b7d2999de6a4769f1c577176f16f8,PodSandboxId:a371dbf543077fc460f104739a29dc30a26b86c08db4838ebda547bf3d5b5d72,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713225996397214193,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-4sgv4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c1f65c0-37b2-4c88-879b-68297e989d44,},Annotations:map[string]string{io.kubernetes.container.hash: 2558243c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPor
t\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10abaa8fc3a416f4f6e6af525fcc65e0613ea769d731660a81e4e6a425fa4d6c,PodSandboxId:df7bc8cc3af912521d7dab8c802c0b04f7447ccb3d192040071875ff6a6ed89d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713225502012625031,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-vsvrq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d510538f-3535-428b-8933-e3d6de6777eb,},Annotations:map[string]string{io.kuber
netes.container.hash: 83ddc528,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a62edf63e9633afa138049c4146dcf4b2f5135b1fc485fdc8071c8ee36b07a2d,PodSandboxId:773aba8a13222bacf0c0e79c78ec31764b5af16b9bc416140f303b36465cce2b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713225349861459156,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-zdc8q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a7e1a29-8c75-4d1f-978b-471ac0adb888,},Annotations:map[string]string{io.kubernetes.container.hash: e9e68e98,i
o.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3a501d70f72c9551b55ad858eaec6232180f6589a34825144a580391cdf53a2,PodSandboxId:cc571f90808ddcdef413b709640e27f67d9d861628a9d232886db9a496a57712,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713225349825077174,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: core
dns-76f75df574-4sgv4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c1f65c0-37b2-4c88-879b-68297e989d44,},Annotations:map[string]string{io.kubernetes.container.hash: 2558243c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b55cb00c20162f1cfd9e72b8001f61983630aeb30b827f36d39067dae5d359d7,PodSandboxId:f34915e87e4008b765d7b34d6619b29c22eddc157e2e96893518ff9709538560,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b
0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1713225346210711538,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d46v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c92235e6-1639-45c0-a92b-bf0cc32bea22,},Annotations:map[string]string{io.kubernetes.container.hash: f515a84d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d17ec84664efd04bf01be034fec6b0ffd8f3e561bc06951f63cd95553952cf5,PodSandboxId:41e04a0d8a0ba492c448f0c8d919cb86eb887cc0a8198d99815e7f7eed50b944,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83
d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713225326796884514,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d68ab2950732b234de6161a8265b14cc,},Annotations:map[string]string{io.kubernetes.container.hash: 94537991,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:553d7f07f43e6f068bf41c8f0562f161939b4c2f6b1241c11c0db16309a6cbdf,PodSandboxId:cc8f87bd6e0dc433462a51cd028d6d774aa14a6c762f3f6a79999daea3870547,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,Create
dAt:1713225326704620344,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d60a0238d152f42b26bd8630ed822b52,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8b7b22f0-27c4-4243-92cb-fb55d70ba708 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:11:42 ha-694782 crio[3829]: time="2024-04-16 00:11:42.563335687Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3885fc15-b239-4a25-a12b-38f27aa9e3f9 name=/runtime.v1.RuntimeService/Version
	Apr 16 00:11:42 ha-694782 crio[3829]: time="2024-04-16 00:11:42.563542865Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3885fc15-b239-4a25-a12b-38f27aa9e3f9 name=/runtime.v1.RuntimeService/Version
	Apr 16 00:11:42 ha-694782 crio[3829]: time="2024-04-16 00:11:42.565122577Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2933557a-9938-440d-aec6-01644c7a09de name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 00:11:42 ha-694782 crio[3829]: time="2024-04-16 00:11:42.565549106Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713226302565526249,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2933557a-9938-440d-aec6-01644c7a09de name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 00:11:42 ha-694782 crio[3829]: time="2024-04-16 00:11:42.566687777Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e4a26315-8cd8-489a-be2d-80f5627ecd26 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:11:42 ha-694782 crio[3829]: time="2024-04-16 00:11:42.566749843Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e4a26315-8cd8-489a-be2d-80f5627ecd26 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:11:42 ha-694782 crio[3829]: time="2024-04-16 00:11:42.567514330Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:39ab5ccb1b4f0e42bd80678c58d3330ba78f59d6acdea375ddb8487b94a3e557,PodSandboxId:16a1fcad619fb9858cf95428849d64e4b460a64124801b7c095f3dd616487edf,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713226058599295462,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-99cs7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b3bc7e7-fd85-4dc7-ba53-c74fe0d213e3,},Annotations:map[string]string{io.kubernetes.container.hash: e6fad754,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6b524648067e74a7474e617616cbf07b5cfa3884641ba982cdebdcb006d1b1,PodSandboxId:cdde1b0476e6b45e76f6c17e08a42915034ff2a2f13f84edd1d92298383a2f55,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713226044612150370,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b443ba5c534abe08b64f6dcd05be16a,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc7945e20ade550a66bb998ef22646661865ad45284f188ba6e1587fb32a4e40,PodSandboxId:f1a7ff92a40ef7366733dae31aa2e8b15f0ca073c2394c0594fab15f62dd18c3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713226043600969243,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bea9c166-5f83-473f-8f01-335ea1436dad,},Annotations:map[string]string{io.kubernetes.container.hash: 26b87359,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f77c6b5d4b2208fa6eea38d1d96605adfcd20dd6066bc818ecf7fbfd5ce64a4,PodSandboxId:0e9faf34ef3728d1ac4db2d248157df5089aa59030cac0473d3269344359b179,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713226040598038651,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a733f3b6fc63c6f5e84f944f7d76e1a4,},Annotations:map[string]string{io.kubernetes.container.hash: 6258141c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fbcafa65cad837018bf187403310acf5ab60914a8243b0fd7d1f11db5749bd8,PodSandboxId:405c8eb6fa3ddbc512c8b2a9149eab41316500a745f750fb93e6bce5c1fcf398,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713226035006122203,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-vsvrq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d510538f-3535-428b-8933-e3d6de6777eb,},Annotations:map[string]string{io.kubernetes.container.hash: 83ddc528,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ece67cb0b89e483776ea5bea240e0ad3b4df15e4f3a9e4304627cf09c9fb73e,PodSandboxId:59855c2bfcb6823485252892145180c20c60375407f722d38125b292c621593e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713226018272055529,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11d49cc4234c7987e40c6a010ebfc82b,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:6cbda3b869057546eceda5804afdc7ab4b4ae44d5b847cbab9b96c00cf8783c9,PodSandboxId:f1a7ff92a40ef7366733dae31aa2e8b15f0ca073c2394c0594fab15f62dd18c3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713226001397069475,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bea9c166-5f83-473f-8f01-335ea1436dad,},Annotations:map[string]string{io.kubernetes.container.hash: 26b87359,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:3c5fc74cb4adad4166f2600954b0cdf2132c9dfab6bcebaa4ca82a828c264cc9,PodSandboxId:77394139e635e45b1747011d9ef79e2bfa982d467132852bf67c413000543289,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713226001455371861,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d46v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c92235e6-1639-45c0-a92b-bf0cc32bea22,},Annotations:map[string]string{io.kubernetes.container.hash: f515a84d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1a812f
61cab74eac095d0eaf25f044bcd3c0b40542f648850ed3358c4a9cf07,PodSandboxId:b88cd0e11b61fcb7e273bbae9b9f813e3e9a633a535dab171fd701ed953a2d7e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713226001471393373,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d60a0238d152f42b26bd8630ed822b52,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fb07b1996f5bc461dcd7a9620a8
1b1e6c8ba652def7f66ead633fda77b4af08,PodSandboxId:633279be0125a119026ccc5953e99f34930a10d0a33d756cef37e4121d3a58f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713226001354706944,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d68ab2950732b234de6161a8265b14cc,},Annotations:map[string]string{io.kubernetes.container.hash: 94537991,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cecd296d2e7c995524a52118c632cabb6ea81bed6f1db33f69f2986ef86204d,PodSandboxId:c
dde1b0476e6b45e76f6c17e08a42915034ff2a2f13f84edd1d92298383a2f55,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1713226001256489698,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b443ba5c534abe08b64f6dcd05be16a,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2b6c0ffbd95d979207c1b762faf871abef4bb7cf28b4e6d08021db0172f9168,PodSandbo
xId:0e9faf34ef3728d1ac4db2d248157df5089aa59030cac0473d3269344359b179,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1713226001225156029,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a733f3b6fc63c6f5e84f944f7d76e1a4,},Annotations:map[string]string{io.kubernetes.container.hash: 6258141c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5916fb4a53a136bf90f5630e11363bf461e58fe804cb1e286d54ad6d31f93c96,PodSandboxId:16a1fcad619fb9858c
f95428849d64e4b460a64124801b7c095f3dd616487edf,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713225996694034889,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-99cs7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b3bc7e7-fd85-4dc7-ba53-c74fe0d213e3,},Annotations:map[string]string{io.kubernetes.container.hash: e6fad754,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ca994edd19ee7bf77418af14b91ce946c17574b835c117e41bfd965b7c92ac5,PodSandboxId:0ec9ed06a0fb4e24a1215d14da18e280badfceceb20bcbcb8b905
ed30afe614a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713225996450208724,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-zdc8q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a7e1a29-8c75-4d1f-978b-471ac0adb888,},Annotations:map[string]string{io.kubernetes.container.hash: e9e68e98,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a64f8280e0c729e39a8b27c9117de8b73b1b7d2999de6a4769f1c577176f16f8,PodSandboxId:a371dbf543077fc460f104739a29dc30a26b86c08db4838ebda547bf3d5b5d72,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713225996397214193,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-4sgv4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c1f65c0-37b2-4c88-879b-68297e989d44,},Annotations:map[string]string{io.kubernetes.container.hash: 2558243c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPor
t\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10abaa8fc3a416f4f6e6af525fcc65e0613ea769d731660a81e4e6a425fa4d6c,PodSandboxId:df7bc8cc3af912521d7dab8c802c0b04f7447ccb3d192040071875ff6a6ed89d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713225502012625031,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-vsvrq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d510538f-3535-428b-8933-e3d6de6777eb,},Annotations:map[string]string{io.kuber
netes.container.hash: 83ddc528,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a62edf63e9633afa138049c4146dcf4b2f5135b1fc485fdc8071c8ee36b07a2d,PodSandboxId:773aba8a13222bacf0c0e79c78ec31764b5af16b9bc416140f303b36465cce2b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713225349861459156,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-zdc8q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a7e1a29-8c75-4d1f-978b-471ac0adb888,},Annotations:map[string]string{io.kubernetes.container.hash: e9e68e98,i
o.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3a501d70f72c9551b55ad858eaec6232180f6589a34825144a580391cdf53a2,PodSandboxId:cc571f90808ddcdef413b709640e27f67d9d861628a9d232886db9a496a57712,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713225349825077174,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: core
dns-76f75df574-4sgv4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c1f65c0-37b2-4c88-879b-68297e989d44,},Annotations:map[string]string{io.kubernetes.container.hash: 2558243c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b55cb00c20162f1cfd9e72b8001f61983630aeb30b827f36d39067dae5d359d7,PodSandboxId:f34915e87e4008b765d7b34d6619b29c22eddc157e2e96893518ff9709538560,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b
0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1713225346210711538,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d46v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c92235e6-1639-45c0-a92b-bf0cc32bea22,},Annotations:map[string]string{io.kubernetes.container.hash: f515a84d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d17ec84664efd04bf01be034fec6b0ffd8f3e561bc06951f63cd95553952cf5,PodSandboxId:41e04a0d8a0ba492c448f0c8d919cb86eb887cc0a8198d99815e7f7eed50b944,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83
d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713225326796884514,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d68ab2950732b234de6161a8265b14cc,},Annotations:map[string]string{io.kubernetes.container.hash: 94537991,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:553d7f07f43e6f068bf41c8f0562f161939b4c2f6b1241c11c0db16309a6cbdf,PodSandboxId:cc8f87bd6e0dc433462a51cd028d6d774aa14a6c762f3f6a79999daea3870547,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,Create
dAt:1713225326704620344,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d60a0238d152f42b26bd8630ed822b52,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e4a26315-8cd8-489a-be2d-80f5627ecd26 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:11:42 ha-694782 crio[3829]: time="2024-04-16 00:11:42.614582331Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4b905ac4-cf12-40e6-9b27-796d2e9d8421 name=/runtime.v1.RuntimeService/Version
	Apr 16 00:11:42 ha-694782 crio[3829]: time="2024-04-16 00:11:42.614654497Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4b905ac4-cf12-40e6-9b27-796d2e9d8421 name=/runtime.v1.RuntimeService/Version
	Apr 16 00:11:42 ha-694782 crio[3829]: time="2024-04-16 00:11:42.616124795Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=641ba411-45b5-4b42-b24a-4fe40a611a62 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 00:11:42 ha-694782 crio[3829]: time="2024-04-16 00:11:42.616541177Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713226302616517280,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:141828,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=641ba411-45b5-4b42-b24a-4fe40a611a62 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 00:11:42 ha-694782 crio[3829]: time="2024-04-16 00:11:42.617196490Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f9a24261-4cab-4718-b30e-362de7306c7e name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:11:42 ha-694782 crio[3829]: time="2024-04-16 00:11:42.617261408Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f9a24261-4cab-4718-b30e-362de7306c7e name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:11:42 ha-694782 crio[3829]: time="2024-04-16 00:11:42.617720939Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:39ab5ccb1b4f0e42bd80678c58d3330ba78f59d6acdea375ddb8487b94a3e557,PodSandboxId:16a1fcad619fb9858cf95428849d64e4b460a64124801b7c095f3dd616487edf,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713226058599295462,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-99cs7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b3bc7e7-fd85-4dc7-ba53-c74fe0d213e3,},Annotations:map[string]string{io.kubernetes.container.hash: e6fad754,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad6b524648067e74a7474e617616cbf07b5cfa3884641ba982cdebdcb006d1b1,PodSandboxId:cdde1b0476e6b45e76f6c17e08a42915034ff2a2f13f84edd1d92298383a2f55,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713226044612150370,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b443ba5c534abe08b64f6dcd05be16a,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc7945e20ade550a66bb998ef22646661865ad45284f188ba6e1587fb32a4e40,PodSandboxId:f1a7ff92a40ef7366733dae31aa2e8b15f0ca073c2394c0594fab15f62dd18c3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713226043600969243,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bea9c166-5f83-473f-8f01-335ea1436dad,},Annotations:map[string]string{io.kubernetes.container.hash: 26b87359,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f77c6b5d4b2208fa6eea38d1d96605adfcd20dd6066bc818ecf7fbfd5ce64a4,PodSandboxId:0e9faf34ef3728d1ac4db2d248157df5089aa59030cac0473d3269344359b179,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713226040598038651,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a733f3b6fc63c6f5e84f944f7d76e1a4,},Annotations:map[string]string{io.kubernetes.container.hash: 6258141c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6fbcafa65cad837018bf187403310acf5ab60914a8243b0fd7d1f11db5749bd8,PodSandboxId:405c8eb6fa3ddbc512c8b2a9149eab41316500a745f750fb93e6bce5c1fcf398,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713226035006122203,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-vsvrq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d510538f-3535-428b-8933-e3d6de6777eb,},Annotations:map[string]string{io.kubernetes.container.hash: 83ddc528,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ece67cb0b89e483776ea5bea240e0ad3b4df15e4f3a9e4304627cf09c9fb73e,PodSandboxId:59855c2bfcb6823485252892145180c20c60375407f722d38125b292c621593e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1713226018272055529,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11d49cc4234c7987e40c6a010ebfc82b,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:6cbda3b869057546eceda5804afdc7ab4b4ae44d5b847cbab9b96c00cf8783c9,PodSandboxId:f1a7ff92a40ef7366733dae31aa2e8b15f0ca073c2394c0594fab15f62dd18c3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713226001397069475,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bea9c166-5f83-473f-8f01-335ea1436dad,},Annotations:map[string]string{io.kubernetes.container.hash: 26b87359,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:3c5fc74cb4adad4166f2600954b0cdf2132c9dfab6bcebaa4ca82a828c264cc9,PodSandboxId:77394139e635e45b1747011d9ef79e2bfa982d467132852bf67c413000543289,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713226001455371861,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d46v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c92235e6-1639-45c0-a92b-bf0cc32bea22,},Annotations:map[string]string{io.kubernetes.container.hash: f515a84d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1a812f
61cab74eac095d0eaf25f044bcd3c0b40542f648850ed3358c4a9cf07,PodSandboxId:b88cd0e11b61fcb7e273bbae9b9f813e3e9a633a535dab171fd701ed953a2d7e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713226001471393373,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d60a0238d152f42b26bd8630ed822b52,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8fb07b1996f5bc461dcd7a9620a8
1b1e6c8ba652def7f66ead633fda77b4af08,PodSandboxId:633279be0125a119026ccc5953e99f34930a10d0a33d756cef37e4121d3a58f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713226001354706944,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d68ab2950732b234de6161a8265b14cc,},Annotations:map[string]string{io.kubernetes.container.hash: 94537991,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cecd296d2e7c995524a52118c632cabb6ea81bed6f1db33f69f2986ef86204d,PodSandboxId:c
dde1b0476e6b45e76f6c17e08a42915034ff2a2f13f84edd1d92298383a2f55,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1713226001256489698,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b443ba5c534abe08b64f6dcd05be16a,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2b6c0ffbd95d979207c1b762faf871abef4bb7cf28b4e6d08021db0172f9168,PodSandbo
xId:0e9faf34ef3728d1ac4db2d248157df5089aa59030cac0473d3269344359b179,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1713226001225156029,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a733f3b6fc63c6f5e84f944f7d76e1a4,},Annotations:map[string]string{io.kubernetes.container.hash: 6258141c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5916fb4a53a136bf90f5630e11363bf461e58fe804cb1e286d54ad6d31f93c96,PodSandboxId:16a1fcad619fb9858c
f95428849d64e4b460a64124801b7c095f3dd616487edf,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713225996694034889,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-99cs7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b3bc7e7-fd85-4dc7-ba53-c74fe0d213e3,},Annotations:map[string]string{io.kubernetes.container.hash: e6fad754,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ca994edd19ee7bf77418af14b91ce946c17574b835c117e41bfd965b7c92ac5,PodSandboxId:0ec9ed06a0fb4e24a1215d14da18e280badfceceb20bcbcb8b905
ed30afe614a,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713225996450208724,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-zdc8q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a7e1a29-8c75-4d1f-978b-471ac0adb888,},Annotations:map[string]string{io.kubernetes.container.hash: e9e68e98,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File
,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a64f8280e0c729e39a8b27c9117de8b73b1b7d2999de6a4769f1c577176f16f8,PodSandboxId:a371dbf543077fc460f104739a29dc30a26b86c08db4838ebda547bf3d5b5d72,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713225996397214193,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-4sgv4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c1f65c0-37b2-4c88-879b-68297e989d44,},Annotations:map[string]string{io.kubernetes.container.hash: 2558243c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPor
t\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10abaa8fc3a416f4f6e6af525fcc65e0613ea769d731660a81e4e6a425fa4d6c,PodSandboxId:df7bc8cc3af912521d7dab8c802c0b04f7447ccb3d192040071875ff6a6ed89d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713225502012625031,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-vsvrq,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d510538f-3535-428b-8933-e3d6de6777eb,},Annotations:map[string]string{io.kuber
netes.container.hash: 83ddc528,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a62edf63e9633afa138049c4146dcf4b2f5135b1fc485fdc8071c8ee36b07a2d,PodSandboxId:773aba8a13222bacf0c0e79c78ec31764b5af16b9bc416140f303b36465cce2b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713225349861459156,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-zdc8q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a7e1a29-8c75-4d1f-978b-471ac0adb888,},Annotations:map[string]string{io.kubernetes.container.hash: e9e68e98,i
o.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3a501d70f72c9551b55ad858eaec6232180f6589a34825144a580391cdf53a2,PodSandboxId:cc571f90808ddcdef413b709640e27f67d9d861628a9d232886db9a496a57712,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713225349825077174,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: core
dns-76f75df574-4sgv4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c1f65c0-37b2-4c88-879b-68297e989d44,},Annotations:map[string]string{io.kubernetes.container.hash: 2558243c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b55cb00c20162f1cfd9e72b8001f61983630aeb30b827f36d39067dae5d359d7,PodSandboxId:f34915e87e4008b765d7b34d6619b29c22eddc157e2e96893518ff9709538560,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b
0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1713225346210711538,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-d46v5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c92235e6-1639-45c0-a92b-bf0cc32bea22,},Annotations:map[string]string{io.kubernetes.container.hash: f515a84d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d17ec84664efd04bf01be034fec6b0ffd8f3e561bc06951f63cd95553952cf5,PodSandboxId:41e04a0d8a0ba492c448f0c8d919cb86eb887cc0a8198d99815e7f7eed50b944,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83
d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713225326796884514,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d68ab2950732b234de6161a8265b14cc,},Annotations:map[string]string{io.kubernetes.container.hash: 94537991,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:553d7f07f43e6f068bf41c8f0562f161939b4c2f6b1241c11c0db16309a6cbdf,PodSandboxId:cc8f87bd6e0dc433462a51cd028d6d774aa14a6c762f3f6a79999daea3870547,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,Create
dAt:1713225326704620344,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-694782,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d60a0238d152f42b26bd8630ed822b52,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f9a24261-4cab-4718-b30e-362de7306c7e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	39ab5ccb1b4f0       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      4 minutes ago       Running             kindnet-cni               3                   16a1fcad619fb       kindnet-99cs7
	ad6b524648067       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      4 minutes ago       Running             kube-controller-manager   2                   cdde1b0476e6b       kube-controller-manager-ha-694782
	fc7945e20ade5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       4                   f1a7ff92a40ef       storage-provisioner
	5f77c6b5d4b22       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      4 minutes ago       Running             kube-apiserver            3                   0e9faf34ef372       kube-apiserver-ha-694782
	6fbcafa65cad8       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   405c8eb6fa3dd       busybox-7fdf7869d9-vsvrq
	0ece67cb0b89e       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      4 minutes ago       Running             kube-vip                  0                   59855c2bfcb68       kube-vip-ha-694782
	f1a812f61cab7       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      5 minutes ago       Running             kube-scheduler            1                   b88cd0e11b61f       kube-scheduler-ha-694782
	3c5fc74cb4ada       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      5 minutes ago       Running             kube-proxy                1                   77394139e635e       kube-proxy-d46v5
	6cbda3b869057       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Exited              storage-provisioner       3                   f1a7ff92a40ef       storage-provisioner
	8fb07b1996f5b       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      5 minutes ago       Running             etcd                      1                   633279be0125a       etcd-ha-694782
	7cecd296d2e7c       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      5 minutes ago       Exited              kube-controller-manager   1                   cdde1b0476e6b       kube-controller-manager-ha-694782
	e2b6c0ffbd95d       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      5 minutes ago       Exited              kube-apiserver            2                   0e9faf34ef372       kube-apiserver-ha-694782
	5916fb4a53a13       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      5 minutes ago       Exited              kindnet-cni               2                   16a1fcad619fb       kindnet-99cs7
	1ca994edd19ee       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   0ec9ed06a0fb4       coredns-76f75df574-zdc8q
	a64f8280e0c72       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   a371dbf543077       coredns-76f75df574-4sgv4
	10abaa8fc3a41       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   df7bc8cc3af91       busybox-7fdf7869d9-vsvrq
	a62edf63e9633       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      15 minutes ago      Exited              coredns                   0                   773aba8a13222       coredns-76f75df574-zdc8q
	b3a501d70f72c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      15 minutes ago      Exited              coredns                   0                   cc571f90808dd       coredns-76f75df574-4sgv4
	b55cb00c20162       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      15 minutes ago      Exited              kube-proxy                0                   f34915e87e400       kube-proxy-d46v5
	9d17ec84664ef       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      16 minutes ago      Exited              etcd                      0                   41e04a0d8a0ba       etcd-ha-694782
	553d7f07f43e6       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      16 minutes ago      Exited              kube-scheduler            0                   cc8f87bd6e0dc       kube-scheduler-ha-694782
	
	
	==> coredns [1ca994edd19ee7bf77418af14b91ce946c17574b835c117e41bfd965b7c92ac5] <==
	[INFO] plugin/kubernetes: Trace[1816213601]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Apr-2024 00:06:44.306) (total time: 10000ms):
	Trace[1816213601]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (00:06:54.306)
	Trace[1816213601]: [10.000861027s] [10.000861027s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1551518901]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Apr-2024 00:06:44.929) (total time: 10001ms):
	Trace[1551518901]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:06:54.931)
	Trace[1551518901]: [10.001896549s] [10.001896549s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [a62edf63e9633afa138049c4146dcf4b2f5135b1fc485fdc8071c8ee36b07a2d] <==
	[INFO] 10.244.0.4:58655 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000057875s
	[INFO] 10.244.1.2:57138 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000292175s
	[INFO] 10.244.1.2:42990 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.007959183s
	[INFO] 10.244.1.2:53242 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000142606s
	[INFO] 10.244.1.2:53591 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000169859s
	[INFO] 10.244.2.2:56926 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001802242s
	[INFO] 10.244.2.2:55053 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000174333s
	[INFO] 10.244.2.2:56210 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000166019s
	[INFO] 10.244.2.2:36533 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001257882s
	[INFO] 10.244.0.4:39112 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000127586s
	[INFO] 10.244.0.4:33597 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001242421s
	[INFO] 10.244.0.4:37595 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000130691s
	[INFO] 10.244.0.4:36939 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000030566s
	[INFO] 10.244.0.4:36468 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000043404s
	[INFO] 10.244.1.2:46854 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000237116s
	[INFO] 10.244.1.2:35618 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000139683s
	[INFO] 10.244.2.2:54137 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000211246s
	[INFO] 10.244.2.2:57833 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097841s
	[INFO] 10.244.0.4:45317 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099201s
	[INFO] 10.244.1.2:46870 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000160521s
	[INFO] 10.244.1.2:49971 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000118112s
	[INFO] 10.244.2.2:60977 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000163482s
	[INFO] 10.244.0.4:57367 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000078337s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a64f8280e0c729e39a8b27c9117de8b73b1b7d2999de6a4769f1c577176f16f8] <==
	Trace[1441084192]: [10.001191606s] [10.001191606s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[416207084]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (16-Apr-2024 00:06:46.533) (total time: 10001ms):
	Trace[416207084]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:06:56.534)
	Trace[416207084]: [10.001555902s] [10.001555902s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [b3a501d70f72c9551b55ad858eaec6232180f6589a34825144a580391cdf53a2] <==
	[INFO] 10.244.2.2:55011 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111047s
	[INFO] 10.244.2.2:60878 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000096803s
	[INFO] 10.244.2.2:40329 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000153524s
	[INFO] 10.244.2.2:43908 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109424s
	[INFO] 10.244.0.4:40588 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117575s
	[INFO] 10.244.0.4:34558 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001805219s
	[INFO] 10.244.0.4:44168 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000194119s
	[INFO] 10.244.1.2:54750 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000108471s
	[INFO] 10.244.1.2:46261 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008003s
	[INFO] 10.244.2.2:53899 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000130847s
	[INFO] 10.244.2.2:52030 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000082631s
	[INFO] 10.244.0.4:39295 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000069381s
	[INFO] 10.244.0.4:38441 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000054252s
	[INFO] 10.244.0.4:40273 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000054634s
	[INFO] 10.244.1.2:56481 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000181468s
	[INFO] 10.244.1.2:34800 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000244392s
	[INFO] 10.244.2.2:40684 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136775s
	[INFO] 10.244.2.2:50964 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000154855s
	[INFO] 10.244.2.2:46132 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000089888s
	[INFO] 10.244.0.4:34246 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000124283s
	[INFO] 10.244.0.4:53924 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000125381s
	[INFO] 10.244.0.4:36636 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000079286s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1858&timeout=6m16s&timeoutSeconds=376&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> describe nodes <==
	Name:               ha-694782
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-694782
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e98a202b00bb48daab8bbee2f0c7bcf1556f8388
	                    minikube.k8s.io/name=ha-694782
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_15T23_55_34_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Apr 2024 23:55:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-694782
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 00:11:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 00:07:25 +0000   Mon, 15 Apr 2024 23:55:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 00:07:25 +0000   Mon, 15 Apr 2024 23:55:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 00:07:25 +0000   Mon, 15 Apr 2024 23:55:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 00:07:25 +0000   Mon, 15 Apr 2024 23:55:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.41
	  Hostname:    ha-694782
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e3887d262ea345b0b06d0cfe81d3c704
	  System UUID:                e3887d26-2ea3-45b0-b06d-0cfe81d3c704
	  Boot ID:                    db04bec2-a6d7-4f51-8173-a431f51db6a3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-vsvrq             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-76f75df574-4sgv4             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-76f75df574-zdc8q             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-ha-694782                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-99cs7                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-694782             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-694782    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-d46v5                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-694782             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-694782                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m17s                  kube-proxy       
	  Normal   Starting                 15m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  16m (x8 over 16m)      kubelet          Node ha-694782 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     16m (x7 over 16m)      kubelet          Node ha-694782 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    16m (x8 over 16m)      kubelet          Node ha-694782 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  16m                    kubelet          Node ha-694782 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16m                    kubelet          Node ha-694782 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     16m                    kubelet          Node ha-694782 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           15m                    node-controller  Node ha-694782 event: Registered Node ha-694782 in Controller
	  Normal   NodeReady                15m                    kubelet          Node ha-694782 status is now: NodeReady
	  Normal   RegisteredNode           14m                    node-controller  Node ha-694782 event: Registered Node ha-694782 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-694782 event: Registered Node ha-694782 in Controller
	  Warning  ContainerGCFailed        5m10s (x2 over 6m10s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m12s                  node-controller  Node ha-694782 event: Registered Node ha-694782 in Controller
	  Normal   RegisteredNode           4m8s                   node-controller  Node ha-694782 event: Registered Node ha-694782 in Controller
	  Normal   RegisteredNode           3m13s                  node-controller  Node ha-694782 event: Registered Node ha-694782 in Controller
	
	
	Name:               ha-694782-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-694782-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e98a202b00bb48daab8bbee2f0c7bcf1556f8388
	                    minikube.k8s.io/name=ha-694782
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_15T23_56_49_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Apr 2024 23:56:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-694782-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 00:11:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 00:08:05 +0000   Tue, 16 Apr 2024 00:07:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 00:08:05 +0000   Tue, 16 Apr 2024 00:07:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 00:08:05 +0000   Tue, 16 Apr 2024 00:07:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 00:08:05 +0000   Tue, 16 Apr 2024 00:07:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.42
	  Hostname:    ha-694782-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f33e7ca96e8a461196cc015dc9cdb390
	  System UUID:                f33e7ca9-6e8a-4611-96cc-015dc9cdb390
	  Boot ID:                    cb7bc6ac-1cdb-414c-9a20-0ca4dbfea336
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-bwtdm                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-694782-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-qvp8b                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-694782-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-694782-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-vbfhn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-694782-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-694782-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m                     kube-proxy       
	  Normal  Starting                 14m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-694782-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-694782-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node ha-694782-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                    node-controller  Node ha-694782-m02 event: Registered Node ha-694782-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-694782-m02 event: Registered Node ha-694782-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-694782-m02 event: Registered Node ha-694782-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-694782-m02 status is now: NodeNotReady
	  Normal  Starting                 4m46s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m46s (x8 over 4m46s)  kubelet          Node ha-694782-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m46s (x8 over 4m46s)  kubelet          Node ha-694782-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m46s (x7 over 4m46s)  kubelet          Node ha-694782-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m46s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m12s                  node-controller  Node ha-694782-m02 event: Registered Node ha-694782-m02 in Controller
	  Normal  RegisteredNode           4m8s                   node-controller  Node ha-694782-m02 event: Registered Node ha-694782-m02 in Controller
	  Normal  RegisteredNode           3m13s                  node-controller  Node ha-694782-m02 event: Registered Node ha-694782-m02 in Controller
	
	
	Name:               ha-694782-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-694782-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e98a202b00bb48daab8bbee2f0c7bcf1556f8388
	                    minikube.k8s.io/name=ha-694782
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_15T23_58_56_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 15 Apr 2024 23:58:55 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-694782-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 00:09:15 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 16 Apr 2024 00:08:55 +0000   Tue, 16 Apr 2024 00:09:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 16 Apr 2024 00:08:55 +0000   Tue, 16 Apr 2024 00:09:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 16 Apr 2024 00:08:55 +0000   Tue, 16 Apr 2024 00:09:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 16 Apr 2024 00:08:55 +0000   Tue, 16 Apr 2024 00:09:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.107
	  Hostname:    ha-694782-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 aceb25acc5a84fcca647b3b66273edbd
	  System UUID:                aceb25ac-c5a8-4fcc-a647-b3b66273edbd
	  Boot ID:                    881b78ba-6d0b-4ba4-9e4e-14adbfa06532
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-lxw6p    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-k6vbr               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-mgwnv            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   Starting                 2m44s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x2 over 12m)      kubelet          Node ha-694782-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x2 over 12m)      kubelet          Node ha-694782-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x2 over 12m)      kubelet          Node ha-694782-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                    node-controller  Node ha-694782-m04 event: Registered Node ha-694782-m04 in Controller
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           12m                    node-controller  Node ha-694782-m04 event: Registered Node ha-694782-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-694782-m04 event: Registered Node ha-694782-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-694782-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m11s                  node-controller  Node ha-694782-m04 event: Registered Node ha-694782-m04 in Controller
	  Normal   RegisteredNode           4m8s                   node-controller  Node ha-694782-m04 event: Registered Node ha-694782-m04 in Controller
	  Normal   NodeNotReady             3m31s                  node-controller  Node ha-694782-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m13s                  node-controller  Node ha-694782-m04 event: Registered Node ha-694782-m04 in Controller
	  Normal   Starting                 2m48s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m48s (x2 over 2m48s)  kubelet          Node ha-694782-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m48s (x2 over 2m48s)  kubelet          Node ha-694782-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m48s (x2 over 2m48s)  kubelet          Node ha-694782-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m48s                  kubelet          Node ha-694782-m04 has been rebooted, boot id: 881b78ba-6d0b-4ba4-9e4e-14adbfa06532
	  Normal   NodeReady                2m48s                  kubelet          Node ha-694782-m04 status is now: NodeReady
	  Normal   NodeNotReady             108s                   node-controller  Node ha-694782-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.056422] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063543] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.160170] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.142019] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.294658] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +4.386960] systemd-fstab-generator[766]: Ignoring "noauto" option for root device
	[  +0.057175] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.857368] systemd-fstab-generator[945]: Ignoring "noauto" option for root device
	[  +1.229656] kauditd_printk_skb: 57 callbacks suppressed
	[  +6.643467] systemd-fstab-generator[1362]: Ignoring "noauto" option for root device
	[  +0.095801] kauditd_printk_skb: 40 callbacks suppressed
	[ +12.797173] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.821758] kauditd_printk_skb: 72 callbacks suppressed
	[Apr16 00:03] kauditd_printk_skb: 1 callbacks suppressed
	[Apr16 00:06] systemd-fstab-generator[3747]: Ignoring "noauto" option for root device
	[  +0.154687] systemd-fstab-generator[3759]: Ignoring "noauto" option for root device
	[  +0.183230] systemd-fstab-generator[3773]: Ignoring "noauto" option for root device
	[  +0.154031] systemd-fstab-generator[3785]: Ignoring "noauto" option for root device
	[  +0.284896] systemd-fstab-generator[3813]: Ignoring "noauto" option for root device
	[  +6.302743] systemd-fstab-generator[3919]: Ignoring "noauto" option for root device
	[  +0.087090] kauditd_printk_skb: 100 callbacks suppressed
	[  +6.604373] kauditd_printk_skb: 42 callbacks suppressed
	[ +17.255875] kauditd_printk_skb: 56 callbacks suppressed
	[Apr16 00:07] kauditd_printk_skb: 2 callbacks suppressed
	[ +22.629454] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [8fb07b1996f5bc461dcd7a9620a81b1e6c8ba652def7f66ead633fda77b4af08] <==
	{"level":"info","ts":"2024-04-16T00:08:12.853883Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"903e0dada8362847","remote-peer-id":"ee1f4cd48e860d39"}
	{"level":"info","ts":"2024-04-16T00:08:12.854891Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"903e0dada8362847","to":"ee1f4cd48e860d39","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-04-16T00:08:12.854947Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"903e0dada8362847","remote-peer-id":"ee1f4cd48e860d39"}
	{"level":"info","ts":"2024-04-16T00:08:17.575347Z","caller":"traceutil/trace.go:171","msg":"trace[398373057] transaction","detail":"{read_only:false; response_revision:2402; number_of_response:1; }","duration":"110.234723ms","start":"2024-04-16T00:08:17.4651Z","end":"2024-04-16T00:08:17.575335Z","steps":["trace[398373057] 'process raft request'  (duration: 104.094338ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T00:09:08.732449Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"903e0dada8362847 switched to configuration voters=(10393760029520308295 15880985015929896883)"}
	{"level":"info","ts":"2024-04-16T00:09:08.734761Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"b5cacf25c2f2940e","local-member-id":"903e0dada8362847","removed-remote-peer-id":"ee1f4cd48e860d39","removed-remote-peer-urls":["https://192.168.39.202:2380"]}
	{"level":"info","ts":"2024-04-16T00:09:08.734886Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"ee1f4cd48e860d39"}
	{"level":"warn","ts":"2024-04-16T00:09:08.735254Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"ee1f4cd48e860d39"}
	{"level":"info","ts":"2024-04-16T00:09:08.735458Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"ee1f4cd48e860d39"}
	{"level":"warn","ts":"2024-04-16T00:09:08.734933Z","caller":"etcdserver/server.go:980","msg":"rejected Raft message from removed member","local-member-id":"903e0dada8362847","removed-member-id":"ee1f4cd48e860d39"}
	{"level":"warn","ts":"2024-04-16T00:09:08.735812Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"ee1f4cd48e860d39"}
	{"level":"info","ts":"2024-04-16T00:09:08.735915Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"ee1f4cd48e860d39"}
	{"level":"warn","ts":"2024-04-16T00:09:08.735841Z","caller":"rafthttp/peer.go:180","msg":"failed to process Raft message","error":"cannot process message from removed member"}
	{"level":"warn","ts":"2024-04-16T00:09:08.736148Z","caller":"etcdserver/server.go:980","msg":"rejected Raft message from removed member","local-member-id":"903e0dada8362847","removed-member-id":"ee1f4cd48e860d39"}
	{"level":"warn","ts":"2024-04-16T00:09:08.736166Z","caller":"rafthttp/peer.go:180","msg":"failed to process Raft message","error":"cannot process message from removed member"}
	{"level":"info","ts":"2024-04-16T00:09:08.736265Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"903e0dada8362847","remote-peer-id":"ee1f4cd48e860d39"}
	{"level":"warn","ts":"2024-04-16T00:09:08.736498Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"903e0dada8362847","remote-peer-id":"ee1f4cd48e860d39","error":"context canceled"}
	{"level":"warn","ts":"2024-04-16T00:09:08.736668Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"ee1f4cd48e860d39","error":"failed to read ee1f4cd48e860d39 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-04-16T00:09:08.736738Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"903e0dada8362847","remote-peer-id":"ee1f4cd48e860d39"}
	{"level":"warn","ts":"2024-04-16T00:09:08.737206Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"903e0dada8362847","remote-peer-id":"ee1f4cd48e860d39","error":"context canceled"}
	{"level":"info","ts":"2024-04-16T00:09:08.737334Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"903e0dada8362847","remote-peer-id":"ee1f4cd48e860d39"}
	{"level":"info","ts":"2024-04-16T00:09:08.737387Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"ee1f4cd48e860d39"}
	{"level":"info","ts":"2024-04-16T00:09:08.737432Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"903e0dada8362847","removed-remote-peer-id":"ee1f4cd48e860d39"}
	{"level":"warn","ts":"2024-04-16T00:09:08.748616Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"903e0dada8362847","remote-peer-id-stream-handler":"903e0dada8362847","remote-peer-id-from":"ee1f4cd48e860d39"}
	{"level":"warn","ts":"2024-04-16T00:09:08.759201Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.202:38722","server-name":"","error":"EOF"}
	
	
	==> etcd [9d17ec84664efd04bf01be034fec6b0ffd8f3e561bc06951f63cd95553952cf5] <==
	2024/04/16 00:04:55 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/04/16 00:04:55 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/04/16 00:04:55 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/04/16 00:04:55 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/04/16 00:04:55 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-04-16T00:04:55.866376Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.41:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-16T00:04:55.866653Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.41:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-16T00:04:55.866776Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"903e0dada8362847","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-04-16T00:04:55.867165Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"dc6497bb1dcd7fb3"}
	{"level":"info","ts":"2024-04-16T00:04:55.867227Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"dc6497bb1dcd7fb3"}
	{"level":"info","ts":"2024-04-16T00:04:55.867277Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"dc6497bb1dcd7fb3"}
	{"level":"info","ts":"2024-04-16T00:04:55.867426Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"903e0dada8362847","remote-peer-id":"dc6497bb1dcd7fb3"}
	{"level":"info","ts":"2024-04-16T00:04:55.867479Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"903e0dada8362847","remote-peer-id":"dc6497bb1dcd7fb3"}
	{"level":"info","ts":"2024-04-16T00:04:55.867535Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"903e0dada8362847","remote-peer-id":"dc6497bb1dcd7fb3"}
	{"level":"info","ts":"2024-04-16T00:04:55.867563Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"dc6497bb1dcd7fb3"}
	{"level":"info","ts":"2024-04-16T00:04:55.867586Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"ee1f4cd48e860d39"}
	{"level":"info","ts":"2024-04-16T00:04:55.867612Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"ee1f4cd48e860d39"}
	{"level":"info","ts":"2024-04-16T00:04:55.867656Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"ee1f4cd48e860d39"}
	{"level":"info","ts":"2024-04-16T00:04:55.867788Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"903e0dada8362847","remote-peer-id":"ee1f4cd48e860d39"}
	{"level":"info","ts":"2024-04-16T00:04:55.867952Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"903e0dada8362847","remote-peer-id":"ee1f4cd48e860d39"}
	{"level":"info","ts":"2024-04-16T00:04:55.868092Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"903e0dada8362847","remote-peer-id":"ee1f4cd48e860d39"}
	{"level":"info","ts":"2024-04-16T00:04:55.86815Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"ee1f4cd48e860d39"}
	{"level":"info","ts":"2024-04-16T00:04:55.871091Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.41:2380"}
	{"level":"info","ts":"2024-04-16T00:04:55.871257Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.41:2380"}
	{"level":"info","ts":"2024-04-16T00:04:55.871302Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-694782","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.41:2380"],"advertise-client-urls":["https://192.168.39.41:2379"]}
	
	
	==> kernel <==
	 00:11:43 up 16 min,  0 users,  load average: 0.31, 0.38, 0.29
	Linux ha-694782 5.10.207 #1 SMP Mon Apr 15 15:01:07 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [39ab5ccb1b4f0e42bd80678c58d3330ba78f59d6acdea375ddb8487b94a3e557] <==
	I0416 00:10:59.881563       1 main.go:250] Node ha-694782-m04 has CIDR [10.244.3.0/24] 
	I0416 00:11:09.893671       1 main.go:223] Handling node with IPs: map[192.168.39.41:{}]
	I0416 00:11:09.894194       1 main.go:227] handling current node
	I0416 00:11:09.894303       1 main.go:223] Handling node with IPs: map[192.168.39.42:{}]
	I0416 00:11:09.894338       1 main.go:250] Node ha-694782-m02 has CIDR [10.244.1.0/24] 
	I0416 00:11:09.894576       1 main.go:223] Handling node with IPs: map[192.168.39.107:{}]
	I0416 00:11:09.894617       1 main.go:250] Node ha-694782-m04 has CIDR [10.244.3.0/24] 
	I0416 00:11:19.904862       1 main.go:223] Handling node with IPs: map[192.168.39.41:{}]
	I0416 00:11:19.905431       1 main.go:227] handling current node
	I0416 00:11:19.905530       1 main.go:223] Handling node with IPs: map[192.168.39.42:{}]
	I0416 00:11:19.905559       1 main.go:250] Node ha-694782-m02 has CIDR [10.244.1.0/24] 
	I0416 00:11:19.905787       1 main.go:223] Handling node with IPs: map[192.168.39.107:{}]
	I0416 00:11:19.905809       1 main.go:250] Node ha-694782-m04 has CIDR [10.244.3.0/24] 
	I0416 00:11:29.920685       1 main.go:223] Handling node with IPs: map[192.168.39.41:{}]
	I0416 00:11:29.920905       1 main.go:227] handling current node
	I0416 00:11:29.921082       1 main.go:223] Handling node with IPs: map[192.168.39.42:{}]
	I0416 00:11:29.921166       1 main.go:250] Node ha-694782-m02 has CIDR [10.244.1.0/24] 
	I0416 00:11:29.921356       1 main.go:223] Handling node with IPs: map[192.168.39.107:{}]
	I0416 00:11:29.921421       1 main.go:250] Node ha-694782-m04 has CIDR [10.244.3.0/24] 
	I0416 00:11:39.954358       1 main.go:223] Handling node with IPs: map[192.168.39.41:{}]
	I0416 00:11:39.954403       1 main.go:227] handling current node
	I0416 00:11:39.954416       1 main.go:223] Handling node with IPs: map[192.168.39.42:{}]
	I0416 00:11:39.954422       1 main.go:250] Node ha-694782-m02 has CIDR [10.244.1.0/24] 
	I0416 00:11:39.954528       1 main.go:223] Handling node with IPs: map[192.168.39.107:{}]
	I0416 00:11:39.954533       1 main.go:250] Node ha-694782-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [5916fb4a53a136bf90f5630e11363bf461e58fe804cb1e286d54ad6d31f93c96] <==
	I0416 00:06:37.254246       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0416 00:06:37.254344       1 main.go:107] hostIP = 192.168.39.41
	podIP = 192.168.39.41
	I0416 00:06:37.254570       1 main.go:116] setting mtu 1500 for CNI 
	I0416 00:06:37.254597       1 main.go:146] kindnetd IP family: "ipv4"
	I0416 00:06:37.254620       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0416 00:06:37.557191       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0416 00:06:37.557624       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0416 00:06:38.560303       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0416 00:06:40.561133       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0416 00:06:53.569370       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
	
	
	==> kube-apiserver [5f77c6b5d4b2208fa6eea38d1d96605adfcd20dd6066bc818ecf7fbfd5ce64a4] <==
	I0416 00:07:22.613731       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0416 00:07:22.613762       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0416 00:07:22.613796       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0416 00:07:22.613950       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0416 00:07:22.619154       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0416 00:07:22.703830       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0416 00:07:22.710710       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0416 00:07:22.762554       1 shared_informer.go:318] Caches are synced for configmaps
	I0416 00:07:22.763782       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0416 00:07:22.768501       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0416 00:07:22.768568       1 aggregator.go:165] initial CRD sync complete...
	I0416 00:07:22.768604       1 autoregister_controller.go:141] Starting autoregister controller
	I0416 00:07:22.768626       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0416 00:07:22.768667       1 cache.go:39] Caches are synced for autoregister controller
	I0416 00:07:22.770164       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0416 00:07:22.770252       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0416 00:07:22.777094       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0416 00:07:22.779545       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	W0416 00:07:22.788572       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.202 192.168.39.42]
	I0416 00:07:22.789914       1 controller.go:624] quota admission added evaluator for: endpoints
	I0416 00:07:22.797232       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0416 00:07:22.801159       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0416 00:07:23.566521       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0416 00:07:24.017955       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.202 192.168.39.41 192.168.39.42]
	W0416 00:07:34.033455       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.41 192.168.39.42]
	
	
	==> kube-apiserver [e2b6c0ffbd95d979207c1b762faf871abef4bb7cf28b4e6d08021db0172f9168] <==
	I0416 00:06:41.762492       1 options.go:222] external host was not specified, using 192.168.39.41
	I0416 00:06:41.766786       1 server.go:148] Version: v1.29.3
	I0416 00:06:41.766835       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 00:06:42.411116       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I0416 00:06:42.418839       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0416 00:06:42.418928       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0416 00:06:42.419248       1 instance.go:297] Using reconciler: lease
	W0416 00:07:02.409306       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0416 00:07:02.410718       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0416 00:07:02.420373       1 instance.go:290] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [7cecd296d2e7c995524a52118c632cabb6ea81bed6f1db33f69f2986ef86204d] <==
	I0416 00:06:42.650659       1 serving.go:380] Generated self-signed cert in-memory
	I0416 00:06:43.060651       1 controllermanager.go:187] "Starting" version="v1.29.3"
	I0416 00:06:43.060748       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 00:06:43.062884       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0416 00:06:43.063218       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0416 00:06:43.064054       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0416 00:06:43.064153       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	E0416 00:07:03.426338       1 controllermanager.go:232] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.41:8443/healthz\": dial tcp 192.168.39.41:8443: connect: connection refused"
	
	
	==> kube-controller-manager [ad6b524648067e74a7474e617616cbf07b5cfa3884641ba982cdebdcb006d1b1] <==
	I0416 00:09:05.646384       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="97.228µs"
	I0416 00:09:07.503424       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="115.924µs"
	I0416 00:09:07.959589       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="76.824µs"
	I0416 00:09:07.982338       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="103.986µs"
	I0416 00:09:07.988838       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="87.9µs"
	I0416 00:09:08.532417       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="16.606761ms"
	I0416 00:09:08.532646       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="67.406µs"
	I0416 00:09:20.272037       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-694782-m04"
	I0416 00:09:20.859881       1 event.go:376] "Event occurred" object="ha-694782-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node ha-694782-m03 event: Removing Node ha-694782-m03 from Controller"
	E0416 00:09:35.789151       1 gc_controller.go:153] "Failed to get node" err="node \"ha-694782-m03\" not found" node="ha-694782-m03"
	E0416 00:09:35.789294       1 gc_controller.go:153] "Failed to get node" err="node \"ha-694782-m03\" not found" node="ha-694782-m03"
	E0416 00:09:35.789322       1 gc_controller.go:153] "Failed to get node" err="node \"ha-694782-m03\" not found" node="ha-694782-m03"
	E0416 00:09:35.789346       1 gc_controller.go:153] "Failed to get node" err="node \"ha-694782-m03\" not found" node="ha-694782-m03"
	E0416 00:09:35.789370       1 gc_controller.go:153] "Failed to get node" err="node \"ha-694782-m03\" not found" node="ha-694782-m03"
	E0416 00:09:55.789567       1 gc_controller.go:153] "Failed to get node" err="node \"ha-694782-m03\" not found" node="ha-694782-m03"
	E0416 00:09:55.789625       1 gc_controller.go:153] "Failed to get node" err="node \"ha-694782-m03\" not found" node="ha-694782-m03"
	E0416 00:09:55.789639       1 gc_controller.go:153] "Failed to get node" err="node \"ha-694782-m03\" not found" node="ha-694782-m03"
	E0416 00:09:55.789649       1 gc_controller.go:153] "Failed to get node" err="node \"ha-694782-m03\" not found" node="ha-694782-m03"
	E0416 00:09:55.789657       1 gc_controller.go:153] "Failed to get node" err="node \"ha-694782-m03\" not found" node="ha-694782-m03"
	I0416 00:09:55.895108       1 event.go:376] "Event occurred" object="ha-694782-m04" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node ha-694782-m04 status is now: NodeNotReady"
	I0416 00:09:55.927937       1 event.go:376] "Event occurred" object="kube-system/kindnet-k6vbr" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0416 00:09:55.965745       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-mgwnv" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0416 00:09:55.999417       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-lxw6p" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0416 00:09:56.071419       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="68.186677ms"
	I0416 00:09:56.071650       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="79.317µs"
	
	
	==> kube-proxy [3c5fc74cb4adad4166f2600954b0cdf2132c9dfab6bcebaa4ca82a828c264cc9] <==
	I0416 00:06:42.858915       1 server_others.go:72] "Using iptables proxy"
	E0416 00:06:45.864857       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-694782\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0416 00:06:48.938054       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-694782\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0416 00:06:52.010357       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-694782\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0416 00:06:58.153144       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-694782\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0416 00:07:07.368849       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-694782\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0416 00:07:25.802672       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-694782\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0416 00:07:25.803209       1 server.go:1020] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	I0416 00:07:25.941476       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0416 00:07:25.941643       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0416 00:07:25.942453       1 server_others.go:168] "Using iptables Proxier"
	I0416 00:07:25.947634       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0416 00:07:25.948301       1 server.go:865] "Version info" version="v1.29.3"
	I0416 00:07:25.948413       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 00:07:25.950904       1 config.go:188] "Starting service config controller"
	I0416 00:07:25.952412       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0416 00:07:25.952676       1 config.go:97] "Starting endpoint slice config controller"
	I0416 00:07:25.952710       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0416 00:07:25.954092       1 config.go:315] "Starting node config controller"
	I0416 00:07:25.954153       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0416 00:07:26.053241       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0416 00:07:26.053349       1 shared_informer.go:318] Caches are synced for service config
	I0416 00:07:26.056248       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [b55cb00c20162f1cfd9e72b8001f61983630aeb30b827f36d39067dae5d359d7] <==
	E0416 00:03:38.472655       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1805": dial tcp 192.168.39.254:8443: connect: no route to host
	W0416 00:03:41.544468       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1896": dial tcp 192.168.39.254:8443: connect: no route to host
	E0416 00:03:41.544600       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1896": dial tcp 192.168.39.254:8443: connect: no route to host
	W0416 00:03:44.616420       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1805": dial tcp 192.168.39.254:8443: connect: no route to host
	E0416 00:03:44.616530       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1805": dial tcp 192.168.39.254:8443: connect: no route to host
	W0416 00:03:44.616699       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-694782&resourceVersion=1824": dial tcp 192.168.39.254:8443: connect: no route to host
	E0416 00:03:44.616762       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-694782&resourceVersion=1824": dial tcp 192.168.39.254:8443: connect: no route to host
	W0416 00:03:47.688819       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1896": dial tcp 192.168.39.254:8443: connect: no route to host
	E0416 00:03:47.688931       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1896": dial tcp 192.168.39.254:8443: connect: no route to host
	W0416 00:03:53.833870       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-694782&resourceVersion=1824": dial tcp 192.168.39.254:8443: connect: no route to host
	E0416 00:03:53.834157       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-694782&resourceVersion=1824": dial tcp 192.168.39.254:8443: connect: no route to host
	W0416 00:03:56.905187       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1805": dial tcp 192.168.39.254:8443: connect: no route to host
	E0416 00:03:56.905285       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1805": dial tcp 192.168.39.254:8443: connect: no route to host
	W0416 00:03:59.980235       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1896": dial tcp 192.168.39.254:8443: connect: no route to host
	E0416 00:03:59.980295       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1896": dial tcp 192.168.39.254:8443: connect: no route to host
	W0416 00:04:12.266432       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1805": dial tcp 192.168.39.254:8443: connect: no route to host
	E0416 00:04:12.266502       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1805": dial tcp 192.168.39.254:8443: connect: no route to host
	W0416 00:04:15.338197       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1896": dial tcp 192.168.39.254:8443: connect: no route to host
	E0416 00:04:15.338258       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1896": dial tcp 192.168.39.254:8443: connect: no route to host
	W0416 00:04:15.338403       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-694782&resourceVersion=1824": dial tcp 192.168.39.254:8443: connect: no route to host
	E0416 00:04:15.338459       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-694782&resourceVersion=1824": dial tcp 192.168.39.254:8443: connect: no route to host
	W0416 00:04:52.201502       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-694782&resourceVersion=1824": dial tcp 192.168.39.254:8443: connect: no route to host
	E0416 00:04:52.201909       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-694782&resourceVersion=1824": dial tcp 192.168.39.254:8443: connect: no route to host
	W0416 00:04:52.201865       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1805": dial tcp 192.168.39.254:8443: connect: no route to host
	E0416 00:04:52.201962       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1805": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [553d7f07f43e6f068bf41c8f0562f161939b4c2f6b1241c11c0db16309a6cbdf] <==
	E0416 00:04:51.481372       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0416 00:04:51.592406       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0416 00:04:51.592476       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0416 00:04:51.593272       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0416 00:04:51.593322       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0416 00:04:51.777335       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0416 00:04:51.777441       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0416 00:04:51.982388       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0416 00:04:51.982484       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0416 00:04:51.996801       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0416 00:04:51.996893       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0416 00:04:52.294230       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0416 00:04:52.294281       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0416 00:04:52.370669       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0416 00:04:52.370760       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0416 00:04:52.460441       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0416 00:04:52.460546       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0416 00:04:54.931842       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0416 00:04:54.931867       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0416 00:04:55.360249       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0416 00:04:55.360343       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0416 00:04:55.788958       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0416 00:04:55.794313       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0416 00:04:55.795184       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0416 00:04:55.802469       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [f1a812f61cab74eac095d0eaf25f044bcd3c0b40542f648850ed3358c4a9cf07] <==
	W0416 00:07:18.590806       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.41:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.41:8443: connect: connection refused
	E0416 00:07:18.590882       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.41:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.41:8443: connect: connection refused
	W0416 00:07:18.647067       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://192.168.39.41:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.41:8443: connect: connection refused
	E0416 00:07:18.647151       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.41:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.41:8443: connect: connection refused
	W0416 00:07:19.286748       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: Get "https://192.168.39.41:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.41:8443: connect: connection refused
	E0416 00:07:19.286866       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.41:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.41:8443: connect: connection refused
	W0416 00:07:19.972668       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: Get "https://192.168.39.41:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.41:8443: connect: connection refused
	E0416 00:07:19.972737       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.41:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.41:8443: connect: connection refused
	W0416 00:07:20.547074       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: Get "https://192.168.39.41:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.41:8443: connect: connection refused
	E0416 00:07:20.547138       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.41:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.41:8443: connect: connection refused
	W0416 00:07:22.640073       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0416 00:07:22.641024       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0416 00:07:22.640608       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0416 00:07:22.641264       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0416 00:07:22.640817       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0416 00:07:22.643294       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0416 00:07:22.640937       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0416 00:07:22.643420       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0416 00:07:22.643478       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0416 00:07:22.643422       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0416 00:07:38.034632       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0416 00:09:05.404500       1 framework.go:1244] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-lxw6p\": pod busybox-7fdf7869d9-lxw6p is already assigned to node \"ha-694782-m04\"" plugin="DefaultBinder" pod="default/busybox-7fdf7869d9-lxw6p" node="ha-694782-m04"
	E0416 00:09:05.406083       1 schedule_one.go:336] "scheduler cache ForgetPod failed" err="pod 998cb50d-8271-4d88-ba83-5cea3cdf1dfe(default/busybox-7fdf7869d9-lxw6p) wasn't assumed so cannot be forgotten"
	E0416 00:09:05.406556       1 schedule_one.go:1003] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7fdf7869d9-lxw6p\": pod busybox-7fdf7869d9-lxw6p is already assigned to node \"ha-694782-m04\"" pod="default/busybox-7fdf7869d9-lxw6p"
	I0416 00:09:05.406686       1 schedule_one.go:1016] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7fdf7869d9-lxw6p" node="ha-694782-m04"
	
	
	==> kubelet <==
	Apr 16 00:07:33 ha-694782 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 00:07:33 ha-694782 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 00:07:38 ha-694782 kubelet[1369]: I0416 00:07:38.588141    1369 scope.go:117] "RemoveContainer" containerID="5916fb4a53a136bf90f5630e11363bf461e58fe804cb1e286d54ad6d31f93c96"
	Apr 16 00:08:11 ha-694782 kubelet[1369]: I0416 00:08:11.587557    1369 kubelet.go:1903] "Trying to delete pod" pod="kube-system/kube-vip-ha-694782" podUID="a8ffb1b9-f55e-4efe-b9a1-7e58a341a2f0"
	Apr 16 00:08:11 ha-694782 kubelet[1369]: I0416 00:08:11.613949    1369 kubelet.go:1908] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-694782"
	Apr 16 00:08:33 ha-694782 kubelet[1369]: E0416 00:08:33.658907    1369 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 00:08:33 ha-694782 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 00:08:33 ha-694782 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 00:08:33 ha-694782 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 00:08:33 ha-694782 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 00:09:33 ha-694782 kubelet[1369]: E0416 00:09:33.658729    1369 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 00:09:33 ha-694782 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 00:09:33 ha-694782 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 00:09:33 ha-694782 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 00:09:33 ha-694782 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 00:10:33 ha-694782 kubelet[1369]: E0416 00:10:33.659442    1369 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 00:10:33 ha-694782 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 00:10:33 ha-694782 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 00:10:33 ha-694782 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 00:10:33 ha-694782 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 00:11:33 ha-694782 kubelet[1369]: E0416 00:11:33.656425    1369 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 00:11:33 ha-694782 kubelet[1369]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 00:11:33 ha-694782 kubelet[1369]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 00:11:33 ha-694782 kubelet[1369]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 00:11:33 ha-694782 kubelet[1369]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0416 00:11:42.146580   34131 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18647-7542/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-694782 -n ha-694782
helpers_test.go:261: (dbg) Run:  kubectl --context ha-694782 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.95s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (333.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-414194
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-414194
E0416 00:27:20.169290   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/client.crt: no such file or directory
E0416 00:28:58.680662   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/functional-596616/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-414194: exit status 82 (2m2.6869634s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-414194-m03"  ...
	* Stopping node "multinode-414194-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-414194" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-414194 --wait=true -v=8 --alsologtostderr
E0416 00:32:01.725053   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/functional-596616/client.crt: no such file or directory
E0416 00:32:20.169602   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-414194 --wait=true -v=8 --alsologtostderr: (3m28.694370336s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-414194
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-414194 -n multinode-414194
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-414194 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-414194 logs -n 25: (1.664843708s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	| ssh     | multinode-414194 ssh -n                                                                 | multinode-414194 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:26 UTC | 16 Apr 24 00:26 UTC |
	|         | multinode-414194-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-414194 cp multinode-414194-m02:/home/docker/cp-test.txt                       | multinode-414194 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:26 UTC | 16 Apr 24 00:26 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1982584427/001/cp-test_multinode-414194-m02.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-414194 ssh -n                                                                 | multinode-414194 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:26 UTC | 16 Apr 24 00:26 UTC |
	|         | multinode-414194-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-414194 cp multinode-414194-m02:/home/docker/cp-test.txt                       | multinode-414194 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:26 UTC | 16 Apr 24 00:26 UTC |
	|         | multinode-414194:/home/docker/cp-test_multinode-414194-m02_multinode-414194.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-414194 ssh -n                                                                 | multinode-414194 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:26 UTC | 16 Apr 24 00:26 UTC |
	|         | multinode-414194-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-414194 ssh -n multinode-414194 sudo cat                                       | multinode-414194 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:26 UTC | 16 Apr 24 00:26 UTC |
	|         | /home/docker/cp-test_multinode-414194-m02_multinode-414194.txt                          |                  |         |                |                     |                     |
	| cp      | multinode-414194 cp multinode-414194-m02:/home/docker/cp-test.txt                       | multinode-414194 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:26 UTC | 16 Apr 24 00:26 UTC |
	|         | multinode-414194-m03:/home/docker/cp-test_multinode-414194-m02_multinode-414194-m03.txt |                  |         |                |                     |                     |
	| ssh     | multinode-414194 ssh -n                                                                 | multinode-414194 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:26 UTC | 16 Apr 24 00:26 UTC |
	|         | multinode-414194-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-414194 ssh -n multinode-414194-m03 sudo cat                                   | multinode-414194 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:26 UTC | 16 Apr 24 00:26 UTC |
	|         | /home/docker/cp-test_multinode-414194-m02_multinode-414194-m03.txt                      |                  |         |                |                     |                     |
	| cp      | multinode-414194 cp testdata/cp-test.txt                                                | multinode-414194 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:26 UTC | 16 Apr 24 00:26 UTC |
	|         | multinode-414194-m03:/home/docker/cp-test.txt                                           |                  |         |                |                     |                     |
	| ssh     | multinode-414194 ssh -n                                                                 | multinode-414194 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:26 UTC | 16 Apr 24 00:26 UTC |
	|         | multinode-414194-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-414194 cp multinode-414194-m03:/home/docker/cp-test.txt                       | multinode-414194 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:26 UTC | 16 Apr 24 00:26 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1982584427/001/cp-test_multinode-414194-m03.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-414194 ssh -n                                                                 | multinode-414194 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:26 UTC | 16 Apr 24 00:26 UTC |
	|         | multinode-414194-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-414194 cp multinode-414194-m03:/home/docker/cp-test.txt                       | multinode-414194 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:26 UTC | 16 Apr 24 00:26 UTC |
	|         | multinode-414194:/home/docker/cp-test_multinode-414194-m03_multinode-414194.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-414194 ssh -n                                                                 | multinode-414194 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:26 UTC | 16 Apr 24 00:26 UTC |
	|         | multinode-414194-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-414194 ssh -n multinode-414194 sudo cat                                       | multinode-414194 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:26 UTC | 16 Apr 24 00:26 UTC |
	|         | /home/docker/cp-test_multinode-414194-m03_multinode-414194.txt                          |                  |         |                |                     |                     |
	| cp      | multinode-414194 cp multinode-414194-m03:/home/docker/cp-test.txt                       | multinode-414194 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:26 UTC | 16 Apr 24 00:26 UTC |
	|         | multinode-414194-m02:/home/docker/cp-test_multinode-414194-m03_multinode-414194-m02.txt |                  |         |                |                     |                     |
	| ssh     | multinode-414194 ssh -n                                                                 | multinode-414194 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:26 UTC | 16 Apr 24 00:26 UTC |
	|         | multinode-414194-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-414194 ssh -n multinode-414194-m02 sudo cat                                   | multinode-414194 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:26 UTC | 16 Apr 24 00:26 UTC |
	|         | /home/docker/cp-test_multinode-414194-m03_multinode-414194-m02.txt                      |                  |         |                |                     |                     |
	| node    | multinode-414194 node stop m03                                                          | multinode-414194 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:26 UTC | 16 Apr 24 00:26 UTC |
	| node    | multinode-414194 node start                                                             | multinode-414194 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:26 UTC | 16 Apr 24 00:27 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |                |                     |                     |
	| node    | list -p multinode-414194                                                                | multinode-414194 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:27 UTC |                     |
	| stop    | -p multinode-414194                                                                     | multinode-414194 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:27 UTC |                     |
	| start   | -p multinode-414194                                                                     | multinode-414194 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:29 UTC | 16 Apr 24 00:32 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |                |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |                |                     |                     |
	| node    | list -p multinode-414194                                                                | multinode-414194 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:32 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 00:29:04
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 00:29:04.611777   44065 out.go:291] Setting OutFile to fd 1 ...
	I0416 00:29:04.612028   44065 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:29:04.612037   44065 out.go:304] Setting ErrFile to fd 2...
	I0416 00:29:04.612040   44065 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:29:04.612193   44065 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-7542/.minikube/bin
	I0416 00:29:04.612691   44065 out.go:298] Setting JSON to false
	I0416 00:29:04.613591   44065 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4289,"bootTime":1713223056,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0416 00:29:04.613651   44065 start.go:139] virtualization: kvm guest
	I0416 00:29:04.615948   44065 out.go:177] * [multinode-414194] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0416 00:29:04.617130   44065 notify.go:220] Checking for updates...
	I0416 00:29:04.617143   44065 out.go:177]   - MINIKUBE_LOCATION=18647
	I0416 00:29:04.618373   44065 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 00:29:04.619680   44065 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0416 00:29:04.620944   44065 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18647-7542/.minikube
	I0416 00:29:04.622283   44065 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0416 00:29:04.623752   44065 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 00:29:04.625626   44065 config.go:182] Loaded profile config "multinode-414194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 00:29:04.625785   44065 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 00:29:04.626414   44065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:29:04.626468   44065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:29:04.641083   44065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34345
	I0416 00:29:04.641599   44065 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:29:04.642240   44065 main.go:141] libmachine: Using API Version  1
	I0416 00:29:04.642269   44065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:29:04.642573   44065 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:29:04.642739   44065 main.go:141] libmachine: (multinode-414194) Calling .DriverName
	I0416 00:29:04.678275   44065 out.go:177] * Using the kvm2 driver based on existing profile
	I0416 00:29:04.679632   44065 start.go:297] selected driver: kvm2
	I0416 00:29:04.679642   44065 start.go:901] validating driver "kvm2" against &{Name:multinode-414194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.29.3 ClusterName:multinode-414194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.140 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.81 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.64 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingre
ss-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 00:29:04.679788   44065 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 00:29:04.680081   44065 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 00:29:04.680145   44065 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18647-7542/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0416 00:29:04.694722   44065 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0416 00:29:04.695357   44065 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 00:29:04.695441   44065 cni.go:84] Creating CNI manager for ""
	I0416 00:29:04.695453   44065 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0416 00:29:04.695503   44065 start.go:340] cluster config:
	{Name:multinode-414194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-414194 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.140 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.81 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.64 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false ko
ng:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 00:29:04.695621   44065 iso.go:125] acquiring lock: {Name:mk848ef90fbc2a1876645fc8fc16af382c3bcaa9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 00:29:04.697334   44065 out.go:177] * Starting "multinode-414194" primary control-plane node in "multinode-414194" cluster
	I0416 00:29:04.698500   44065 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 00:29:04.698532   44065 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0416 00:29:04.698553   44065 cache.go:56] Caching tarball of preloaded images
	I0416 00:29:04.698637   44065 preload.go:173] Found /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0416 00:29:04.698654   44065 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0416 00:29:04.698802   44065 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/multinode-414194/config.json ...
	I0416 00:29:04.699032   44065 start.go:360] acquireMachinesLock for multinode-414194: {Name:mk92bff49461487f8cebf2747ccf61ccb9c772a2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 00:29:04.699093   44065 start.go:364] duration metric: took 41.155µs to acquireMachinesLock for "multinode-414194"
	I0416 00:29:04.699113   44065 start.go:96] Skipping create...Using existing machine configuration
	I0416 00:29:04.699128   44065 fix.go:54] fixHost starting: 
	I0416 00:29:04.699522   44065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:29:04.699568   44065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:29:04.713406   44065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44193
	I0416 00:29:04.713897   44065 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:29:04.714349   44065 main.go:141] libmachine: Using API Version  1
	I0416 00:29:04.714373   44065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:29:04.714715   44065 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:29:04.714903   44065 main.go:141] libmachine: (multinode-414194) Calling .DriverName
	I0416 00:29:04.715086   44065 main.go:141] libmachine: (multinode-414194) Calling .GetState
	I0416 00:29:04.716669   44065 fix.go:112] recreateIfNeeded on multinode-414194: state=Running err=<nil>
	W0416 00:29:04.716683   44065 fix.go:138] unexpected machine state, will restart: <nil>
	I0416 00:29:04.718656   44065 out.go:177] * Updating the running kvm2 "multinode-414194" VM ...
	I0416 00:29:04.720105   44065 machine.go:94] provisionDockerMachine start ...
	I0416 00:29:04.720128   44065 main.go:141] libmachine: (multinode-414194) Calling .DriverName
	I0416 00:29:04.720333   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHHostname
	I0416 00:29:04.722818   44065 main.go:141] libmachine: (multinode-414194) DBG | domain multinode-414194 has defined MAC address 52:54:00:13:26:d7 in network mk-multinode-414194
	I0416 00:29:04.723230   44065 main.go:141] libmachine: (multinode-414194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:26:d7", ip: ""} in network mk-multinode-414194: {Iface:virbr1 ExpiryTime:2024-04-16 01:23:38 +0000 UTC Type:0 Mac:52:54:00:13:26:d7 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:multinode-414194 Clientid:01:52:54:00:13:26:d7}
	I0416 00:29:04.723258   44065 main.go:141] libmachine: (multinode-414194) DBG | domain multinode-414194 has defined IP address 192.168.39.140 and MAC address 52:54:00:13:26:d7 in network mk-multinode-414194
	I0416 00:29:04.723406   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHPort
	I0416 00:29:04.723585   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHKeyPath
	I0416 00:29:04.723724   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHKeyPath
	I0416 00:29:04.723840   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHUsername
	I0416 00:29:04.724008   44065 main.go:141] libmachine: Using SSH client type: native
	I0416 00:29:04.724187   44065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0416 00:29:04.724199   44065 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 00:29:04.842386   44065 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-414194
	
	I0416 00:29:04.842418   44065 main.go:141] libmachine: (multinode-414194) Calling .GetMachineName
	I0416 00:29:04.842684   44065 buildroot.go:166] provisioning hostname "multinode-414194"
	I0416 00:29:04.842706   44065 main.go:141] libmachine: (multinode-414194) Calling .GetMachineName
	I0416 00:29:04.842888   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHHostname
	I0416 00:29:04.845345   44065 main.go:141] libmachine: (multinode-414194) DBG | domain multinode-414194 has defined MAC address 52:54:00:13:26:d7 in network mk-multinode-414194
	I0416 00:29:04.845769   44065 main.go:141] libmachine: (multinode-414194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:26:d7", ip: ""} in network mk-multinode-414194: {Iface:virbr1 ExpiryTime:2024-04-16 01:23:38 +0000 UTC Type:0 Mac:52:54:00:13:26:d7 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:multinode-414194 Clientid:01:52:54:00:13:26:d7}
	I0416 00:29:04.845809   44065 main.go:141] libmachine: (multinode-414194) DBG | domain multinode-414194 has defined IP address 192.168.39.140 and MAC address 52:54:00:13:26:d7 in network mk-multinode-414194
	I0416 00:29:04.845931   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHPort
	I0416 00:29:04.846092   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHKeyPath
	I0416 00:29:04.846238   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHKeyPath
	I0416 00:29:04.846360   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHUsername
	I0416 00:29:04.846497   44065 main.go:141] libmachine: Using SSH client type: native
	I0416 00:29:04.846708   44065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0416 00:29:04.846731   44065 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-414194 && echo "multinode-414194" | sudo tee /etc/hostname
	I0416 00:29:04.979172   44065 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-414194
	
	I0416 00:29:04.979206   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHHostname
	I0416 00:29:04.982200   44065 main.go:141] libmachine: (multinode-414194) DBG | domain multinode-414194 has defined MAC address 52:54:00:13:26:d7 in network mk-multinode-414194
	I0416 00:29:04.982586   44065 main.go:141] libmachine: (multinode-414194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:26:d7", ip: ""} in network mk-multinode-414194: {Iface:virbr1 ExpiryTime:2024-04-16 01:23:38 +0000 UTC Type:0 Mac:52:54:00:13:26:d7 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:multinode-414194 Clientid:01:52:54:00:13:26:d7}
	I0416 00:29:04.982618   44065 main.go:141] libmachine: (multinode-414194) DBG | domain multinode-414194 has defined IP address 192.168.39.140 and MAC address 52:54:00:13:26:d7 in network mk-multinode-414194
	I0416 00:29:04.982847   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHPort
	I0416 00:29:04.983032   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHKeyPath
	I0416 00:29:04.983201   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHKeyPath
	I0416 00:29:04.983322   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHUsername
	I0416 00:29:04.983496   44065 main.go:141] libmachine: Using SSH client type: native
	I0416 00:29:04.983710   44065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0416 00:29:04.983733   44065 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-414194' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-414194/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-414194' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 00:29:05.102097   44065 main.go:141] libmachine: SSH cmd err, output: <nil>: 
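	The SSH snippet above keeps the 127.0.1.1 entry idempotent: it only touches /etc/hosts when the hostname is not already present, and either rewrites the existing 127.0.1.1 line or appends a new one. A minimal standalone sketch of the same logic, assuming it is run directly on the guest by a user with sudo rights (hostname value taken from the log above):

	  HOSTNAME=multinode-414194
	  if ! grep -q "\s${HOSTNAME}$" /etc/hosts; then            # hostname entry missing entirely?
	    if grep -q '^127\.0\.1\.1\s' /etc/hosts; then           # stale 127.0.1.1 line present
	      sudo sed -i "s/^127\.0\.1\.1\s.*/127.0.1.1 ${HOSTNAME}/" /etc/hosts
	    else
	      echo "127.0.1.1 ${HOSTNAME}" | sudo tee -a /etc/hosts
	    fi
	  fi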
	I0416 00:29:05.102128   44065 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18647-7542/.minikube CaCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18647-7542/.minikube}
	I0416 00:29:05.102183   44065 buildroot.go:174] setting up certificates
	I0416 00:29:05.102196   44065 provision.go:84] configureAuth start
	I0416 00:29:05.102209   44065 main.go:141] libmachine: (multinode-414194) Calling .GetMachineName
	I0416 00:29:05.102498   44065 main.go:141] libmachine: (multinode-414194) Calling .GetIP
	I0416 00:29:05.105308   44065 main.go:141] libmachine: (multinode-414194) DBG | domain multinode-414194 has defined MAC address 52:54:00:13:26:d7 in network mk-multinode-414194
	I0416 00:29:05.105684   44065 main.go:141] libmachine: (multinode-414194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:26:d7", ip: ""} in network mk-multinode-414194: {Iface:virbr1 ExpiryTime:2024-04-16 01:23:38 +0000 UTC Type:0 Mac:52:54:00:13:26:d7 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:multinode-414194 Clientid:01:52:54:00:13:26:d7}
	I0416 00:29:05.105710   44065 main.go:141] libmachine: (multinode-414194) DBG | domain multinode-414194 has defined IP address 192.168.39.140 and MAC address 52:54:00:13:26:d7 in network mk-multinode-414194
	I0416 00:29:05.105854   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHHostname
	I0416 00:29:05.108148   44065 main.go:141] libmachine: (multinode-414194) DBG | domain multinode-414194 has defined MAC address 52:54:00:13:26:d7 in network mk-multinode-414194
	I0416 00:29:05.108548   44065 main.go:141] libmachine: (multinode-414194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:26:d7", ip: ""} in network mk-multinode-414194: {Iface:virbr1 ExpiryTime:2024-04-16 01:23:38 +0000 UTC Type:0 Mac:52:54:00:13:26:d7 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:multinode-414194 Clientid:01:52:54:00:13:26:d7}
	I0416 00:29:05.108578   44065 main.go:141] libmachine: (multinode-414194) DBG | domain multinode-414194 has defined IP address 192.168.39.140 and MAC address 52:54:00:13:26:d7 in network mk-multinode-414194
	I0416 00:29:05.108646   44065 provision.go:143] copyHostCerts
	I0416 00:29:05.108680   44065 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem
	I0416 00:29:05.108719   44065 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem, removing ...
	I0416 00:29:05.108739   44065 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem
	I0416 00:29:05.108833   44065 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem (1082 bytes)
	I0416 00:29:05.108931   44065 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem
	I0416 00:29:05.108950   44065 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem, removing ...
	I0416 00:29:05.108954   44065 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem
	I0416 00:29:05.108983   44065 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem (1123 bytes)
	I0416 00:29:05.109037   44065 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem
	I0416 00:29:05.109052   44065 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem, removing ...
	I0416 00:29:05.109059   44065 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem
	I0416 00:29:05.109079   44065 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem (1675 bytes)
	I0416 00:29:05.109134   44065 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem org=jenkins.multinode-414194 san=[127.0.0.1 192.168.39.140 localhost minikube multinode-414194]
	I0416 00:29:05.233267   44065 provision.go:177] copyRemoteCerts
	I0416 00:29:05.233325   44065 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 00:29:05.233359   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHHostname
	I0416 00:29:05.236053   44065 main.go:141] libmachine: (multinode-414194) DBG | domain multinode-414194 has defined MAC address 52:54:00:13:26:d7 in network mk-multinode-414194
	I0416 00:29:05.236396   44065 main.go:141] libmachine: (multinode-414194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:26:d7", ip: ""} in network mk-multinode-414194: {Iface:virbr1 ExpiryTime:2024-04-16 01:23:38 +0000 UTC Type:0 Mac:52:54:00:13:26:d7 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:multinode-414194 Clientid:01:52:54:00:13:26:d7}
	I0416 00:29:05.236422   44065 main.go:141] libmachine: (multinode-414194) DBG | domain multinode-414194 has defined IP address 192.168.39.140 and MAC address 52:54:00:13:26:d7 in network mk-multinode-414194
	I0416 00:29:05.236653   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHPort
	I0416 00:29:05.236821   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHKeyPath
	I0416 00:29:05.236979   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHUsername
	I0416 00:29:05.237128   44065 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/multinode-414194/id_rsa Username:docker}
	I0416 00:29:05.331778   44065 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0416 00:29:05.331838   44065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0416 00:29:05.359491   44065 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0416 00:29:05.359555   44065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0416 00:29:05.386268   44065 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0416 00:29:05.386335   44065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0416 00:29:05.413265   44065 provision.go:87] duration metric: took 311.057324ms to configureAuth
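	configureAuth regenerates the server certificate and scp's ca.pem, server.pem and server-key.pem into /etc/docker on the node. A hedged sketch of how one might spot-check the result from the host (assumes the profile name from this run and an openssl binary inside the guest image; the exact paths are the ones logged above):

	  out/minikube-linux-amd64 -p multinode-414194 ssh "sudo ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem"
	  # subject and expiry of the freshly generated server certificate
	  out/minikube-linux-amd64 -p multinode-414194 ssh "sudo openssl x509 -in /etc/docker/server.pem -noout -subject -enddate"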
	I0416 00:29:05.413292   44065 buildroot.go:189] setting minikube options for container-runtime
	I0416 00:29:05.413504   44065 config.go:182] Loaded profile config "multinode-414194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 00:29:05.413578   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHHostname
	I0416 00:29:05.416287   44065 main.go:141] libmachine: (multinode-414194) DBG | domain multinode-414194 has defined MAC address 52:54:00:13:26:d7 in network mk-multinode-414194
	I0416 00:29:05.416649   44065 main.go:141] libmachine: (multinode-414194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:26:d7", ip: ""} in network mk-multinode-414194: {Iface:virbr1 ExpiryTime:2024-04-16 01:23:38 +0000 UTC Type:0 Mac:52:54:00:13:26:d7 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:multinode-414194 Clientid:01:52:54:00:13:26:d7}
	I0416 00:29:05.416679   44065 main.go:141] libmachine: (multinode-414194) DBG | domain multinode-414194 has defined IP address 192.168.39.140 and MAC address 52:54:00:13:26:d7 in network mk-multinode-414194
	I0416 00:29:05.416878   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHPort
	I0416 00:29:05.417070   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHKeyPath
	I0416 00:29:05.417267   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHKeyPath
	I0416 00:29:05.417472   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHUsername
	I0416 00:29:05.417730   44065 main.go:141] libmachine: Using SSH client type: native
	I0416 00:29:05.417901   44065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0416 00:29:05.417917   44065 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0416 00:30:36.238081   44065 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0416 00:30:36.238109   44065 machine.go:97] duration metric: took 1m31.517984271s to provisionDockerMachine
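	The last provisioning step above writes a sysconfig drop-in so CRI-O treats the service CIDR as an insecure registry and then restarts the runtime; the %!s(MISSING) in the logged command is a logging artifact of the format verb, and the actual payload is the CRIO_MINIKUBE_OPTIONS line echoed back in the command output. Expanded, the step amounts to:

	  sudo mkdir -p /etc/sysconfig
	  printf "\nCRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" \
	    | sudo tee /etc/sysconfig/crio.minikube
	  sudo systemctl restart crio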
	I0416 00:30:36.238131   44065 start.go:293] postStartSetup for "multinode-414194" (driver="kvm2")
	I0416 00:30:36.238182   44065 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 00:30:36.238209   44065 main.go:141] libmachine: (multinode-414194) Calling .DriverName
	I0416 00:30:36.238555   44065 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 00:30:36.238585   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHHostname
	I0416 00:30:36.242029   44065 main.go:141] libmachine: (multinode-414194) DBG | domain multinode-414194 has defined MAC address 52:54:00:13:26:d7 in network mk-multinode-414194
	I0416 00:30:36.242631   44065 main.go:141] libmachine: (multinode-414194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:26:d7", ip: ""} in network mk-multinode-414194: {Iface:virbr1 ExpiryTime:2024-04-16 01:23:38 +0000 UTC Type:0 Mac:52:54:00:13:26:d7 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:multinode-414194 Clientid:01:52:54:00:13:26:d7}
	I0416 00:30:36.242656   44065 main.go:141] libmachine: (multinode-414194) DBG | domain multinode-414194 has defined IP address 192.168.39.140 and MAC address 52:54:00:13:26:d7 in network mk-multinode-414194
	I0416 00:30:36.242878   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHPort
	I0416 00:30:36.243042   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHKeyPath
	I0416 00:30:36.243238   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHUsername
	I0416 00:30:36.243372   44065 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/multinode-414194/id_rsa Username:docker}
	I0416 00:30:36.333914   44065 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 00:30:36.338417   44065 command_runner.go:130] > NAME=Buildroot
	I0416 00:30:36.338442   44065 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0416 00:30:36.338447   44065 command_runner.go:130] > ID=buildroot
	I0416 00:30:36.338455   44065 command_runner.go:130] > VERSION_ID=2023.02.9
	I0416 00:30:36.338462   44065 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0416 00:30:36.338566   44065 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 00:30:36.338589   44065 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/addons for local assets ...
	I0416 00:30:36.338654   44065 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/files for local assets ...
	I0416 00:30:36.338742   44065 filesync.go:149] local asset: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem -> 148972.pem in /etc/ssl/certs
	I0416 00:30:36.338752   44065 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem -> /etc/ssl/certs/148972.pem
	I0416 00:30:36.338875   44065 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 00:30:36.349499   44065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /etc/ssl/certs/148972.pem (1708 bytes)
	I0416 00:30:36.374108   44065 start.go:296] duration metric: took 135.964233ms for postStartSetup
	I0416 00:30:36.374149   44065 fix.go:56] duration metric: took 1m31.675027259s for fixHost
	I0416 00:30:36.374171   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHHostname
	I0416 00:30:36.376661   44065 main.go:141] libmachine: (multinode-414194) DBG | domain multinode-414194 has defined MAC address 52:54:00:13:26:d7 in network mk-multinode-414194
	I0416 00:30:36.377067   44065 main.go:141] libmachine: (multinode-414194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:26:d7", ip: ""} in network mk-multinode-414194: {Iface:virbr1 ExpiryTime:2024-04-16 01:23:38 +0000 UTC Type:0 Mac:52:54:00:13:26:d7 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:multinode-414194 Clientid:01:52:54:00:13:26:d7}
	I0416 00:30:36.377113   44065 main.go:141] libmachine: (multinode-414194) DBG | domain multinode-414194 has defined IP address 192.168.39.140 and MAC address 52:54:00:13:26:d7 in network mk-multinode-414194
	I0416 00:30:36.377231   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHPort
	I0416 00:30:36.377443   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHKeyPath
	I0416 00:30:36.377603   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHKeyPath
	I0416 00:30:36.377736   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHUsername
	I0416 00:30:36.377904   44065 main.go:141] libmachine: Using SSH client type: native
	I0416 00:30:36.378107   44065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0416 00:30:36.378123   44065 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 00:30:36.494421   44065 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713227436.475672000
	
	I0416 00:30:36.494443   44065 fix.go:216] guest clock: 1713227436.475672000
	I0416 00:30:36.494449   44065 fix.go:229] Guest: 2024-04-16 00:30:36.475672 +0000 UTC Remote: 2024-04-16 00:30:36.37415442 +0000 UTC m=+91.808381284 (delta=101.51758ms)
	I0416 00:30:36.494465   44065 fix.go:200] guest clock delta is within tolerance: 101.51758ms
	I0416 00:30:36.494470   44065 start.go:83] releasing machines lock for "multinode-414194", held for 1m31.795365085s
	I0416 00:30:36.494486   44065 main.go:141] libmachine: (multinode-414194) Calling .DriverName
	I0416 00:30:36.494732   44065 main.go:141] libmachine: (multinode-414194) Calling .GetIP
	I0416 00:30:36.497442   44065 main.go:141] libmachine: (multinode-414194) DBG | domain multinode-414194 has defined MAC address 52:54:00:13:26:d7 in network mk-multinode-414194
	I0416 00:30:36.497789   44065 main.go:141] libmachine: (multinode-414194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:26:d7", ip: ""} in network mk-multinode-414194: {Iface:virbr1 ExpiryTime:2024-04-16 01:23:38 +0000 UTC Type:0 Mac:52:54:00:13:26:d7 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:multinode-414194 Clientid:01:52:54:00:13:26:d7}
	I0416 00:30:36.497819   44065 main.go:141] libmachine: (multinode-414194) DBG | domain multinode-414194 has defined IP address 192.168.39.140 and MAC address 52:54:00:13:26:d7 in network mk-multinode-414194
	I0416 00:30:36.497932   44065 main.go:141] libmachine: (multinode-414194) Calling .DriverName
	I0416 00:30:36.498427   44065 main.go:141] libmachine: (multinode-414194) Calling .DriverName
	I0416 00:30:36.498569   44065 main.go:141] libmachine: (multinode-414194) Calling .DriverName
	I0416 00:30:36.498655   44065 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 00:30:36.498702   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHHostname
	I0416 00:30:36.498757   44065 ssh_runner.go:195] Run: cat /version.json
	I0416 00:30:36.498776   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHHostname
	I0416 00:30:36.501183   44065 main.go:141] libmachine: (multinode-414194) DBG | domain multinode-414194 has defined MAC address 52:54:00:13:26:d7 in network mk-multinode-414194
	I0416 00:30:36.501317   44065 main.go:141] libmachine: (multinode-414194) DBG | domain multinode-414194 has defined MAC address 52:54:00:13:26:d7 in network mk-multinode-414194
	I0416 00:30:36.501568   44065 main.go:141] libmachine: (multinode-414194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:26:d7", ip: ""} in network mk-multinode-414194: {Iface:virbr1 ExpiryTime:2024-04-16 01:23:38 +0000 UTC Type:0 Mac:52:54:00:13:26:d7 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:multinode-414194 Clientid:01:52:54:00:13:26:d7}
	I0416 00:30:36.501595   44065 main.go:141] libmachine: (multinode-414194) DBG | domain multinode-414194 has defined IP address 192.168.39.140 and MAC address 52:54:00:13:26:d7 in network mk-multinode-414194
	I0416 00:30:36.501690   44065 main.go:141] libmachine: (multinode-414194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:26:d7", ip: ""} in network mk-multinode-414194: {Iface:virbr1 ExpiryTime:2024-04-16 01:23:38 +0000 UTC Type:0 Mac:52:54:00:13:26:d7 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:multinode-414194 Clientid:01:52:54:00:13:26:d7}
	I0416 00:30:36.501698   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHPort
	I0416 00:30:36.501713   44065 main.go:141] libmachine: (multinode-414194) DBG | domain multinode-414194 has defined IP address 192.168.39.140 and MAC address 52:54:00:13:26:d7 in network mk-multinode-414194
	I0416 00:30:36.501891   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHKeyPath
	I0416 00:30:36.501910   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHPort
	I0416 00:30:36.502066   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHUsername
	I0416 00:30:36.502069   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHKeyPath
	I0416 00:30:36.502265   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHUsername
	I0416 00:30:36.502261   44065 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/multinode-414194/id_rsa Username:docker}
	I0416 00:30:36.502401   44065 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/multinode-414194/id_rsa Username:docker}
	I0416 00:30:36.616889   44065 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0416 00:30:36.616966   44065 command_runner.go:130] > {"iso_version": "v1.33.0-1713175573-18634", "kicbase_version": "v0.0.43-1712854342-18621", "minikube_version": "v1.33.0-beta.0", "commit": "0ece0b4c602cbaab0821f0ba2d6ec4a07a392655"}
	I0416 00:30:36.617070   44065 ssh_runner.go:195] Run: systemctl --version
	I0416 00:30:36.623181   44065 command_runner.go:130] > systemd 252 (252)
	I0416 00:30:36.623209   44065 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0416 00:30:36.623479   44065 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0416 00:30:36.783597   44065 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0416 00:30:36.792651   44065 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0416 00:30:36.792741   44065 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 00:30:36.792799   44065 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 00:30:36.802740   44065 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0416 00:30:36.802761   44065 start.go:494] detecting cgroup driver to use...
	I0416 00:30:36.802830   44065 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 00:30:36.820329   44065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 00:30:36.835812   44065 docker.go:217] disabling cri-docker service (if available) ...
	I0416 00:30:36.835864   44065 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 00:30:36.851450   44065 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 00:30:36.866490   44065 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 00:30:37.009540   44065 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 00:30:37.153556   44065 docker.go:233] disabling docker service ...
	I0416 00:30:37.153614   44065 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 00:30:37.170696   44065 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 00:30:37.191023   44065 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 00:30:37.380285   44065 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 00:30:37.550184   44065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0416 00:30:37.566371   44065 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 00:30:37.586211   44065 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
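	With cri-docker and docker stopped and masked above, crictl is pointed at the CRI-O socket via /etc/crictl.yaml. The logged steps reduce to the following sketch (endpoint value taken from the log; assumes the default CRI-O socket path):

	  printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	  sudo crictl version    # should report RuntimeName: cri-o, as in the version check later in this log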
	I0416 00:30:37.586814   44065 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0416 00:30:37.586887   44065 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:30:37.597670   44065 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 00:30:37.597736   44065 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:30:37.608584   44065 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:30:37.619877   44065 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:30:37.631362   44065 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 00:30:37.643227   44065 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:30:37.655073   44065 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:30:37.667756   44065 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:30:37.679019   44065 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 00:30:37.688890   44065 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0416 00:30:37.688987   44065 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 00:30:37.698889   44065 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 00:30:37.834034   44065 ssh_runner.go:195] Run: sudo systemctl restart crio
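	The series of sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroupfs cgroup manager, conmon cgroup, and the unprivileged-port sysctl. A quick way to confirm the drop-in picked them up from inside the guest, as a sketch (the expected values are reconstructed from the sed expressions, not from a dump of the actual file):

	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	  # pause_image = "registry.k8s.io/pause:3.9"
	  # cgroup_manager = "cgroupfs"
	  # conmon_cgroup = "pod"
	  #   "net.ipv4.ip_unprivileged_port_start=0",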
	I0416 00:30:38.110619   44065 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0416 00:30:38.110697   44065 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0416 00:30:38.115746   44065 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0416 00:30:38.115776   44065 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0416 00:30:38.115786   44065 command_runner.go:130] > Device: 0,22	Inode: 1384        Links: 1
	I0416 00:30:38.115795   44065 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0416 00:30:38.115807   44065 command_runner.go:130] > Access: 2024-04-16 00:30:37.959207173 +0000
	I0416 00:30:38.115817   44065 command_runner.go:130] > Modify: 2024-04-16 00:30:37.959207173 +0000
	I0416 00:30:38.115824   44065 command_runner.go:130] > Change: 2024-04-16 00:30:37.959207173 +0000
	I0416 00:30:38.115829   44065 command_runner.go:130] >  Birth: -
	I0416 00:30:38.115847   44065 start.go:562] Will wait 60s for crictl version
	I0416 00:30:38.115894   44065 ssh_runner.go:195] Run: which crictl
	I0416 00:30:38.119884   44065 command_runner.go:130] > /usr/bin/crictl
	I0416 00:30:38.119961   44065 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 00:30:38.165637   44065 command_runner.go:130] > Version:  0.1.0
	I0416 00:30:38.165659   44065 command_runner.go:130] > RuntimeName:  cri-o
	I0416 00:30:38.165727   44065 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0416 00:30:38.165766   44065 command_runner.go:130] > RuntimeApiVersion:  v1
	I0416 00:30:38.167033   44065 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0416 00:30:38.167114   44065 ssh_runner.go:195] Run: crio --version
	I0416 00:30:38.195845   44065 command_runner.go:130] > crio version 1.29.1
	I0416 00:30:38.195871   44065 command_runner.go:130] > Version:        1.29.1
	I0416 00:30:38.195881   44065 command_runner.go:130] > GitCommit:      unknown
	I0416 00:30:38.195887   44065 command_runner.go:130] > GitCommitDate:  unknown
	I0416 00:30:38.195891   44065 command_runner.go:130] > GitTreeState:   clean
	I0416 00:30:38.195897   44065 command_runner.go:130] > BuildDate:      2024-04-15T15:42:51Z
	I0416 00:30:38.195901   44065 command_runner.go:130] > GoVersion:      go1.21.6
	I0416 00:30:38.195905   44065 command_runner.go:130] > Compiler:       gc
	I0416 00:30:38.195909   44065 command_runner.go:130] > Platform:       linux/amd64
	I0416 00:30:38.195914   44065 command_runner.go:130] > Linkmode:       dynamic
	I0416 00:30:38.195921   44065 command_runner.go:130] > BuildTags:      
	I0416 00:30:38.195932   44065 command_runner.go:130] >   containers_image_ostree_stub
	I0416 00:30:38.195939   44065 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0416 00:30:38.195947   44065 command_runner.go:130] >   btrfs_noversion
	I0416 00:30:38.195954   44065 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0416 00:30:38.195967   44065 command_runner.go:130] >   libdm_no_deferred_remove
	I0416 00:30:38.195972   44065 command_runner.go:130] >   seccomp
	I0416 00:30:38.195977   44065 command_runner.go:130] > LDFlags:          unknown
	I0416 00:30:38.195982   44065 command_runner.go:130] > SeccompEnabled:   true
	I0416 00:30:38.195986   44065 command_runner.go:130] > AppArmorEnabled:  false
	I0416 00:30:38.196066   44065 ssh_runner.go:195] Run: crio --version
	I0416 00:30:38.227996   44065 command_runner.go:130] > crio version 1.29.1
	I0416 00:30:38.228018   44065 command_runner.go:130] > Version:        1.29.1
	I0416 00:30:38.228024   44065 command_runner.go:130] > GitCommit:      unknown
	I0416 00:30:38.228028   44065 command_runner.go:130] > GitCommitDate:  unknown
	I0416 00:30:38.228046   44065 command_runner.go:130] > GitTreeState:   clean
	I0416 00:30:38.228052   44065 command_runner.go:130] > BuildDate:      2024-04-15T15:42:51Z
	I0416 00:30:38.228057   44065 command_runner.go:130] > GoVersion:      go1.21.6
	I0416 00:30:38.228062   44065 command_runner.go:130] > Compiler:       gc
	I0416 00:30:38.228066   44065 command_runner.go:130] > Platform:       linux/amd64
	I0416 00:30:38.228071   44065 command_runner.go:130] > Linkmode:       dynamic
	I0416 00:30:38.228076   44065 command_runner.go:130] > BuildTags:      
	I0416 00:30:38.228081   44065 command_runner.go:130] >   containers_image_ostree_stub
	I0416 00:30:38.228085   44065 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0416 00:30:38.228092   44065 command_runner.go:130] >   btrfs_noversion
	I0416 00:30:38.228096   44065 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0416 00:30:38.228101   44065 command_runner.go:130] >   libdm_no_deferred_remove
	I0416 00:30:38.228104   44065 command_runner.go:130] >   seccomp
	I0416 00:30:38.228109   44065 command_runner.go:130] > LDFlags:          unknown
	I0416 00:30:38.228113   44065 command_runner.go:130] > SeccompEnabled:   true
	I0416 00:30:38.228118   44065 command_runner.go:130] > AppArmorEnabled:  false
	I0416 00:30:38.231733   44065 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0416 00:30:38.233308   44065 main.go:141] libmachine: (multinode-414194) Calling .GetIP
	I0416 00:30:38.235915   44065 main.go:141] libmachine: (multinode-414194) DBG | domain multinode-414194 has defined MAC address 52:54:00:13:26:d7 in network mk-multinode-414194
	I0416 00:30:38.236292   44065 main.go:141] libmachine: (multinode-414194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:26:d7", ip: ""} in network mk-multinode-414194: {Iface:virbr1 ExpiryTime:2024-04-16 01:23:38 +0000 UTC Type:0 Mac:52:54:00:13:26:d7 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:multinode-414194 Clientid:01:52:54:00:13:26:d7}
	I0416 00:30:38.236313   44065 main.go:141] libmachine: (multinode-414194) DBG | domain multinode-414194 has defined IP address 192.168.39.140 and MAC address 52:54:00:13:26:d7 in network mk-multinode-414194
	I0416 00:30:38.236534   44065 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0416 00:30:38.240626   44065 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0416 00:30:38.240783   44065 kubeadm.go:877] updating cluster {Name:multinode-414194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
29.3 ClusterName:multinode-414194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.140 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.81 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.64 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fals
e inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations
:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 00:30:38.240911   44065 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 00:30:38.240960   44065 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 00:30:38.284234   44065 command_runner.go:130] > {
	I0416 00:30:38.284255   44065 command_runner.go:130] >   "images": [
	I0416 00:30:38.284259   44065 command_runner.go:130] >     {
	I0416 00:30:38.284271   44065 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0416 00:30:38.284277   44065 command_runner.go:130] >       "repoTags": [
	I0416 00:30:38.284284   44065 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0416 00:30:38.284288   44065 command_runner.go:130] >       ],
	I0416 00:30:38.284292   44065 command_runner.go:130] >       "repoDigests": [
	I0416 00:30:38.284300   44065 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0416 00:30:38.284309   44065 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0416 00:30:38.284315   44065 command_runner.go:130] >       ],
	I0416 00:30:38.284323   44065 command_runner.go:130] >       "size": "65291810",
	I0416 00:30:38.284329   44065 command_runner.go:130] >       "uid": null,
	I0416 00:30:38.284335   44065 command_runner.go:130] >       "username": "",
	I0416 00:30:38.284343   44065 command_runner.go:130] >       "spec": null,
	I0416 00:30:38.284348   44065 command_runner.go:130] >       "pinned": false
	I0416 00:30:38.284352   44065 command_runner.go:130] >     },
	I0416 00:30:38.284355   44065 command_runner.go:130] >     {
	I0416 00:30:38.284361   44065 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0416 00:30:38.284365   44065 command_runner.go:130] >       "repoTags": [
	I0416 00:30:38.284374   44065 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0416 00:30:38.284384   44065 command_runner.go:130] >       ],
	I0416 00:30:38.284391   44065 command_runner.go:130] >       "repoDigests": [
	I0416 00:30:38.284402   44065 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0416 00:30:38.284415   44065 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0416 00:30:38.284421   44065 command_runner.go:130] >       ],
	I0416 00:30:38.284429   44065 command_runner.go:130] >       "size": "1363676",
	I0416 00:30:38.284435   44065 command_runner.go:130] >       "uid": null,
	I0416 00:30:38.284446   44065 command_runner.go:130] >       "username": "",
	I0416 00:30:38.284450   44065 command_runner.go:130] >       "spec": null,
	I0416 00:30:38.284454   44065 command_runner.go:130] >       "pinned": false
	I0416 00:30:38.284457   44065 command_runner.go:130] >     },
	I0416 00:30:38.284461   44065 command_runner.go:130] >     {
	I0416 00:30:38.284468   44065 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0416 00:30:38.284472   44065 command_runner.go:130] >       "repoTags": [
	I0416 00:30:38.284477   44065 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0416 00:30:38.284481   44065 command_runner.go:130] >       ],
	I0416 00:30:38.284488   44065 command_runner.go:130] >       "repoDigests": [
	I0416 00:30:38.284501   44065 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0416 00:30:38.284517   44065 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0416 00:30:38.284523   44065 command_runner.go:130] >       ],
	I0416 00:30:38.284533   44065 command_runner.go:130] >       "size": "31470524",
	I0416 00:30:38.284539   44065 command_runner.go:130] >       "uid": null,
	I0416 00:30:38.284546   44065 command_runner.go:130] >       "username": "",
	I0416 00:30:38.284554   44065 command_runner.go:130] >       "spec": null,
	I0416 00:30:38.284558   44065 command_runner.go:130] >       "pinned": false
	I0416 00:30:38.284564   44065 command_runner.go:130] >     },
	I0416 00:30:38.284567   44065 command_runner.go:130] >     {
	I0416 00:30:38.284575   44065 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0416 00:30:38.284585   44065 command_runner.go:130] >       "repoTags": [
	I0416 00:30:38.284597   44065 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0416 00:30:38.284606   44065 command_runner.go:130] >       ],
	I0416 00:30:38.284614   44065 command_runner.go:130] >       "repoDigests": [
	I0416 00:30:38.284628   44065 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0416 00:30:38.284649   44065 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0416 00:30:38.284655   44065 command_runner.go:130] >       ],
	I0416 00:30:38.284660   44065 command_runner.go:130] >       "size": "61245718",
	I0416 00:30:38.284676   44065 command_runner.go:130] >       "uid": null,
	I0416 00:30:38.284686   44065 command_runner.go:130] >       "username": "nonroot",
	I0416 00:30:38.284693   44065 command_runner.go:130] >       "spec": null,
	I0416 00:30:38.284703   44065 command_runner.go:130] >       "pinned": false
	I0416 00:30:38.284711   44065 command_runner.go:130] >     },
	I0416 00:30:38.284717   44065 command_runner.go:130] >     {
	I0416 00:30:38.284730   44065 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0416 00:30:38.284736   44065 command_runner.go:130] >       "repoTags": [
	I0416 00:30:38.284744   44065 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0416 00:30:38.284750   44065 command_runner.go:130] >       ],
	I0416 00:30:38.284760   44065 command_runner.go:130] >       "repoDigests": [
	I0416 00:30:38.284774   44065 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0416 00:30:38.284788   44065 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0416 00:30:38.284797   44065 command_runner.go:130] >       ],
	I0416 00:30:38.284804   44065 command_runner.go:130] >       "size": "150779692",
	I0416 00:30:38.284813   44065 command_runner.go:130] >       "uid": {
	I0416 00:30:38.284819   44065 command_runner.go:130] >         "value": "0"
	I0416 00:30:38.284826   44065 command_runner.go:130] >       },
	I0416 00:30:38.284830   44065 command_runner.go:130] >       "username": "",
	I0416 00:30:38.284836   44065 command_runner.go:130] >       "spec": null,
	I0416 00:30:38.284844   44065 command_runner.go:130] >       "pinned": false
	I0416 00:30:38.284851   44065 command_runner.go:130] >     },
	I0416 00:30:38.284859   44065 command_runner.go:130] >     {
	I0416 00:30:38.284869   44065 command_runner.go:130] >       "id": "39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533",
	I0416 00:30:38.284878   44065 command_runner.go:130] >       "repoTags": [
	I0416 00:30:38.284886   44065 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.29.3"
	I0416 00:30:38.284913   44065 command_runner.go:130] >       ],
	I0416 00:30:38.284925   44065 command_runner.go:130] >       "repoDigests": [
	I0416 00:30:38.284940   44065 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:21be2c03b528e582a63a41d8270f469ad1b24e2f6ba8238386768fc981ca1322",
	I0416 00:30:38.284955   44065 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c"
	I0416 00:30:38.284964   44065 command_runner.go:130] >       ],
	I0416 00:30:38.284971   44065 command_runner.go:130] >       "size": "128508878",
	I0416 00:30:38.284979   44065 command_runner.go:130] >       "uid": {
	I0416 00:30:38.284985   44065 command_runner.go:130] >         "value": "0"
	I0416 00:30:38.284999   44065 command_runner.go:130] >       },
	I0416 00:30:38.285003   44065 command_runner.go:130] >       "username": "",
	I0416 00:30:38.285023   44065 command_runner.go:130] >       "spec": null,
	I0416 00:30:38.285034   44065 command_runner.go:130] >       "pinned": false
	I0416 00:30:38.285040   44065 command_runner.go:130] >     },
	I0416 00:30:38.285048   44065 command_runner.go:130] >     {
	I0416 00:30:38.285058   44065 command_runner.go:130] >       "id": "6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3",
	I0416 00:30:38.285117   44065 command_runner.go:130] >       "repoTags": [
	I0416 00:30:38.285131   44065 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.29.3"
	I0416 00:30:38.285138   44065 command_runner.go:130] >       ],
	I0416 00:30:38.285144   44065 command_runner.go:130] >       "repoDigests": [
	I0416 00:30:38.285170   44065 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:495e03d609009733264502138f33ab4ebff55e4ccc34b51fce1dc48eba5aa606",
	I0416 00:30:38.285186   44065 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104"
	I0416 00:30:38.285194   44065 command_runner.go:130] >       ],
	I0416 00:30:38.285200   44065 command_runner.go:130] >       "size": "123142962",
	I0416 00:30:38.285208   44065 command_runner.go:130] >       "uid": {
	I0416 00:30:38.285214   44065 command_runner.go:130] >         "value": "0"
	I0416 00:30:38.285222   44065 command_runner.go:130] >       },
	I0416 00:30:38.285228   44065 command_runner.go:130] >       "username": "",
	I0416 00:30:38.285236   44065 command_runner.go:130] >       "spec": null,
	I0416 00:30:38.285242   44065 command_runner.go:130] >       "pinned": false
	I0416 00:30:38.285250   44065 command_runner.go:130] >     },
	I0416 00:30:38.285255   44065 command_runner.go:130] >     {
	I0416 00:30:38.285269   44065 command_runner.go:130] >       "id": "a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392",
	I0416 00:30:38.285278   44065 command_runner.go:130] >       "repoTags": [
	I0416 00:30:38.285286   44065 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.29.3"
	I0416 00:30:38.285294   44065 command_runner.go:130] >       ],
	I0416 00:30:38.285301   44065 command_runner.go:130] >       "repoDigests": [
	I0416 00:30:38.285338   44065 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:d137dd922e588abc7b0e2f20afd338065e9abccdecfe705abfb19f588fbac11d",
	I0416 00:30:38.285353   44065 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863"
	I0416 00:30:38.285361   44065 command_runner.go:130] >       ],
	I0416 00:30:38.285369   44065 command_runner.go:130] >       "size": "83634073",
	I0416 00:30:38.285377   44065 command_runner.go:130] >       "uid": null,
	I0416 00:30:38.285381   44065 command_runner.go:130] >       "username": "",
	I0416 00:30:38.285385   44065 command_runner.go:130] >       "spec": null,
	I0416 00:30:38.285389   44065 command_runner.go:130] >       "pinned": false
	I0416 00:30:38.285393   44065 command_runner.go:130] >     },
	I0416 00:30:38.285396   44065 command_runner.go:130] >     {
	I0416 00:30:38.285408   44065 command_runner.go:130] >       "id": "8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b",
	I0416 00:30:38.285414   44065 command_runner.go:130] >       "repoTags": [
	I0416 00:30:38.285421   44065 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.29.3"
	I0416 00:30:38.285426   44065 command_runner.go:130] >       ],
	I0416 00:30:38.285432   44065 command_runner.go:130] >       "repoDigests": [
	I0416 00:30:38.285444   44065 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a",
	I0416 00:30:38.285456   44065 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:c6dae5df00e42512d2baa3e1e74efbf08bddd595e930123f6021f715198b8e88"
	I0416 00:30:38.285461   44065 command_runner.go:130] >       ],
	I0416 00:30:38.285468   44065 command_runner.go:130] >       "size": "60724018",
	I0416 00:30:38.285474   44065 command_runner.go:130] >       "uid": {
	I0416 00:30:38.285479   44065 command_runner.go:130] >         "value": "0"
	I0416 00:30:38.285489   44065 command_runner.go:130] >       },
	I0416 00:30:38.285494   44065 command_runner.go:130] >       "username": "",
	I0416 00:30:38.285499   44065 command_runner.go:130] >       "spec": null,
	I0416 00:30:38.285510   44065 command_runner.go:130] >       "pinned": false
	I0416 00:30:38.285517   44065 command_runner.go:130] >     },
	I0416 00:30:38.285525   44065 command_runner.go:130] >     {
	I0416 00:30:38.285536   44065 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0416 00:30:38.285545   44065 command_runner.go:130] >       "repoTags": [
	I0416 00:30:38.285552   44065 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0416 00:30:38.285561   44065 command_runner.go:130] >       ],
	I0416 00:30:38.285570   44065 command_runner.go:130] >       "repoDigests": [
	I0416 00:30:38.285577   44065 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0416 00:30:38.285592   44065 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0416 00:30:38.285598   44065 command_runner.go:130] >       ],
	I0416 00:30:38.285606   44065 command_runner.go:130] >       "size": "750414",
	I0416 00:30:38.285615   44065 command_runner.go:130] >       "uid": {
	I0416 00:30:38.285622   44065 command_runner.go:130] >         "value": "65535"
	I0416 00:30:38.285631   44065 command_runner.go:130] >       },
	I0416 00:30:38.285638   44065 command_runner.go:130] >       "username": "",
	I0416 00:30:38.285647   44065 command_runner.go:130] >       "spec": null,
	I0416 00:30:38.285653   44065 command_runner.go:130] >       "pinned": true
	I0416 00:30:38.285660   44065 command_runner.go:130] >     }
	I0416 00:30:38.285663   44065 command_runner.go:130] >   ]
	I0416 00:30:38.285669   44065 command_runner.go:130] > }
	I0416 00:30:38.285940   44065 crio.go:514] all images are preloaded for cri-o runtime.
	I0416 00:30:38.285957   44065 crio.go:433] Images already preloaded, skipping extraction
	I0416 00:30:38.286010   44065 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 00:30:38.321415   44065 command_runner.go:130] > {
	I0416 00:30:38.321434   44065 command_runner.go:130] >   "images": [
	I0416 00:30:38.321438   44065 command_runner.go:130] >     {
	I0416 00:30:38.321456   44065 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0416 00:30:38.321463   44065 command_runner.go:130] >       "repoTags": [
	I0416 00:30:38.321476   44065 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0416 00:30:38.321482   44065 command_runner.go:130] >       ],
	I0416 00:30:38.321488   44065 command_runner.go:130] >       "repoDigests": [
	I0416 00:30:38.321499   44065 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0416 00:30:38.321513   44065 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0416 00:30:38.321522   44065 command_runner.go:130] >       ],
	I0416 00:30:38.321528   44065 command_runner.go:130] >       "size": "65291810",
	I0416 00:30:38.321534   44065 command_runner.go:130] >       "uid": null,
	I0416 00:30:38.321540   44065 command_runner.go:130] >       "username": "",
	I0416 00:30:38.321554   44065 command_runner.go:130] >       "spec": null,
	I0416 00:30:38.321564   44065 command_runner.go:130] >       "pinned": false
	I0416 00:30:38.321569   44065 command_runner.go:130] >     },
	I0416 00:30:38.321577   44065 command_runner.go:130] >     {
	I0416 00:30:38.321586   44065 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0416 00:30:38.321596   44065 command_runner.go:130] >       "repoTags": [
	I0416 00:30:38.321602   44065 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0416 00:30:38.321609   44065 command_runner.go:130] >       ],
	I0416 00:30:38.321613   44065 command_runner.go:130] >       "repoDigests": [
	I0416 00:30:38.321623   44065 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0416 00:30:38.321630   44065 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0416 00:30:38.321636   44065 command_runner.go:130] >       ],
	I0416 00:30:38.321640   44065 command_runner.go:130] >       "size": "1363676",
	I0416 00:30:38.321644   44065 command_runner.go:130] >       "uid": null,
	I0416 00:30:38.321652   44065 command_runner.go:130] >       "username": "",
	I0416 00:30:38.321658   44065 command_runner.go:130] >       "spec": null,
	I0416 00:30:38.321666   44065 command_runner.go:130] >       "pinned": false
	I0416 00:30:38.321672   44065 command_runner.go:130] >     },
	I0416 00:30:38.321678   44065 command_runner.go:130] >     {
	I0416 00:30:38.321696   44065 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0416 00:30:38.321704   44065 command_runner.go:130] >       "repoTags": [
	I0416 00:30:38.321712   44065 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0416 00:30:38.321717   44065 command_runner.go:130] >       ],
	I0416 00:30:38.321721   44065 command_runner.go:130] >       "repoDigests": [
	I0416 00:30:38.321731   44065 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0416 00:30:38.321740   44065 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0416 00:30:38.321745   44065 command_runner.go:130] >       ],
	I0416 00:30:38.321750   44065 command_runner.go:130] >       "size": "31470524",
	I0416 00:30:38.321756   44065 command_runner.go:130] >       "uid": null,
	I0416 00:30:38.321760   44065 command_runner.go:130] >       "username": "",
	I0416 00:30:38.321766   44065 command_runner.go:130] >       "spec": null,
	I0416 00:30:38.321770   44065 command_runner.go:130] >       "pinned": false
	I0416 00:30:38.321775   44065 command_runner.go:130] >     },
	I0416 00:30:38.321779   44065 command_runner.go:130] >     {
	I0416 00:30:38.321787   44065 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0416 00:30:38.321793   44065 command_runner.go:130] >       "repoTags": [
	I0416 00:30:38.321798   44065 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0416 00:30:38.321803   44065 command_runner.go:130] >       ],
	I0416 00:30:38.321807   44065 command_runner.go:130] >       "repoDigests": [
	I0416 00:30:38.321816   44065 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0416 00:30:38.321828   44065 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0416 00:30:38.321834   44065 command_runner.go:130] >       ],
	I0416 00:30:38.321838   44065 command_runner.go:130] >       "size": "61245718",
	I0416 00:30:38.321842   44065 command_runner.go:130] >       "uid": null,
	I0416 00:30:38.321845   44065 command_runner.go:130] >       "username": "nonroot",
	I0416 00:30:38.321849   44065 command_runner.go:130] >       "spec": null,
	I0416 00:30:38.321853   44065 command_runner.go:130] >       "pinned": false
	I0416 00:30:38.321856   44065 command_runner.go:130] >     },
	I0416 00:30:38.321860   44065 command_runner.go:130] >     {
	I0416 00:30:38.321866   44065 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0416 00:30:38.321873   44065 command_runner.go:130] >       "repoTags": [
	I0416 00:30:38.321878   44065 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0416 00:30:38.321883   44065 command_runner.go:130] >       ],
	I0416 00:30:38.321887   44065 command_runner.go:130] >       "repoDigests": [
	I0416 00:30:38.321896   44065 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0416 00:30:38.321909   44065 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0416 00:30:38.321916   44065 command_runner.go:130] >       ],
	I0416 00:30:38.321920   44065 command_runner.go:130] >       "size": "150779692",
	I0416 00:30:38.321926   44065 command_runner.go:130] >       "uid": {
	I0416 00:30:38.321930   44065 command_runner.go:130] >         "value": "0"
	I0416 00:30:38.321935   44065 command_runner.go:130] >       },
	I0416 00:30:38.321939   44065 command_runner.go:130] >       "username": "",
	I0416 00:30:38.321945   44065 command_runner.go:130] >       "spec": null,
	I0416 00:30:38.321949   44065 command_runner.go:130] >       "pinned": false
	I0416 00:30:38.321955   44065 command_runner.go:130] >     },
	I0416 00:30:38.321958   44065 command_runner.go:130] >     {
	I0416 00:30:38.321966   44065 command_runner.go:130] >       "id": "39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533",
	I0416 00:30:38.321973   44065 command_runner.go:130] >       "repoTags": [
	I0416 00:30:38.321978   44065 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.29.3"
	I0416 00:30:38.321983   44065 command_runner.go:130] >       ],
	I0416 00:30:38.321991   44065 command_runner.go:130] >       "repoDigests": [
	I0416 00:30:38.322000   44065 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:21be2c03b528e582a63a41d8270f469ad1b24e2f6ba8238386768fc981ca1322",
	I0416 00:30:38.322008   44065 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c"
	I0416 00:30:38.322013   44065 command_runner.go:130] >       ],
	I0416 00:30:38.322018   44065 command_runner.go:130] >       "size": "128508878",
	I0416 00:30:38.322023   44065 command_runner.go:130] >       "uid": {
	I0416 00:30:38.322027   44065 command_runner.go:130] >         "value": "0"
	I0416 00:30:38.322033   44065 command_runner.go:130] >       },
	I0416 00:30:38.322037   44065 command_runner.go:130] >       "username": "",
	I0416 00:30:38.322043   44065 command_runner.go:130] >       "spec": null,
	I0416 00:30:38.322047   44065 command_runner.go:130] >       "pinned": false
	I0416 00:30:38.322052   44065 command_runner.go:130] >     },
	I0416 00:30:38.322056   44065 command_runner.go:130] >     {
	I0416 00:30:38.322064   44065 command_runner.go:130] >       "id": "6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3",
	I0416 00:30:38.322070   44065 command_runner.go:130] >       "repoTags": [
	I0416 00:30:38.322076   44065 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.29.3"
	I0416 00:30:38.322081   44065 command_runner.go:130] >       ],
	I0416 00:30:38.322085   44065 command_runner.go:130] >       "repoDigests": [
	I0416 00:30:38.322095   44065 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:495e03d609009733264502138f33ab4ebff55e4ccc34b51fce1dc48eba5aa606",
	I0416 00:30:38.322102   44065 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104"
	I0416 00:30:38.322108   44065 command_runner.go:130] >       ],
	I0416 00:30:38.322117   44065 command_runner.go:130] >       "size": "123142962",
	I0416 00:30:38.322123   44065 command_runner.go:130] >       "uid": {
	I0416 00:30:38.322127   44065 command_runner.go:130] >         "value": "0"
	I0416 00:30:38.322133   44065 command_runner.go:130] >       },
	I0416 00:30:38.322137   44065 command_runner.go:130] >       "username": "",
	I0416 00:30:38.322141   44065 command_runner.go:130] >       "spec": null,
	I0416 00:30:38.322145   44065 command_runner.go:130] >       "pinned": false
	I0416 00:30:38.322148   44065 command_runner.go:130] >     },
	I0416 00:30:38.322151   44065 command_runner.go:130] >     {
	I0416 00:30:38.322157   44065 command_runner.go:130] >       "id": "a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392",
	I0416 00:30:38.322163   44065 command_runner.go:130] >       "repoTags": [
	I0416 00:30:38.322168   44065 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.29.3"
	I0416 00:30:38.322173   44065 command_runner.go:130] >       ],
	I0416 00:30:38.322177   44065 command_runner.go:130] >       "repoDigests": [
	I0416 00:30:38.322200   44065 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:d137dd922e588abc7b0e2f20afd338065e9abccdecfe705abfb19f588fbac11d",
	I0416 00:30:38.322210   44065 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863"
	I0416 00:30:38.322213   44065 command_runner.go:130] >       ],
	I0416 00:30:38.322217   44065 command_runner.go:130] >       "size": "83634073",
	I0416 00:30:38.322223   44065 command_runner.go:130] >       "uid": null,
	I0416 00:30:38.322227   44065 command_runner.go:130] >       "username": "",
	I0416 00:30:38.322233   44065 command_runner.go:130] >       "spec": null,
	I0416 00:30:38.322236   44065 command_runner.go:130] >       "pinned": false
	I0416 00:30:38.322240   44065 command_runner.go:130] >     },
	I0416 00:30:38.322243   44065 command_runner.go:130] >     {
	I0416 00:30:38.322249   44065 command_runner.go:130] >       "id": "8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b",
	I0416 00:30:38.322255   44065 command_runner.go:130] >       "repoTags": [
	I0416 00:30:38.322259   44065 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.29.3"
	I0416 00:30:38.322265   44065 command_runner.go:130] >       ],
	I0416 00:30:38.322269   44065 command_runner.go:130] >       "repoDigests": [
	I0416 00:30:38.322278   44065 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a",
	I0416 00:30:38.322288   44065 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:c6dae5df00e42512d2baa3e1e74efbf08bddd595e930123f6021f715198b8e88"
	I0416 00:30:38.322293   44065 command_runner.go:130] >       ],
	I0416 00:30:38.322298   44065 command_runner.go:130] >       "size": "60724018",
	I0416 00:30:38.322304   44065 command_runner.go:130] >       "uid": {
	I0416 00:30:38.322308   44065 command_runner.go:130] >         "value": "0"
	I0416 00:30:38.322314   44065 command_runner.go:130] >       },
	I0416 00:30:38.322322   44065 command_runner.go:130] >       "username": "",
	I0416 00:30:38.322328   44065 command_runner.go:130] >       "spec": null,
	I0416 00:30:38.322332   44065 command_runner.go:130] >       "pinned": false
	I0416 00:30:38.322336   44065 command_runner.go:130] >     },
	I0416 00:30:38.322339   44065 command_runner.go:130] >     {
	I0416 00:30:38.322347   44065 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0416 00:30:38.322352   44065 command_runner.go:130] >       "repoTags": [
	I0416 00:30:38.322356   44065 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0416 00:30:38.322361   44065 command_runner.go:130] >       ],
	I0416 00:30:38.322365   44065 command_runner.go:130] >       "repoDigests": [
	I0416 00:30:38.322372   44065 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0416 00:30:38.322381   44065 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0416 00:30:38.322387   44065 command_runner.go:130] >       ],
	I0416 00:30:38.322391   44065 command_runner.go:130] >       "size": "750414",
	I0416 00:30:38.322397   44065 command_runner.go:130] >       "uid": {
	I0416 00:30:38.322401   44065 command_runner.go:130] >         "value": "65535"
	I0416 00:30:38.322407   44065 command_runner.go:130] >       },
	I0416 00:30:38.322417   44065 command_runner.go:130] >       "username": "",
	I0416 00:30:38.322423   44065 command_runner.go:130] >       "spec": null,
	I0416 00:30:38.322427   44065 command_runner.go:130] >       "pinned": true
	I0416 00:30:38.322432   44065 command_runner.go:130] >     }
	I0416 00:30:38.322436   44065 command_runner.go:130] >   ]
	I0416 00:30:38.322441   44065 command_runner.go:130] > }
	I0416 00:30:38.322537   44065 crio.go:514] all images are preloaded for cri-o runtime.
	I0416 00:30:38.322547   44065 cache_images.go:84] Images are preloaded, skipping loading
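	The two "sudo crictl images --output json" runs above are how the preload check decides that extraction and image loading can be skipped: every image required for Kubernetes v1.29.3 on cri-o already appears in the JSON list. A minimal sketch of reproducing that listing by hand against this profile (the jq filter is an illustration, not something the test itself runs, and assumes jq is installed on the host):

	    out/minikube-linux-amd64 -p multinode-414194 ssh \
	      "sudo crictl images --output json" | jq -r '.images[].repoTags[]'
	    # Should print the tags shown in the log above, e.g.
	    # docker.io/kindest/kindnetd:v20240202-8f1494ea, gcr.io/k8s-minikube/busybox:1.28,
	    # registry.k8s.io/kube-apiserver:v1.29.3, ..., registry.k8s.io/pause:3.9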
	I0416 00:30:38.322553   44065 kubeadm.go:928] updating node { 192.168.39.140 8443 v1.29.3 crio true true} ...
	I0416 00:30:38.322643   44065 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-414194 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.140
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:multinode-414194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
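	The ExecStart flags above pin the node identity (--hostname-override=multinode-414194, --node-ip=192.168.39.140) and point the kubelet at the kubeconfig and config.yaml that minikube generates. A minimal sketch of confirming the rendered unit on the node; this is an illustration rather than a step the test performs, and assumes the same profile name:

	    out/minikube-linux-amd64 -p multinode-414194 ssh "systemctl cat kubelet"
	    # The drop-in printed by systemctl cat should contain the ExecStart line shown above,
	    # including --hostname-override=multinode-414194 and --node-ip=192.168.39.140.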
	I0416 00:30:38.322709   44065 ssh_runner.go:195] Run: crio config
	I0416 00:30:38.357680   44065 command_runner.go:130] ! time="2024-04-16 00:30:38.339035704Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0416 00:30:38.363049   44065 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0416 00:30:38.370920   44065 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0416 00:30:38.370944   44065 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0416 00:30:38.370950   44065 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0416 00:30:38.370953   44065 command_runner.go:130] > #
	I0416 00:30:38.370960   44065 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0416 00:30:38.370965   44065 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0416 00:30:38.370971   44065 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0416 00:30:38.370985   44065 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0416 00:30:38.370994   44065 command_runner.go:130] > # reload'.
	I0416 00:30:38.371004   44065 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0416 00:30:38.371018   44065 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0416 00:30:38.371031   44065 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0416 00:30:38.371041   44065 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0416 00:30:38.371047   44065 command_runner.go:130] > [crio]
	I0416 00:30:38.371053   44065 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0416 00:30:38.371059   44065 command_runner.go:130] > # containers images, in this directory.
	I0416 00:30:38.371070   44065 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0416 00:30:38.371089   44065 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0416 00:30:38.371101   44065 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0416 00:30:38.371114   44065 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0416 00:30:38.371123   44065 command_runner.go:130] > # imagestore = ""
	I0416 00:30:38.371133   44065 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0416 00:30:38.371144   44065 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0416 00:30:38.371151   44065 command_runner.go:130] > storage_driver = "overlay"
	I0416 00:30:38.371157   44065 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0416 00:30:38.371171   44065 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0416 00:30:38.371182   44065 command_runner.go:130] > storage_option = [
	I0416 00:30:38.371192   44065 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0416 00:30:38.371200   44065 command_runner.go:130] > ]
	I0416 00:30:38.371211   44065 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0416 00:30:38.371223   44065 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0416 00:30:38.371233   44065 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0416 00:30:38.371244   44065 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0416 00:30:38.371251   44065 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0416 00:30:38.371261   44065 command_runner.go:130] > # always happen on a node reboot
	I0416 00:30:38.371272   44065 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0416 00:30:38.371291   44065 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0416 00:30:38.371302   44065 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0416 00:30:38.371310   44065 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0416 00:30:38.371321   44065 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0416 00:30:38.371329   44065 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0416 00:30:38.371341   44065 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0416 00:30:38.371350   44065 command_runner.go:130] > # internal_wipe = true
	I0416 00:30:38.371376   44065 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0416 00:30:38.371388   44065 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0416 00:30:38.371398   44065 command_runner.go:130] > # internal_repair = false
	I0416 00:30:38.371406   44065 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0416 00:30:38.371417   44065 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0416 00:30:38.371427   44065 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0416 00:30:38.371435   44065 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0416 00:30:38.371448   44065 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0416 00:30:38.371457   44065 command_runner.go:130] > [crio.api]
	I0416 00:30:38.371465   44065 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0416 00:30:38.371475   44065 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0416 00:30:38.371491   44065 command_runner.go:130] > # IP address on which the stream server will listen.
	I0416 00:30:38.371500   44065 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0416 00:30:38.371507   44065 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0416 00:30:38.371513   44065 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0416 00:30:38.371519   44065 command_runner.go:130] > # stream_port = "0"
	I0416 00:30:38.371531   44065 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0416 00:30:38.371541   44065 command_runner.go:130] > # stream_enable_tls = false
	I0416 00:30:38.371550   44065 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0416 00:30:38.371560   44065 command_runner.go:130] > # stream_idle_timeout = ""
	I0416 00:30:38.371569   44065 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0416 00:30:38.371578   44065 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0416 00:30:38.371583   44065 command_runner.go:130] > # minutes.
	I0416 00:30:38.371589   44065 command_runner.go:130] > # stream_tls_cert = ""
	I0416 00:30:38.371595   44065 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0416 00:30:38.371603   44065 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0416 00:30:38.371610   44065 command_runner.go:130] > # stream_tls_key = ""
	I0416 00:30:38.371619   44065 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0416 00:30:38.371633   44065 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0416 00:30:38.371657   44065 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0416 00:30:38.371667   44065 command_runner.go:130] > # stream_tls_ca = ""
	I0416 00:30:38.371676   44065 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0416 00:30:38.371683   44065 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0416 00:30:38.371693   44065 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0416 00:30:38.371704   44065 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0416 00:30:38.371718   44065 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0416 00:30:38.371736   44065 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0416 00:30:38.371745   44065 command_runner.go:130] > [crio.runtime]
	I0416 00:30:38.371755   44065 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0416 00:30:38.371764   44065 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0416 00:30:38.371768   44065 command_runner.go:130] > # "nofile=1024:2048"
	I0416 00:30:38.371780   44065 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0416 00:30:38.371787   44065 command_runner.go:130] > # default_ulimits = [
	I0416 00:30:38.371794   44065 command_runner.go:130] > # ]
	I0416 00:30:38.371803   44065 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0416 00:30:38.371812   44065 command_runner.go:130] > # no_pivot = false
	I0416 00:30:38.371821   44065 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0416 00:30:38.371833   44065 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0416 00:30:38.371843   44065 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0416 00:30:38.371851   44065 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0416 00:30:38.371859   44065 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0416 00:30:38.371870   44065 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0416 00:30:38.371880   44065 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0416 00:30:38.371888   44065 command_runner.go:130] > # Cgroup setting for conmon
	I0416 00:30:38.371901   44065 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0416 00:30:38.371910   44065 command_runner.go:130] > conmon_cgroup = "pod"
	I0416 00:30:38.371920   44065 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0416 00:30:38.371931   44065 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0416 00:30:38.371945   44065 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0416 00:30:38.371954   44065 command_runner.go:130] > conmon_env = [
	I0416 00:30:38.371964   44065 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0416 00:30:38.371972   44065 command_runner.go:130] > ]
	I0416 00:30:38.371980   44065 command_runner.go:130] > # Additional environment variables to set for all the
	I0416 00:30:38.371990   44065 command_runner.go:130] > # containers. These are overridden if set in the
	I0416 00:30:38.372002   44065 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0416 00:30:38.372012   44065 command_runner.go:130] > # default_env = [
	I0416 00:30:38.372017   44065 command_runner.go:130] > # ]
	I0416 00:30:38.372026   44065 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0416 00:30:38.372035   44065 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0416 00:30:38.372041   44065 command_runner.go:130] > # selinux = false
	I0416 00:30:38.372051   44065 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0416 00:30:38.372065   44065 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0416 00:30:38.372083   44065 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0416 00:30:38.372093   44065 command_runner.go:130] > # seccomp_profile = ""
	I0416 00:30:38.372105   44065 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0416 00:30:38.372114   44065 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0416 00:30:38.372123   44065 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0416 00:30:38.372133   44065 command_runner.go:130] > # which might increase security.
	I0416 00:30:38.372140   44065 command_runner.go:130] > # This option is currently deprecated,
	I0416 00:30:38.372153   44065 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0416 00:30:38.372163   44065 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0416 00:30:38.372176   44065 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0416 00:30:38.372188   44065 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0416 00:30:38.372198   44065 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0416 00:30:38.372206   44065 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0416 00:30:38.372217   44065 command_runner.go:130] > # This option supports live configuration reload.
	I0416 00:30:38.372234   44065 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0416 00:30:38.372246   44065 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0416 00:30:38.372252   44065 command_runner.go:130] > # the cgroup blockio controller.
	I0416 00:30:38.372262   44065 command_runner.go:130] > # blockio_config_file = ""
	I0416 00:30:38.372272   44065 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0416 00:30:38.372280   44065 command_runner.go:130] > # blockio parameters.
	I0416 00:30:38.372284   44065 command_runner.go:130] > # blockio_reload = false
	I0416 00:30:38.372291   44065 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0416 00:30:38.372297   44065 command_runner.go:130] > # irqbalance daemon.
	I0416 00:30:38.372306   44065 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0416 00:30:38.372319   44065 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0416 00:30:38.372333   44065 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0416 00:30:38.372346   44065 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0416 00:30:38.372356   44065 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0416 00:30:38.372367   44065 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0416 00:30:38.372376   44065 command_runner.go:130] > # This option supports live configuration reload.
	I0416 00:30:38.372383   44065 command_runner.go:130] > # rdt_config_file = ""
	I0416 00:30:38.372395   44065 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0416 00:30:38.372405   44065 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0416 00:30:38.372448   44065 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0416 00:30:38.372455   44065 command_runner.go:130] > # separate_pull_cgroup = ""
	I0416 00:30:38.372462   44065 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0416 00:30:38.372479   44065 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0416 00:30:38.372495   44065 command_runner.go:130] > # will be added.
	I0416 00:30:38.372505   44065 command_runner.go:130] > # default_capabilities = [
	I0416 00:30:38.372511   44065 command_runner.go:130] > # 	"CHOWN",
	I0416 00:30:38.372520   44065 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0416 00:30:38.372527   44065 command_runner.go:130] > # 	"FSETID",
	I0416 00:30:38.372535   44065 command_runner.go:130] > # 	"FOWNER",
	I0416 00:30:38.372541   44065 command_runner.go:130] > # 	"SETGID",
	I0416 00:30:38.372545   44065 command_runner.go:130] > # 	"SETUID",
	I0416 00:30:38.372550   44065 command_runner.go:130] > # 	"SETPCAP",
	I0416 00:30:38.372557   44065 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0416 00:30:38.372566   44065 command_runner.go:130] > # 	"KILL",
	I0416 00:30:38.372571   44065 command_runner.go:130] > # ]
	I0416 00:30:38.372585   44065 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0416 00:30:38.372598   44065 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0416 00:30:38.372608   44065 command_runner.go:130] > # add_inheritable_capabilities = false
	I0416 00:30:38.372620   44065 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0416 00:30:38.372627   44065 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0416 00:30:38.372635   44065 command_runner.go:130] > default_sysctls = [
	I0416 00:30:38.372642   44065 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0416 00:30:38.372649   44065 command_runner.go:130] > ]
	I0416 00:30:38.372657   44065 command_runner.go:130] > # List of devices on the host that a
	I0416 00:30:38.372669   44065 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0416 00:30:38.372678   44065 command_runner.go:130] > # allowed_devices = [
	I0416 00:30:38.372685   44065 command_runner.go:130] > # 	"/dev/fuse",
	I0416 00:30:38.372693   44065 command_runner.go:130] > # ]
	I0416 00:30:38.372701   44065 command_runner.go:130] > # List of additional devices. specified as
	I0416 00:30:38.372713   44065 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0416 00:30:38.372720   44065 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0416 00:30:38.372729   44065 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0416 00:30:38.372739   44065 command_runner.go:130] > # additional_devices = [
	I0416 00:30:38.372744   44065 command_runner.go:130] > # ]
	I0416 00:30:38.372756   44065 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0416 00:30:38.372762   44065 command_runner.go:130] > # cdi_spec_dirs = [
	I0416 00:30:38.372771   44065 command_runner.go:130] > # 	"/etc/cdi",
	I0416 00:30:38.372776   44065 command_runner.go:130] > # 	"/var/run/cdi",
	I0416 00:30:38.372790   44065 command_runner.go:130] > # ]
	I0416 00:30:38.372802   44065 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0416 00:30:38.372812   44065 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0416 00:30:38.372820   44065 command_runner.go:130] > # Defaults to false.
	I0416 00:30:38.372832   44065 command_runner.go:130] > # device_ownership_from_security_context = false
	I0416 00:30:38.372845   44065 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0416 00:30:38.372857   44065 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0416 00:30:38.372863   44065 command_runner.go:130] > # hooks_dir = [
	I0416 00:30:38.372873   44065 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0416 00:30:38.372878   44065 command_runner.go:130] > # ]
	I0416 00:30:38.372886   44065 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0416 00:30:38.372898   44065 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0416 00:30:38.372910   44065 command_runner.go:130] > # its default mounts from the following two files:
	I0416 00:30:38.372918   44065 command_runner.go:130] > #
	I0416 00:30:38.372928   44065 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0416 00:30:38.372940   44065 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0416 00:30:38.372952   44065 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0416 00:30:38.372960   44065 command_runner.go:130] > #
	I0416 00:30:38.372968   44065 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0416 00:30:38.372977   44065 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0416 00:30:38.372987   44065 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0416 00:30:38.372999   44065 command_runner.go:130] > #      only add mounts it finds in this file.
	I0416 00:30:38.373003   44065 command_runner.go:130] > #
	I0416 00:30:38.373010   44065 command_runner.go:130] > # default_mounts_file = ""
	I0416 00:30:38.373018   44065 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0416 00:30:38.373028   44065 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0416 00:30:38.373034   44065 command_runner.go:130] > pids_limit = 1024
	I0416 00:30:38.373051   44065 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0416 00:30:38.373062   44065 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0416 00:30:38.373068   44065 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0416 00:30:38.373083   44065 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0416 00:30:38.373093   44065 command_runner.go:130] > # log_size_max = -1
	I0416 00:30:38.373104   44065 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0416 00:30:38.373114   44065 command_runner.go:130] > # log_to_journald = false
	I0416 00:30:38.373123   44065 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0416 00:30:38.373133   44065 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0416 00:30:38.373148   44065 command_runner.go:130] > # Path to directory for container attach sockets.
	I0416 00:30:38.373168   44065 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0416 00:30:38.373181   44065 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0416 00:30:38.373195   44065 command_runner.go:130] > # bind_mount_prefix = ""
	I0416 00:30:38.373210   44065 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0416 00:30:38.373220   44065 command_runner.go:130] > # read_only = false
	I0416 00:30:38.373232   44065 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0416 00:30:38.373242   44065 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0416 00:30:38.373250   44065 command_runner.go:130] > # live configuration reload.
	I0416 00:30:38.373255   44065 command_runner.go:130] > # log_level = "info"
	I0416 00:30:38.373265   44065 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0416 00:30:38.373277   44065 command_runner.go:130] > # This option supports live configuration reload.
	I0416 00:30:38.373286   44065 command_runner.go:130] > # log_filter = ""
	I0416 00:30:38.373296   44065 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0416 00:30:38.373310   44065 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0416 00:30:38.373319   44065 command_runner.go:130] > # separated by comma.
	I0416 00:30:38.373330   44065 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0416 00:30:38.373338   44065 command_runner.go:130] > # uid_mappings = ""
	I0416 00:30:38.373347   44065 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0416 00:30:38.373360   44065 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0416 00:30:38.373370   44065 command_runner.go:130] > # separated by comma.
	I0416 00:30:38.373382   44065 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0416 00:30:38.373394   44065 command_runner.go:130] > # gid_mappings = ""
	I0416 00:30:38.373404   44065 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0416 00:30:38.373412   44065 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0416 00:30:38.373418   44065 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0416 00:30:38.373428   44065 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0416 00:30:38.373439   44065 command_runner.go:130] > # minimum_mappable_uid = -1
	I0416 00:30:38.373448   44065 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0416 00:30:38.373461   44065 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0416 00:30:38.373471   44065 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0416 00:30:38.373486   44065 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0416 00:30:38.373498   44065 command_runner.go:130] > # minimum_mappable_gid = -1
	I0416 00:30:38.373504   44065 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0416 00:30:38.373516   44065 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0416 00:30:38.373528   44065 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0416 00:30:38.373541   44065 command_runner.go:130] > # ctr_stop_timeout = 30
	I0416 00:30:38.373554   44065 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0416 00:30:38.373564   44065 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0416 00:30:38.373572   44065 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0416 00:30:38.373582   44065 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0416 00:30:38.373586   44065 command_runner.go:130] > drop_infra_ctr = false
	I0416 00:30:38.373597   44065 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0416 00:30:38.373609   44065 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0416 00:30:38.373623   44065 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0416 00:30:38.373638   44065 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0416 00:30:38.373652   44065 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0416 00:30:38.373664   44065 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0416 00:30:38.373671   44065 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0416 00:30:38.373679   44065 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0416 00:30:38.373686   44065 command_runner.go:130] > # shared_cpuset = ""
	I0416 00:30:38.373699   44065 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0416 00:30:38.373710   44065 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0416 00:30:38.373721   44065 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0416 00:30:38.373732   44065 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0416 00:30:38.373742   44065 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0416 00:30:38.373753   44065 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0416 00:30:38.373761   44065 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0416 00:30:38.373768   44065 command_runner.go:130] > # enable_criu_support = false
	I0416 00:30:38.373775   44065 command_runner.go:130] > # Enable/disable the generation of the container,
	I0416 00:30:38.373788   44065 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0416 00:30:38.373798   44065 command_runner.go:130] > # enable_pod_events = false
	I0416 00:30:38.373811   44065 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0416 00:30:38.373834   44065 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0416 00:30:38.373843   44065 command_runner.go:130] > # default_runtime = "runc"
	I0416 00:30:38.373848   44065 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0416 00:30:38.373858   44065 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0416 00:30:38.373874   44065 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0416 00:30:38.373886   44065 command_runner.go:130] > # creation as a file is not desired either.
	I0416 00:30:38.373901   44065 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0416 00:30:38.373912   44065 command_runner.go:130] > # the hostname is being managed dynamically.
	I0416 00:30:38.373929   44065 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0416 00:30:38.373936   44065 command_runner.go:130] > # ]
	I0416 00:30:38.373944   44065 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0416 00:30:38.373959   44065 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0416 00:30:38.373971   44065 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0416 00:30:38.373983   44065 command_runner.go:130] > # Each entry in the table should follow the format:
	I0416 00:30:38.373990   44065 command_runner.go:130] > #
	I0416 00:30:38.373997   44065 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0416 00:30:38.374008   44065 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0416 00:30:38.374060   44065 command_runner.go:130] > # runtime_type = "oci"
	I0416 00:30:38.374072   44065 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0416 00:30:38.374080   44065 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0416 00:30:38.374087   44065 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0416 00:30:38.374095   44065 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0416 00:30:38.374101   44065 command_runner.go:130] > # monitor_env = []
	I0416 00:30:38.374109   44065 command_runner.go:130] > # privileged_without_host_devices = false
	I0416 00:30:38.374114   44065 command_runner.go:130] > # allowed_annotations = []
	I0416 00:30:38.374124   44065 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0416 00:30:38.374133   44065 command_runner.go:130] > # Where:
	I0416 00:30:38.374142   44065 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0416 00:30:38.374154   44065 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0416 00:30:38.374167   44065 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0416 00:30:38.374179   44065 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0416 00:30:38.374185   44065 command_runner.go:130] > #   in $PATH.
	I0416 00:30:38.374196   44065 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0416 00:30:38.374200   44065 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0416 00:30:38.374217   44065 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0416 00:30:38.374227   44065 command_runner.go:130] > #   state.
	I0416 00:30:38.374238   44065 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0416 00:30:38.374250   44065 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0416 00:30:38.374263   44065 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0416 00:30:38.374274   44065 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0416 00:30:38.374294   44065 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0416 00:30:38.374307   44065 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0416 00:30:38.374317   44065 command_runner.go:130] > #   The currently recognized values are:
	I0416 00:30:38.374329   44065 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0416 00:30:38.374351   44065 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0416 00:30:38.374363   44065 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0416 00:30:38.374373   44065 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0416 00:30:38.374382   44065 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0416 00:30:38.374396   44065 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0416 00:30:38.374409   44065 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0416 00:30:38.374422   44065 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0416 00:30:38.374434   44065 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0416 00:30:38.374447   44065 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0416 00:30:38.374455   44065 command_runner.go:130] > #   deprecated option "conmon".
	I0416 00:30:38.374462   44065 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0416 00:30:38.374472   44065 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0416 00:30:38.374483   44065 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0416 00:30:38.374497   44065 command_runner.go:130] > #   should be moved to the container's cgroup
	I0416 00:30:38.374510   44065 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0416 00:30:38.374521   44065 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0416 00:30:38.374531   44065 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0416 00:30:38.374541   44065 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0416 00:30:38.374544   44065 command_runner.go:130] > #
	I0416 00:30:38.374549   44065 command_runner.go:130] > # Using the seccomp notifier feature:
	I0416 00:30:38.374557   44065 command_runner.go:130] > #
	I0416 00:30:38.374567   44065 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0416 00:30:38.374580   44065 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0416 00:30:38.374585   44065 command_runner.go:130] > #
	I0416 00:30:38.374595   44065 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0416 00:30:38.374608   44065 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0416 00:30:38.374615   44065 command_runner.go:130] > #
	I0416 00:30:38.374625   44065 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0416 00:30:38.374632   44065 command_runner.go:130] > # feature.
	I0416 00:30:38.374635   44065 command_runner.go:130] > #
	I0416 00:30:38.374645   44065 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0416 00:30:38.374658   44065 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0416 00:30:38.374671   44065 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0416 00:30:38.374683   44065 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0416 00:30:38.374692   44065 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0416 00:30:38.374700   44065 command_runner.go:130] > #
	I0416 00:30:38.374714   44065 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0416 00:30:38.374724   44065 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0416 00:30:38.374731   44065 command_runner.go:130] > #
	I0416 00:30:38.374742   44065 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0416 00:30:38.374753   44065 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0416 00:30:38.374761   44065 command_runner.go:130] > #
	I0416 00:30:38.374771   44065 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0416 00:30:38.374784   44065 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0416 00:30:38.374790   44065 command_runner.go:130] > # limitation.
	I0416 00:30:38.374799   44065 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0416 00:30:38.374804   44065 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0416 00:30:38.374812   44065 command_runner.go:130] > runtime_type = "oci"
	I0416 00:30:38.374820   44065 command_runner.go:130] > runtime_root = "/run/runc"
	I0416 00:30:38.374831   44065 command_runner.go:130] > runtime_config_path = ""
	I0416 00:30:38.374839   44065 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0416 00:30:38.374849   44065 command_runner.go:130] > monitor_cgroup = "pod"
	I0416 00:30:38.374855   44065 command_runner.go:130] > monitor_exec_cgroup = ""
	I0416 00:30:38.374864   44065 command_runner.go:130] > monitor_env = [
	I0416 00:30:38.374873   44065 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0416 00:30:38.374880   44065 command_runner.go:130] > ]
	I0416 00:30:38.374886   44065 command_runner.go:130] > privileged_without_host_devices = false
	I0416 00:30:38.374893   44065 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0416 00:30:38.374904   44065 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0416 00:30:38.374918   44065 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0416 00:30:38.374933   44065 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0416 00:30:38.374947   44065 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0416 00:30:38.374958   44065 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0416 00:30:38.374972   44065 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0416 00:30:38.374985   44065 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0416 00:30:38.374994   44065 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0416 00:30:38.375009   44065 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0416 00:30:38.375018   44065 command_runner.go:130] > # Example:
	I0416 00:30:38.375025   44065 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0416 00:30:38.375036   44065 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0416 00:30:38.375044   44065 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0416 00:30:38.375055   44065 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0416 00:30:38.375067   44065 command_runner.go:130] > # cpuset = 0
	I0416 00:30:38.375076   44065 command_runner.go:130] > # cpushares = "0-1"
	I0416 00:30:38.375081   44065 command_runner.go:130] > # Where:
	I0416 00:30:38.375090   44065 command_runner.go:130] > # The workload name is workload-type.
	I0416 00:30:38.375101   44065 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0416 00:30:38.375113   44065 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0416 00:30:38.375123   44065 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0416 00:30:38.375137   44065 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0416 00:30:38.375148   44065 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0416 00:30:38.375157   44065 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0416 00:30:38.375167   44065 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0416 00:30:38.375178   44065 command_runner.go:130] > # Default value is set to true
	I0416 00:30:38.375186   44065 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0416 00:30:38.375197   44065 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0416 00:30:38.375207   44065 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0416 00:30:38.375217   44065 command_runner.go:130] > # Default value is set to 'false'
	I0416 00:30:38.375224   44065 command_runner.go:130] > # disable_hostport_mapping = false
	I0416 00:30:38.375235   44065 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0416 00:30:38.375240   44065 command_runner.go:130] > #
	I0416 00:30:38.375246   44065 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0416 00:30:38.375251   44065 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0416 00:30:38.375257   44065 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0416 00:30:38.375265   44065 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0416 00:30:38.375274   44065 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0416 00:30:38.375279   44065 command_runner.go:130] > [crio.image]
	I0416 00:30:38.375288   44065 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0416 00:30:38.375295   44065 command_runner.go:130] > # default_transport = "docker://"
	I0416 00:30:38.375305   44065 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0416 00:30:38.375314   44065 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0416 00:30:38.375320   44065 command_runner.go:130] > # global_auth_file = ""
	I0416 00:30:38.375328   44065 command_runner.go:130] > # The image used to instantiate infra containers.
	I0416 00:30:38.375335   44065 command_runner.go:130] > # This option supports live configuration reload.
	I0416 00:30:38.375340   44065 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0416 00:30:38.375346   44065 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0416 00:30:38.375351   44065 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0416 00:30:38.375356   44065 command_runner.go:130] > # This option supports live configuration reload.
	I0416 00:30:38.375367   44065 command_runner.go:130] > # pause_image_auth_file = ""
	I0416 00:30:38.375372   44065 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0416 00:30:38.375378   44065 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0416 00:30:38.375383   44065 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0416 00:30:38.375388   44065 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0416 00:30:38.375395   44065 command_runner.go:130] > # pause_command = "/pause"
	I0416 00:30:38.375400   44065 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0416 00:30:38.375405   44065 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0416 00:30:38.375412   44065 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0416 00:30:38.375425   44065 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0416 00:30:38.375434   44065 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0416 00:30:38.375444   44065 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0416 00:30:38.375451   44065 command_runner.go:130] > # pinned_images = [
	I0416 00:30:38.375460   44065 command_runner.go:130] > # ]
	I0416 00:30:38.375470   44065 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0416 00:30:38.375482   44065 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0416 00:30:38.375494   44065 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0416 00:30:38.375503   44065 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0416 00:30:38.375508   44065 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0416 00:30:38.375514   44065 command_runner.go:130] > # signature_policy = ""
	I0416 00:30:38.375520   44065 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0416 00:30:38.375526   44065 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0416 00:30:38.375532   44065 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0416 00:30:38.375538   44065 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I0416 00:30:38.375547   44065 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0416 00:30:38.375555   44065 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0416 00:30:38.375560   44065 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0416 00:30:38.375573   44065 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0416 00:30:38.375579   44065 command_runner.go:130] > # changing them here.
	I0416 00:30:38.375583   44065 command_runner.go:130] > # insecure_registries = [
	I0416 00:30:38.375586   44065 command_runner.go:130] > # ]
	I0416 00:30:38.375592   44065 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0416 00:30:38.375599   44065 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0416 00:30:38.375603   44065 command_runner.go:130] > # image_volumes = "mkdir"
	I0416 00:30:38.375609   44065 command_runner.go:130] > # Temporary directory to use for storing big files
	I0416 00:30:38.375613   44065 command_runner.go:130] > # big_files_temporary_dir = ""
	I0416 00:30:38.375625   44065 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0416 00:30:38.375630   44065 command_runner.go:130] > # CNI plugins.
	I0416 00:30:38.375634   44065 command_runner.go:130] > [crio.network]
	I0416 00:30:38.375643   44065 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0416 00:30:38.375656   44065 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0416 00:30:38.375662   44065 command_runner.go:130] > # cni_default_network = ""
	I0416 00:30:38.375667   44065 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0416 00:30:38.375674   44065 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0416 00:30:38.375680   44065 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0416 00:30:38.375685   44065 command_runner.go:130] > # plugin_dirs = [
	I0416 00:30:38.375689   44065 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0416 00:30:38.375694   44065 command_runner.go:130] > # ]
	I0416 00:30:38.375700   44065 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0416 00:30:38.375705   44065 command_runner.go:130] > [crio.metrics]
	I0416 00:30:38.375710   44065 command_runner.go:130] > # Globally enable or disable metrics support.
	I0416 00:30:38.375716   44065 command_runner.go:130] > enable_metrics = true
	I0416 00:30:38.375721   44065 command_runner.go:130] > # Specify enabled metrics collectors.
	I0416 00:30:38.375727   44065 command_runner.go:130] > # Per default all metrics are enabled.
	I0416 00:30:38.375733   44065 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0416 00:30:38.375739   44065 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0416 00:30:38.375745   44065 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0416 00:30:38.375749   44065 command_runner.go:130] > # metrics_collectors = [
	I0416 00:30:38.375755   44065 command_runner.go:130] > # 	"operations",
	I0416 00:30:38.375759   44065 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0416 00:30:38.375763   44065 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0416 00:30:38.375769   44065 command_runner.go:130] > # 	"operations_errors",
	I0416 00:30:38.375773   44065 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0416 00:30:38.375778   44065 command_runner.go:130] > # 	"image_pulls_by_name",
	I0416 00:30:38.375782   44065 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0416 00:30:38.375788   44065 command_runner.go:130] > # 	"image_pulls_failures",
	I0416 00:30:38.375792   44065 command_runner.go:130] > # 	"image_pulls_successes",
	I0416 00:30:38.375797   44065 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0416 00:30:38.375802   44065 command_runner.go:130] > # 	"image_layer_reuse",
	I0416 00:30:38.375806   44065 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0416 00:30:38.375812   44065 command_runner.go:130] > # 	"containers_oom_total",
	I0416 00:30:38.375817   44065 command_runner.go:130] > # 	"containers_oom",
	I0416 00:30:38.375826   44065 command_runner.go:130] > # 	"processes_defunct",
	I0416 00:30:38.375832   44065 command_runner.go:130] > # 	"operations_total",
	I0416 00:30:38.375836   44065 command_runner.go:130] > # 	"operations_latency_seconds",
	I0416 00:30:38.375841   44065 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0416 00:30:38.375847   44065 command_runner.go:130] > # 	"operations_errors_total",
	I0416 00:30:38.375851   44065 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0416 00:30:38.375855   44065 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0416 00:30:38.375861   44065 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0416 00:30:38.375866   44065 command_runner.go:130] > # 	"image_pulls_success_total",
	I0416 00:30:38.375877   44065 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0416 00:30:38.375884   44065 command_runner.go:130] > # 	"containers_oom_count_total",
	I0416 00:30:38.375889   44065 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0416 00:30:38.375895   44065 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0416 00:30:38.375898   44065 command_runner.go:130] > # ]
	I0416 00:30:38.375903   44065 command_runner.go:130] > # The port on which the metrics server will listen.
	I0416 00:30:38.375909   44065 command_runner.go:130] > # metrics_port = 9090
	I0416 00:30:38.375914   44065 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0416 00:30:38.375920   44065 command_runner.go:130] > # metrics_socket = ""
	I0416 00:30:38.375925   44065 command_runner.go:130] > # The certificate for the secure metrics server.
	I0416 00:30:38.375932   44065 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0416 00:30:38.375938   44065 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0416 00:30:38.375945   44065 command_runner.go:130] > # certificate on any modification event.
	I0416 00:30:38.375948   44065 command_runner.go:130] > # metrics_cert = ""
	I0416 00:30:38.375956   44065 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0416 00:30:38.375960   44065 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0416 00:30:38.375966   44065 command_runner.go:130] > # metrics_key = ""
	I0416 00:30:38.375972   44065 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0416 00:30:38.375976   44065 command_runner.go:130] > [crio.tracing]
	I0416 00:30:38.375982   44065 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0416 00:30:38.375988   44065 command_runner.go:130] > # enable_tracing = false
	I0416 00:30:38.375993   44065 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0416 00:30:38.375998   44065 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0416 00:30:38.376005   44065 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0416 00:30:38.376012   44065 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0416 00:30:38.376015   44065 command_runner.go:130] > # CRI-O NRI configuration.
	I0416 00:30:38.376022   44065 command_runner.go:130] > [crio.nri]
	I0416 00:30:38.376034   44065 command_runner.go:130] > # Globally enable or disable NRI.
	I0416 00:30:38.376045   44065 command_runner.go:130] > # enable_nri = false
	I0416 00:30:38.376051   44065 command_runner.go:130] > # NRI socket to listen on.
	I0416 00:30:38.376060   44065 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0416 00:30:38.376064   44065 command_runner.go:130] > # NRI plugin directory to use.
	I0416 00:30:38.376071   44065 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0416 00:30:38.376076   44065 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0416 00:30:38.376083   44065 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0416 00:30:38.376088   44065 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0416 00:30:38.376095   44065 command_runner.go:130] > # nri_disable_connections = false
	I0416 00:30:38.376100   44065 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0416 00:30:38.376106   44065 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0416 00:30:38.376111   44065 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0416 00:30:38.376117   44065 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0416 00:30:38.376122   44065 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0416 00:30:38.376129   44065 command_runner.go:130] > [crio.stats]
	I0416 00:30:38.376134   44065 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0416 00:30:38.376142   44065 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0416 00:30:38.376146   44065 command_runner.go:130] > # stats_collection_period = 0
	I0416 00:30:38.376273   44065 cni.go:84] Creating CNI manager for ""
	I0416 00:30:38.376287   44065 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0416 00:30:38.376298   44065 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 00:30:38.376318   44065 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.140 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-414194 NodeName:multinode-414194 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.140"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.140 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 00:30:38.376440   44065 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.140
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-414194"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.140
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.140"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
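The generated kubeadm config above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is written to /var/tmp/minikube/kubeadm.yaml.new in the scp step below. A minimal Go sketch, not minikube's actual code, that decodes each document and prints its apiVersion and kind:

package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3" // assumed YAML decoder
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		// Each Decode call consumes the next "---"-separated document.
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}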
	
	I0416 00:30:38.376507   44065 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0416 00:30:38.386620   44065 command_runner.go:130] > kubeadm
	I0416 00:30:38.386635   44065 command_runner.go:130] > kubectl
	I0416 00:30:38.386639   44065 command_runner.go:130] > kubelet
	I0416 00:30:38.386767   44065 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 00:30:38.386835   44065 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0416 00:30:38.396229   44065 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0416 00:30:38.414464   44065 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 00:30:38.432702   44065 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0416 00:30:38.450278   44065 ssh_runner.go:195] Run: grep 192.168.39.140	control-plane.minikube.internal$ /etc/hosts
	I0416 00:30:38.454389   44065 command_runner.go:130] > 192.168.39.140	control-plane.minikube.internal
	I0416 00:30:38.454444   44065 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 00:30:38.620821   44065 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 00:30:38.674666   44065 certs.go:68] Setting up /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/multinode-414194 for IP: 192.168.39.140
	I0416 00:30:38.674692   44065 certs.go:194] generating shared ca certs ...
	I0416 00:30:38.674713   44065 certs.go:226] acquiring lock for ca certs: {Name:mkcfa1570e683d94647c63485e1bbb8cf0788316 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 00:30:38.674896   44065 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key
	I0416 00:30:38.674957   44065 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key
	I0416 00:30:38.674973   44065 certs.go:256] generating profile certs ...
	I0416 00:30:38.675084   44065 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/multinode-414194/client.key
	I0416 00:30:38.675158   44065 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/multinode-414194/apiserver.key.94aff35d
	I0416 00:30:38.675216   44065 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/multinode-414194/proxy-client.key
	I0416 00:30:38.675232   44065 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0416 00:30:38.675250   44065 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0416 00:30:38.675269   44065 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0416 00:30:38.675287   44065 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0416 00:30:38.675308   44065 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/multinode-414194/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0416 00:30:38.675328   44065 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/multinode-414194/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0416 00:30:38.675346   44065 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/multinode-414194/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0416 00:30:38.675366   44065 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/multinode-414194/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0416 00:30:38.675430   44065 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem (1338 bytes)
	W0416 00:30:38.675471   44065 certs.go:480] ignoring /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897_empty.pem, impossibly tiny 0 bytes
	I0416 00:30:38.675487   44065 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem (1679 bytes)
	I0416 00:30:38.675523   44065 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem (1082 bytes)
	I0416 00:30:38.675570   44065 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem (1123 bytes)
	I0416 00:30:38.675603   44065 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem (1675 bytes)
	I0416 00:30:38.675664   44065 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem (1708 bytes)
	I0416 00:30:38.675710   44065 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0416 00:30:38.675732   44065 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem -> /usr/share/ca-certificates/14897.pem
	I0416 00:30:38.675753   44065 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem -> /usr/share/ca-certificates/148972.pem
	I0416 00:30:38.676828   44065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 00:30:38.821905   44065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 00:30:38.937751   44065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 00:30:39.083031   44065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0416 00:30:39.301749   44065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/multinode-414194/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0416 00:30:39.341550   44065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/multinode-414194/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0416 00:30:39.556935   44065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/multinode-414194/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 00:30:39.697846   44065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/multinode-414194/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0416 00:30:39.808528   44065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 00:30:39.860192   44065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem --> /usr/share/ca-certificates/14897.pem (1338 bytes)
	I0416 00:30:39.897801   44065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /usr/share/ca-certificates/148972.pem (1708 bytes)
	I0416 00:30:39.931139   44065 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 00:30:39.955270   44065 ssh_runner.go:195] Run: openssl version
	I0416 00:30:39.961275   44065 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0416 00:30:39.961640   44065 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14897.pem && ln -fs /usr/share/ca-certificates/14897.pem /etc/ssl/certs/14897.pem"
	I0416 00:30:39.979087   44065 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14897.pem
	I0416 00:30:39.984030   44065 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 15 23:49 /usr/share/ca-certificates/14897.pem
	I0416 00:30:39.984408   44065 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 23:49 /usr/share/ca-certificates/14897.pem
	I0416 00:30:39.984481   44065 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14897.pem
	I0416 00:30:39.992982   44065 command_runner.go:130] > 51391683
	I0416 00:30:39.993245   44065 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14897.pem /etc/ssl/certs/51391683.0"
	I0416 00:30:40.005200   44065 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148972.pem && ln -fs /usr/share/ca-certificates/148972.pem /etc/ssl/certs/148972.pem"
	I0416 00:30:40.023527   44065 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148972.pem
	I0416 00:30:40.029113   44065 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 15 23:49 /usr/share/ca-certificates/148972.pem
	I0416 00:30:40.029211   44065 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 23:49 /usr/share/ca-certificates/148972.pem
	I0416 00:30:40.029275   44065 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148972.pem
	I0416 00:30:40.035805   44065 command_runner.go:130] > 3ec20f2e
	I0416 00:30:40.035925   44065 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148972.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 00:30:40.052261   44065 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 00:30:40.066328   44065 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 00:30:40.071602   44065 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 15 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0416 00:30:40.071851   44065 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0416 00:30:40.071914   44065 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 00:30:40.080041   44065 command_runner.go:130] > b5213941
	I0416 00:30:40.080542   44065 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
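Each CA certificate above is made available to OpenSSL by symlinking it under /etc/ssl/certs using its subject hash (the value printed by "openssl x509 -hash -noout", e.g. b5213941) plus a ".0" suffix. A minimal Go sketch of the "test -s ... && ln -fs ..." step, assuming the hash-based link name has already been computed; this is not minikube's actual implementation:

package main

import (
	"log"
	"os"
)

// linkCert mirrors "test -s src && ln -fs src dst": skip empty files,
// force-replace any existing link, then create the symlink.
func linkCert(src, dst string) error {
	info, err := os.Stat(src)
	if err != nil {
		return err
	}
	if info.Size() == 0 {
		log.Printf("skipping empty certificate %s", src)
		return nil
	}
	_ = os.Remove(dst) // ignore "does not exist" errors, like ln -f
	return os.Symlink(src, dst)
}

func main() {
	// Hash-named destination taken from the log above (b5213941.0 -> minikubeCA.pem).
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs/b5213941.0"); err != nil {
		log.Fatal(err)
	}
}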
	I0416 00:30:40.095705   44065 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 00:30:40.100499   44065 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 00:30:40.100520   44065 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0416 00:30:40.100526   44065 command_runner.go:130] > Device: 253,1	Inode: 6292486     Links: 1
	I0416 00:30:40.100531   44065 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0416 00:30:40.100538   44065 command_runner.go:130] > Access: 2024-04-16 00:23:56.560356475 +0000
	I0416 00:30:40.100543   44065 command_runner.go:130] > Modify: 2024-04-16 00:23:56.560356475 +0000
	I0416 00:30:40.100548   44065 command_runner.go:130] > Change: 2024-04-16 00:23:56.560356475 +0000
	I0416 00:30:40.100553   44065 command_runner.go:130] >  Birth: 2024-04-16 00:23:56.560356475 +0000
	I0416 00:30:40.100708   44065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0416 00:30:40.108744   44065 command_runner.go:130] > Certificate will not expire
	I0416 00:30:40.108798   44065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0416 00:30:40.115937   44065 command_runner.go:130] > Certificate will not expire
	I0416 00:30:40.116230   44065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0416 00:30:40.123873   44065 command_runner.go:130] > Certificate will not expire
	I0416 00:30:40.124034   44065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0416 00:30:40.133196   44065 command_runner.go:130] > Certificate will not expire
	I0416 00:30:40.133414   44065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0416 00:30:40.146774   44065 command_runner.go:130] > Certificate will not expire
	I0416 00:30:40.146865   44065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0416 00:30:40.153213   44065 command_runner.go:130] > Certificate will not expire
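The "openssl x509 -noout ... -checkend 86400" invocations above ask whether each certificate expires within the next 24 hours. A minimal, standard-library-only Go sketch of the same check (not minikube's actual code; the path is one of the certificates checked above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent of -checkend 86400: does NotAfter fall within the next 24h?
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}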
	I0416 00:30:40.153483   44065 kubeadm.go:391] StartCluster: {Name:multinode-414194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.
3 ClusterName:multinode-414194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.140 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.81 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.64 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fa
lse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 00:30:40.153629   44065 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0416 00:30:40.153703   44065 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 00:30:40.227844   44065 command_runner.go:130] > 0f17e92a363098fdeb203b5b88690b26d20ee0163b8c3b5931987e38331d6042
	I0416 00:30:40.227951   44065 command_runner.go:130] > cb28a7e68259cdb451f72bd51f7d647ab66e3b85ce31278e929148b4158bd23e
	I0416 00:30:40.228087   44065 command_runner.go:130] > 6c2cb5115dbfa806690a4c862b1d19a3e52fd71b6eb9e12beb5fb9dc63eaeb79
	I0416 00:30:40.228124   44065 command_runner.go:130] > 707d02997c5056b67d64184224cc9001aa6230ed5898761df3a5558f6da469b1
	I0416 00:30:40.228154   44065 command_runner.go:130] > c97fdfc017a2bc300f5a99f131f6ef95456d6c86023e424eff41cb8b65b77feb
	I0416 00:30:40.228245   44065 command_runner.go:130] > 5f7dc8a7b1688773e7beca7f7620ad25f1d7b0d0535e159594000283c4a92837
	I0416 00:30:40.228361   44065 command_runner.go:130] > d9a2b19294670872a4b4a12394a771b768c57888a157565dea93eb7cd78cebc2
	I0416 00:30:40.228474   44065 command_runner.go:130] > 9a7bc656135f0c8096e1b56fc42acf4f40bb68637358951d60e659d0460de027
	I0416 00:30:40.228621   44065 command_runner.go:130] > a2a1ac389d671ed6ed0bd3f9b99a93dd309a8a21ebd4aa3440f174b176391d24
	I0416 00:30:40.228640   44065 command_runner.go:130] > 16838f443cd34bb9609f80481078d18c59c1f868678f0f0d8d9a1e797a6d1c66
	I0416 00:30:40.228714   44065 command_runner.go:130] > 0e533992dbeeaa8b0a1310ebfd164115d6900369ae6f23f29a9c56bc79d8d3d2
	I0416 00:30:40.228775   44065 command_runner.go:130] > c5743d4076ffb9eb6579c059bfc0cea6f0d15c748843479fb19531a1f04b02a9
	I0416 00:30:40.228842   44065 command_runner.go:130] > f18d5f50d24d1c24cddbaa0a6de3faa8924dcb73302b59d862d487885e7e5cef
	I0416 00:30:40.228907   44065 command_runner.go:130] > 723a4dcdedcb2a36bdf5fc563d509e24ff5f25b28056bc8e19253f1fa6a5c380
	I0416 00:30:40.230530   44065 cri.go:89] found id: "0f17e92a363098fdeb203b5b88690b26d20ee0163b8c3b5931987e38331d6042"
	I0416 00:30:40.230548   44065 cri.go:89] found id: "cb28a7e68259cdb451f72bd51f7d647ab66e3b85ce31278e929148b4158bd23e"
	I0416 00:30:40.230554   44065 cri.go:89] found id: "6c2cb5115dbfa806690a4c862b1d19a3e52fd71b6eb9e12beb5fb9dc63eaeb79"
	I0416 00:30:40.230558   44065 cri.go:89] found id: "707d02997c5056b67d64184224cc9001aa6230ed5898761df3a5558f6da469b1"
	I0416 00:30:40.230562   44065 cri.go:89] found id: "c97fdfc017a2bc300f5a99f131f6ef95456d6c86023e424eff41cb8b65b77feb"
	I0416 00:30:40.230567   44065 cri.go:89] found id: "5f7dc8a7b1688773e7beca7f7620ad25f1d7b0d0535e159594000283c4a92837"
	I0416 00:30:40.230569   44065 cri.go:89] found id: "d9a2b19294670872a4b4a12394a771b768c57888a157565dea93eb7cd78cebc2"
	I0416 00:30:40.230572   44065 cri.go:89] found id: "9a7bc656135f0c8096e1b56fc42acf4f40bb68637358951d60e659d0460de027"
	I0416 00:30:40.230574   44065 cri.go:89] found id: "a2a1ac389d671ed6ed0bd3f9b99a93dd309a8a21ebd4aa3440f174b176391d24"
	I0416 00:30:40.230580   44065 cri.go:89] found id: "16838f443cd34bb9609f80481078d18c59c1f868678f0f0d8d9a1e797a6d1c66"
	I0416 00:30:40.230583   44065 cri.go:89] found id: "0e533992dbeeaa8b0a1310ebfd164115d6900369ae6f23f29a9c56bc79d8d3d2"
	I0416 00:30:40.230585   44065 cri.go:89] found id: "c5743d4076ffb9eb6579c059bfc0cea6f0d15c748843479fb19531a1f04b02a9"
	I0416 00:30:40.230587   44065 cri.go:89] found id: "f18d5f50d24d1c24cddbaa0a6de3faa8924dcb73302b59d862d487885e7e5cef"
	I0416 00:30:40.230590   44065 cri.go:89] found id: "723a4dcdedcb2a36bdf5fc563d509e24ff5f25b28056bc8e19253f1fa6a5c380"
	I0416 00:30:40.230596   44065 cri.go:89] found id: ""
	I0416 00:30:40.230642   44065 ssh_runner.go:195] Run: sudo runc list -f json
	I0416 00:30:40.251059   44065 command_runner.go:130] ! load container 1199477a5e1bde603f9196d87e5b3e814fd696a077091ca14528388a47c54a86: container does not exist
	I0416 00:30:40.256302   44065 command_runner.go:130] ! load container 43de4700108889fe0c8ba2929dcaaffb699c6a202ab55b2d2f1eb2f6c8113b6c: container does not exist
	
	
	==> CRI-O <==
	Apr 16 00:32:34 multinode-414194 crio[2930]: time="2024-04-16 00:32:34.001526401Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713227554001499870,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130111,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=60a4be1d-276b-4f00-8be1-15338c69ca69 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 00:32:34 multinode-414194 crio[2930]: time="2024-04-16 00:32:34.002199610Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=22100299-e96f-4f5e-adb8-3e1596e4a7aa name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:32:34 multinode-414194 crio[2930]: time="2024-04-16 00:32:34.002261859Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=22100299-e96f-4f5e-adb8-3e1596e4a7aa name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:32:34 multinode-414194 crio[2930]: time="2024-04-16 00:32:34.002639493Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d6ef352c32da886c4e9c9c5747fc9da3c5a81e603fcb3df68f6aa02300642476,PodSandboxId:b2689a6dd8047385bd87c5a6320af5a073e0afdb3b1208044d439fbbe75d2ef2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713227491267626158,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85c46886-a03a-43cd-a9bd-3ce8ea51f3ed,},Annotations:map[string]string{io.kubernetes.container.hash: 4312fd47,io.kubernetes.container.restartCount: 4,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f2f88200d8877162cd5050d66172db422c8244e5ad841c7f87c4e6a9ff1d29b,PodSandboxId:fdb3b4294a4123ab28e7b284031daa4b10b98cad5ccf6fb0c11807cc42f0ffc5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713227476454420131,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-sgkx5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 00b5fef9-7a2b-4e54-bda6-b721112d5496,},Annotations:map[string]string{io.kubernetes.container.hash: d2a6a9d1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08ecdae904548f8af39104f23afa280987370e7c6de4405146fe74d4adc8ea2e,PodSandboxId:c57a4cde2125e1f4910e44b8e72f5386d73c63e8bb054f8880b9ef6e4f247aa8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713227472562730811,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pd9pv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcdbff0d-43d4-45f6-81e8-cbe13209d1a6,},Annotations:map[string]string{io.kubernetes.container.hash: 5fd9d0de,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2737380ae428288236a91d128cf8678a564ea5e5c92710aef92689fdb263dae0,PodSandboxId:41e9aff954221cb3bd05fe62c62e21e55c17f283ff35c51058f03fd3ddf0256e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713227472594525235,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rb5mm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4942567c-fbdf-4a3e-9b78-6ca67f7401c4,},Annotations:map[string]string{io.kubernetes.container.hash: d89b44bf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},
{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d245241ef5b83c185c6d37a3b77ec510f66d9cbfc8b3ee28ab93535d56219874,PodSandboxId:b2689a6dd8047385bd87c5a6320af5a073e0afdb3b1208044d439fbbe75d2ef2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713227472579875266,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85c46886-a03a-43cd-a9bd-3ce8ea51f3ed,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 4312fd47,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8c4663335b279485a3ebb281532e482a104e080a9284d76a648c00269233cbb,PodSandboxId:f66a04b4ba4d9655e27ad60cba19d92374399e9f7f8b3bc2074387f858473d2e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713227468896229972,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1264c74175197ca6cd421c033473ff23,},Annotations:map[string]strin
g{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0eb348f36dc509e79515a270e877a7510e598d118a29cb47ef41895788b779c8,PodSandboxId:2bf3ff189bfb483f8f1846af0e06eca5ae337196231ce061c678e23b1a30dc0e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713227468893648820,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4de5ba43fcd25edd8391f4b2e93c4b09,},Annotations:map
[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f865fc38c7e05fae5207cd27288c53bd9b1f203382b0f691dc6e05c6b7b3ab17,PodSandboxId:70a7acf6626dd5c8242ba73fd5650c263b80507bc5368214d5f5488aaba486a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713227468878313700,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ae75e3b41ddee4e55353a2f6260637,},Annotations:map[string]string
{io.kubernetes.container.hash: 1bf99c3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:250f4d2b523d67aa310faaa736ac218abede0483f0090eec335a62d5d91e8010,PodSandboxId:e78ae9dc879b0a4e92faf3727d6a9a9fb6ff95a4ced62c0a82cd9e34aaa232b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713227468866033920,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26d136b84a5ace080d7204bad5b555f4,},Annotations:map[string]string{io.kubernetes.container.hash: 37eb4e31,io.k
ubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0ff4e26cfc61b8d5b0260e9eb83770cbc91236f8244f5ae0b56390d752d1241,PodSandboxId:35e4d91aed1d325efb701a63d12ea24ef5c1b1c236a5f73eec20549bd1e431de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713227462940440316,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pkn5q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7877a83-35ca-4241-a233-283ab4a3e4ae,},Annotations:map[string]string{io.kubernetes.container.hash: e10c48ce,io.kubernetes.container.restartCount:
2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1199477a5e1bde603f9196d87e5b3e814fd696a077091ca14528388a47c54a86,PodSandboxId:35e4d91aed1d325efb701a63d12ea24ef5c1b1c236a5f73eec20549bd1e431de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1713227439754387424,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pkn5q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7877a83-35ca-4241-a233-283ab4a3e4ae,},Annotations:map[string]string{io.kubernetes.container.hash: e10c48ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f17e92a363098fdeb203b5b88690b26d20ee0163b8c3b5931987e38331d6042,PodSandboxId:41e9aff954221cb3bd05fe62c62e21e55c17f283ff35c51058f03fd3ddf0256e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713227439405984513,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rb5mm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4942567c-fbdf-4a3e-9b78-6ca67f7401c4,},Annotations:map[string]string{io.kubernetes.container.hash: d89b44bf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":
\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb28a7e68259cdb451f72bd51f7d647ab66e3b85ce31278e929148b4158bd23e,PodSandboxId:c57a4cde2125e1f4910e44b8e72f5386d73c63e8bb054f8880b9ef6e4f247aa8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713227439300174812,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pd9pv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcdbff0d-43d4-45f6-8
1e8-cbe13209d1a6,},Annotations:map[string]string{io.kubernetes.container.hash: 5fd9d0de,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c2cb5115dbfa806690a4c862b1d19a3e52fd71b6eb9e12beb5fb9dc63eaeb79,PodSandboxId:70a7acf6626dd5c8242ba73fd5650c263b80507bc5368214d5f5488aaba486a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1713227439184602672,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ae75e3b41ddee4e55353a2f626063
7,},Annotations:map[string]string{io.kubernetes.container.hash: 1bf99c3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:707d02997c5056b67d64184224cc9001aa6230ed5898761df3a5558f6da469b1,PodSandboxId:f66a04b4ba4d9655e27ad60cba19d92374399e9f7f8b3bc2074387f858473d2e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1713227439179994867,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1264c74175197ca6cd421c033473ff23,},Annotations
:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c97fdfc017a2bc300f5a99f131f6ef95456d6c86023e424eff41cb8b65b77feb,PodSandboxId:e78ae9dc879b0a4e92faf3727d6a9a9fb6ff95a4ced62c0a82cd9e34aaa232b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713227439101595370,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26d136b84a5ace080d7204bad5b555f4,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 37eb4e31,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f7dc8a7b1688773e7beca7f7620ad25f1d7b0d0535e159594000283c4a92837,PodSandboxId:2bf3ff189bfb483f8f1846af0e06eca5ae337196231ce061c678e23b1a30dc0e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1713227438920524155,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4de5ba43fcd25edd8391f4b2e93c4b09,},Annotations:map[string]string{io.kubernetes.
container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38a3f1cb13bdbfa0468c8ad861c695760fe525774e8f1fc5eb153b77f3b4e350,PodSandboxId:9384fbc0adcf7e21b22fd218db6be959af37dca2db42c105f595c0147322d1d3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713227138698384937,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-sgkx5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 00b5fef9-7a2b-4e54-bda6-b721112d5496,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d2a6a9d1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=22100299-e96f-4f5e-adb8-3e1596e4a7aa name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:32:34 multinode-414194 crio[2930]: time="2024-04-16 00:32:34.047215083Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d92203ff-12b6-45bf-a83f-fa9f3a1eedb8 name=/runtime.v1.RuntimeService/Version
	Apr 16 00:32:34 multinode-414194 crio[2930]: time="2024-04-16 00:32:34.047318058Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d92203ff-12b6-45bf-a83f-fa9f3a1eedb8 name=/runtime.v1.RuntimeService/Version
	Apr 16 00:32:34 multinode-414194 crio[2930]: time="2024-04-16 00:32:34.048698183Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=67b55152-dde1-4f44-bac8-f865b0c126a5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 00:32:34 multinode-414194 crio[2930]: time="2024-04-16 00:32:34.049458792Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713227554049432890,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130111,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=67b55152-dde1-4f44-bac8-f865b0c126a5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 00:32:34 multinode-414194 crio[2930]: time="2024-04-16 00:32:34.050072109Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=89741a49-02ed-4cf2-9e1e-05994ed1201a name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:32:34 multinode-414194 crio[2930]: time="2024-04-16 00:32:34.050135423Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=89741a49-02ed-4cf2-9e1e-05994ed1201a name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:32:34 multinode-414194 crio[2930]: time="2024-04-16 00:32:34.050752619Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d6ef352c32da886c4e9c9c5747fc9da3c5a81e603fcb3df68f6aa02300642476,PodSandboxId:b2689a6dd8047385bd87c5a6320af5a073e0afdb3b1208044d439fbbe75d2ef2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713227491267626158,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85c46886-a03a-43cd-a9bd-3ce8ea51f3ed,},Annotations:map[string]string{io.kubernetes.container.hash: 4312fd47,io.kubernetes.container.restartCount: 4,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f2f88200d8877162cd5050d66172db422c8244e5ad841c7f87c4e6a9ff1d29b,PodSandboxId:fdb3b4294a4123ab28e7b284031daa4b10b98cad5ccf6fb0c11807cc42f0ffc5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713227476454420131,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-sgkx5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 00b5fef9-7a2b-4e54-bda6-b721112d5496,},Annotations:map[string]string{io.kubernetes.container.hash: d2a6a9d1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08ecdae904548f8af39104f23afa280987370e7c6de4405146fe74d4adc8ea2e,PodSandboxId:c57a4cde2125e1f4910e44b8e72f5386d73c63e8bb054f8880b9ef6e4f247aa8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713227472562730811,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pd9pv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcdbff0d-43d4-45f6-81e8-cbe13209d1a6,},Annotations:map[string]string{io.kubernetes.container.hash: 5fd9d0de,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2737380ae428288236a91d128cf8678a564ea5e5c92710aef92689fdb263dae0,PodSandboxId:41e9aff954221cb3bd05fe62c62e21e55c17f283ff35c51058f03fd3ddf0256e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713227472594525235,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rb5mm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4942567c-fbdf-4a3e-9b78-6ca67f7401c4,},Annotations:map[string]string{io.kubernetes.container.hash: d89b44bf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},
{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d245241ef5b83c185c6d37a3b77ec510f66d9cbfc8b3ee28ab93535d56219874,PodSandboxId:b2689a6dd8047385bd87c5a6320af5a073e0afdb3b1208044d439fbbe75d2ef2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713227472579875266,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85c46886-a03a-43cd-a9bd-3ce8ea51f3ed,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 4312fd47,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8c4663335b279485a3ebb281532e482a104e080a9284d76a648c00269233cbb,PodSandboxId:f66a04b4ba4d9655e27ad60cba19d92374399e9f7f8b3bc2074387f858473d2e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713227468896229972,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1264c74175197ca6cd421c033473ff23,},Annotations:map[string]strin
g{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0eb348f36dc509e79515a270e877a7510e598d118a29cb47ef41895788b779c8,PodSandboxId:2bf3ff189bfb483f8f1846af0e06eca5ae337196231ce061c678e23b1a30dc0e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713227468893648820,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4de5ba43fcd25edd8391f4b2e93c4b09,},Annotations:map
[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f865fc38c7e05fae5207cd27288c53bd9b1f203382b0f691dc6e05c6b7b3ab17,PodSandboxId:70a7acf6626dd5c8242ba73fd5650c263b80507bc5368214d5f5488aaba486a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713227468878313700,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ae75e3b41ddee4e55353a2f6260637,},Annotations:map[string]string
{io.kubernetes.container.hash: 1bf99c3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:250f4d2b523d67aa310faaa736ac218abede0483f0090eec335a62d5d91e8010,PodSandboxId:e78ae9dc879b0a4e92faf3727d6a9a9fb6ff95a4ced62c0a82cd9e34aaa232b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713227468866033920,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26d136b84a5ace080d7204bad5b555f4,},Annotations:map[string]string{io.kubernetes.container.hash: 37eb4e31,io.k
ubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0ff4e26cfc61b8d5b0260e9eb83770cbc91236f8244f5ae0b56390d752d1241,PodSandboxId:35e4d91aed1d325efb701a63d12ea24ef5c1b1c236a5f73eec20549bd1e431de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713227462940440316,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pkn5q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7877a83-35ca-4241-a233-283ab4a3e4ae,},Annotations:map[string]string{io.kubernetes.container.hash: e10c48ce,io.kubernetes.container.restartCount:
2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1199477a5e1bde603f9196d87e5b3e814fd696a077091ca14528388a47c54a86,PodSandboxId:35e4d91aed1d325efb701a63d12ea24ef5c1b1c236a5f73eec20549bd1e431de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1713227439754387424,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pkn5q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7877a83-35ca-4241-a233-283ab4a3e4ae,},Annotations:map[string]string{io.kubernetes.container.hash: e10c48ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f17e92a363098fdeb203b5b88690b26d20ee0163b8c3b5931987e38331d6042,PodSandboxId:41e9aff954221cb3bd05fe62c62e21e55c17f283ff35c51058f03fd3ddf0256e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713227439405984513,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rb5mm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4942567c-fbdf-4a3e-9b78-6ca67f7401c4,},Annotations:map[string]string{io.kubernetes.container.hash: d89b44bf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":
\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb28a7e68259cdb451f72bd51f7d647ab66e3b85ce31278e929148b4158bd23e,PodSandboxId:c57a4cde2125e1f4910e44b8e72f5386d73c63e8bb054f8880b9ef6e4f247aa8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713227439300174812,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pd9pv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcdbff0d-43d4-45f6-8
1e8-cbe13209d1a6,},Annotations:map[string]string{io.kubernetes.container.hash: 5fd9d0de,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c2cb5115dbfa806690a4c862b1d19a3e52fd71b6eb9e12beb5fb9dc63eaeb79,PodSandboxId:70a7acf6626dd5c8242ba73fd5650c263b80507bc5368214d5f5488aaba486a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1713227439184602672,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ae75e3b41ddee4e55353a2f626063
7,},Annotations:map[string]string{io.kubernetes.container.hash: 1bf99c3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:707d02997c5056b67d64184224cc9001aa6230ed5898761df3a5558f6da469b1,PodSandboxId:f66a04b4ba4d9655e27ad60cba19d92374399e9f7f8b3bc2074387f858473d2e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1713227439179994867,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1264c74175197ca6cd421c033473ff23,},Annotations
:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c97fdfc017a2bc300f5a99f131f6ef95456d6c86023e424eff41cb8b65b77feb,PodSandboxId:e78ae9dc879b0a4e92faf3727d6a9a9fb6ff95a4ced62c0a82cd9e34aaa232b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713227439101595370,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26d136b84a5ace080d7204bad5b555f4,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 37eb4e31,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f7dc8a7b1688773e7beca7f7620ad25f1d7b0d0535e159594000283c4a92837,PodSandboxId:2bf3ff189bfb483f8f1846af0e06eca5ae337196231ce061c678e23b1a30dc0e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1713227438920524155,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4de5ba43fcd25edd8391f4b2e93c4b09,},Annotations:map[string]string{io.kubernetes.
container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38a3f1cb13bdbfa0468c8ad861c695760fe525774e8f1fc5eb153b77f3b4e350,PodSandboxId:9384fbc0adcf7e21b22fd218db6be959af37dca2db42c105f595c0147322d1d3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713227138698384937,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-sgkx5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 00b5fef9-7a2b-4e54-bda6-b721112d5496,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d2a6a9d1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=89741a49-02ed-4cf2-9e1e-05994ed1201a name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:32:34 multinode-414194 crio[2930]: time="2024-04-16 00:32:34.093444266Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f8cff7e2-d92b-4a11-94b7-c48d69e6b106 name=/runtime.v1.RuntimeService/Version
	Apr 16 00:32:34 multinode-414194 crio[2930]: time="2024-04-16 00:32:34.093518201Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f8cff7e2-d92b-4a11-94b7-c48d69e6b106 name=/runtime.v1.RuntimeService/Version
	Apr 16 00:32:34 multinode-414194 crio[2930]: time="2024-04-16 00:32:34.095307291Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6262e996-d43b-422a-9c93-13072117ebde name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 00:32:34 multinode-414194 crio[2930]: time="2024-04-16 00:32:34.095683659Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713227554095664716,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130111,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6262e996-d43b-422a-9c93-13072117ebde name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 00:32:34 multinode-414194 crio[2930]: time="2024-04-16 00:32:34.096266924Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0eecca63-4662-44f5-be80-5624e746a24b name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:32:34 multinode-414194 crio[2930]: time="2024-04-16 00:32:34.096318038Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0eecca63-4662-44f5-be80-5624e746a24b name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:32:34 multinode-414194 crio[2930]: time="2024-04-16 00:32:34.096678207Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d6ef352c32da886c4e9c9c5747fc9da3c5a81e603fcb3df68f6aa02300642476,PodSandboxId:b2689a6dd8047385bd87c5a6320af5a073e0afdb3b1208044d439fbbe75d2ef2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713227491267626158,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85c46886-a03a-43cd-a9bd-3ce8ea51f3ed,},Annotations:map[string]string{io.kubernetes.container.hash: 4312fd47,io.kubernetes.container.restartCount: 4,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f2f88200d8877162cd5050d66172db422c8244e5ad841c7f87c4e6a9ff1d29b,PodSandboxId:fdb3b4294a4123ab28e7b284031daa4b10b98cad5ccf6fb0c11807cc42f0ffc5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713227476454420131,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-sgkx5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 00b5fef9-7a2b-4e54-bda6-b721112d5496,},Annotations:map[string]string{io.kubernetes.container.hash: d2a6a9d1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08ecdae904548f8af39104f23afa280987370e7c6de4405146fe74d4adc8ea2e,PodSandboxId:c57a4cde2125e1f4910e44b8e72f5386d73c63e8bb054f8880b9ef6e4f247aa8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713227472562730811,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pd9pv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcdbff0d-43d4-45f6-81e8-cbe13209d1a6,},Annotations:map[string]string{io.kubernetes.container.hash: 5fd9d0de,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2737380ae428288236a91d128cf8678a564ea5e5c92710aef92689fdb263dae0,PodSandboxId:41e9aff954221cb3bd05fe62c62e21e55c17f283ff35c51058f03fd3ddf0256e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713227472594525235,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rb5mm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4942567c-fbdf-4a3e-9b78-6ca67f7401c4,},Annotations:map[string]string{io.kubernetes.container.hash: d89b44bf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},
{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d245241ef5b83c185c6d37a3b77ec510f66d9cbfc8b3ee28ab93535d56219874,PodSandboxId:b2689a6dd8047385bd87c5a6320af5a073e0afdb3b1208044d439fbbe75d2ef2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713227472579875266,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85c46886-a03a-43cd-a9bd-3ce8ea51f3ed,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 4312fd47,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8c4663335b279485a3ebb281532e482a104e080a9284d76a648c00269233cbb,PodSandboxId:f66a04b4ba4d9655e27ad60cba19d92374399e9f7f8b3bc2074387f858473d2e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713227468896229972,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1264c74175197ca6cd421c033473ff23,},Annotations:map[string]strin
g{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0eb348f36dc509e79515a270e877a7510e598d118a29cb47ef41895788b779c8,PodSandboxId:2bf3ff189bfb483f8f1846af0e06eca5ae337196231ce061c678e23b1a30dc0e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713227468893648820,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4de5ba43fcd25edd8391f4b2e93c4b09,},Annotations:map
[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f865fc38c7e05fae5207cd27288c53bd9b1f203382b0f691dc6e05c6b7b3ab17,PodSandboxId:70a7acf6626dd5c8242ba73fd5650c263b80507bc5368214d5f5488aaba486a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713227468878313700,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ae75e3b41ddee4e55353a2f6260637,},Annotations:map[string]string
{io.kubernetes.container.hash: 1bf99c3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:250f4d2b523d67aa310faaa736ac218abede0483f0090eec335a62d5d91e8010,PodSandboxId:e78ae9dc879b0a4e92faf3727d6a9a9fb6ff95a4ced62c0a82cd9e34aaa232b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713227468866033920,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26d136b84a5ace080d7204bad5b555f4,},Annotations:map[string]string{io.kubernetes.container.hash: 37eb4e31,io.k
ubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0ff4e26cfc61b8d5b0260e9eb83770cbc91236f8244f5ae0b56390d752d1241,PodSandboxId:35e4d91aed1d325efb701a63d12ea24ef5c1b1c236a5f73eec20549bd1e431de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713227462940440316,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pkn5q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7877a83-35ca-4241-a233-283ab4a3e4ae,},Annotations:map[string]string{io.kubernetes.container.hash: e10c48ce,io.kubernetes.container.restartCount:
2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1199477a5e1bde603f9196d87e5b3e814fd696a077091ca14528388a47c54a86,PodSandboxId:35e4d91aed1d325efb701a63d12ea24ef5c1b1c236a5f73eec20549bd1e431de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1713227439754387424,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pkn5q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7877a83-35ca-4241-a233-283ab4a3e4ae,},Annotations:map[string]string{io.kubernetes.container.hash: e10c48ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f17e92a363098fdeb203b5b88690b26d20ee0163b8c3b5931987e38331d6042,PodSandboxId:41e9aff954221cb3bd05fe62c62e21e55c17f283ff35c51058f03fd3ddf0256e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713227439405984513,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rb5mm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4942567c-fbdf-4a3e-9b78-6ca67f7401c4,},Annotations:map[string]string{io.kubernetes.container.hash: d89b44bf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":
\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb28a7e68259cdb451f72bd51f7d647ab66e3b85ce31278e929148b4158bd23e,PodSandboxId:c57a4cde2125e1f4910e44b8e72f5386d73c63e8bb054f8880b9ef6e4f247aa8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713227439300174812,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pd9pv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcdbff0d-43d4-45f6-8
1e8-cbe13209d1a6,},Annotations:map[string]string{io.kubernetes.container.hash: 5fd9d0de,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c2cb5115dbfa806690a4c862b1d19a3e52fd71b6eb9e12beb5fb9dc63eaeb79,PodSandboxId:70a7acf6626dd5c8242ba73fd5650c263b80507bc5368214d5f5488aaba486a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1713227439184602672,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ae75e3b41ddee4e55353a2f626063
7,},Annotations:map[string]string{io.kubernetes.container.hash: 1bf99c3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:707d02997c5056b67d64184224cc9001aa6230ed5898761df3a5558f6da469b1,PodSandboxId:f66a04b4ba4d9655e27ad60cba19d92374399e9f7f8b3bc2074387f858473d2e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1713227439179994867,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1264c74175197ca6cd421c033473ff23,},Annotations
:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c97fdfc017a2bc300f5a99f131f6ef95456d6c86023e424eff41cb8b65b77feb,PodSandboxId:e78ae9dc879b0a4e92faf3727d6a9a9fb6ff95a4ced62c0a82cd9e34aaa232b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713227439101595370,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26d136b84a5ace080d7204bad5b555f4,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 37eb4e31,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f7dc8a7b1688773e7beca7f7620ad25f1d7b0d0535e159594000283c4a92837,PodSandboxId:2bf3ff189bfb483f8f1846af0e06eca5ae337196231ce061c678e23b1a30dc0e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1713227438920524155,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4de5ba43fcd25edd8391f4b2e93c4b09,},Annotations:map[string]string{io.kubernetes.
container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38a3f1cb13bdbfa0468c8ad861c695760fe525774e8f1fc5eb153b77f3b4e350,PodSandboxId:9384fbc0adcf7e21b22fd218db6be959af37dca2db42c105f595c0147322d1d3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713227138698384937,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-sgkx5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 00b5fef9-7a2b-4e54-bda6-b721112d5496,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d2a6a9d1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0eecca63-4662-44f5-be80-5624e746a24b name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:32:34 multinode-414194 crio[2930]: time="2024-04-16 00:32:34.144831202Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=445e0958-d07d-4bde-9aeb-61120b18c065 name=/runtime.v1.RuntimeService/Version
	Apr 16 00:32:34 multinode-414194 crio[2930]: time="2024-04-16 00:32:34.144966875Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=445e0958-d07d-4bde-9aeb-61120b18c065 name=/runtime.v1.RuntimeService/Version
	Apr 16 00:32:34 multinode-414194 crio[2930]: time="2024-04-16 00:32:34.147879715Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=08382ef5-2e99-42cd-863f-5a97c7b4e37f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 00:32:34 multinode-414194 crio[2930]: time="2024-04-16 00:32:34.148287492Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713227554148260645,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130111,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=08382ef5-2e99-42cd-863f-5a97c7b4e37f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 00:32:34 multinode-414194 crio[2930]: time="2024-04-16 00:32:34.149200396Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=41c76b71-743c-4fa0-a133-6497b257ab69 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:32:34 multinode-414194 crio[2930]: time="2024-04-16 00:32:34.149260748Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=41c76b71-743c-4fa0-a133-6497b257ab69 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:32:34 multinode-414194 crio[2930]: time="2024-04-16 00:32:34.149595038Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d6ef352c32da886c4e9c9c5747fc9da3c5a81e603fcb3df68f6aa02300642476,PodSandboxId:b2689a6dd8047385bd87c5a6320af5a073e0afdb3b1208044d439fbbe75d2ef2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713227491267626158,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85c46886-a03a-43cd-a9bd-3ce8ea51f3ed,},Annotations:map[string]string{io.kubernetes.container.hash: 4312fd47,io.kubernetes.container.restartCount: 4,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f2f88200d8877162cd5050d66172db422c8244e5ad841c7f87c4e6a9ff1d29b,PodSandboxId:fdb3b4294a4123ab28e7b284031daa4b10b98cad5ccf6fb0c11807cc42f0ffc5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713227476454420131,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-sgkx5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 00b5fef9-7a2b-4e54-bda6-b721112d5496,},Annotations:map[string]string{io.kubernetes.container.hash: d2a6a9d1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08ecdae904548f8af39104f23afa280987370e7c6de4405146fe74d4adc8ea2e,PodSandboxId:c57a4cde2125e1f4910e44b8e72f5386d73c63e8bb054f8880b9ef6e4f247aa8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713227472562730811,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pd9pv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcdbff0d-43d4-45f6-81e8-cbe13209d1a6,},Annotations:map[string]string{io.kubernetes.container.hash: 5fd9d0de,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2737380ae428288236a91d128cf8678a564ea5e5c92710aef92689fdb263dae0,PodSandboxId:41e9aff954221cb3bd05fe62c62e21e55c17f283ff35c51058f03fd3ddf0256e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713227472594525235,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rb5mm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4942567c-fbdf-4a3e-9b78-6ca67f7401c4,},Annotations:map[string]string{io.kubernetes.container.hash: d89b44bf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},
{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d245241ef5b83c185c6d37a3b77ec510f66d9cbfc8b3ee28ab93535d56219874,PodSandboxId:b2689a6dd8047385bd87c5a6320af5a073e0afdb3b1208044d439fbbe75d2ef2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713227472579875266,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85c46886-a03a-43cd-a9bd-3ce8ea51f3ed,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 4312fd47,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8c4663335b279485a3ebb281532e482a104e080a9284d76a648c00269233cbb,PodSandboxId:f66a04b4ba4d9655e27ad60cba19d92374399e9f7f8b3bc2074387f858473d2e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713227468896229972,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1264c74175197ca6cd421c033473ff23,},Annotations:map[string]strin
g{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0eb348f36dc509e79515a270e877a7510e598d118a29cb47ef41895788b779c8,PodSandboxId:2bf3ff189bfb483f8f1846af0e06eca5ae337196231ce061c678e23b1a30dc0e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713227468893648820,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4de5ba43fcd25edd8391f4b2e93c4b09,},Annotations:map
[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f865fc38c7e05fae5207cd27288c53bd9b1f203382b0f691dc6e05c6b7b3ab17,PodSandboxId:70a7acf6626dd5c8242ba73fd5650c263b80507bc5368214d5f5488aaba486a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713227468878313700,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ae75e3b41ddee4e55353a2f6260637,},Annotations:map[string]string
{io.kubernetes.container.hash: 1bf99c3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:250f4d2b523d67aa310faaa736ac218abede0483f0090eec335a62d5d91e8010,PodSandboxId:e78ae9dc879b0a4e92faf3727d6a9a9fb6ff95a4ced62c0a82cd9e34aaa232b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713227468866033920,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26d136b84a5ace080d7204bad5b555f4,},Annotations:map[string]string{io.kubernetes.container.hash: 37eb4e31,io.k
ubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0ff4e26cfc61b8d5b0260e9eb83770cbc91236f8244f5ae0b56390d752d1241,PodSandboxId:35e4d91aed1d325efb701a63d12ea24ef5c1b1c236a5f73eec20549bd1e431de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713227462940440316,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pkn5q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7877a83-35ca-4241-a233-283ab4a3e4ae,},Annotations:map[string]string{io.kubernetes.container.hash: e10c48ce,io.kubernetes.container.restartCount:
2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1199477a5e1bde603f9196d87e5b3e814fd696a077091ca14528388a47c54a86,PodSandboxId:35e4d91aed1d325efb701a63d12ea24ef5c1b1c236a5f73eec20549bd1e431de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1713227439754387424,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pkn5q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7877a83-35ca-4241-a233-283ab4a3e4ae,},Annotations:map[string]string{io.kubernetes.container.hash: e10c48ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f17e92a363098fdeb203b5b88690b26d20ee0163b8c3b5931987e38331d6042,PodSandboxId:41e9aff954221cb3bd05fe62c62e21e55c17f283ff35c51058f03fd3ddf0256e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713227439405984513,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rb5mm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4942567c-fbdf-4a3e-9b78-6ca67f7401c4,},Annotations:map[string]string{io.kubernetes.container.hash: d89b44bf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":
\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb28a7e68259cdb451f72bd51f7d647ab66e3b85ce31278e929148b4158bd23e,PodSandboxId:c57a4cde2125e1f4910e44b8e72f5386d73c63e8bb054f8880b9ef6e4f247aa8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713227439300174812,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pd9pv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcdbff0d-43d4-45f6-8
1e8-cbe13209d1a6,},Annotations:map[string]string{io.kubernetes.container.hash: 5fd9d0de,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c2cb5115dbfa806690a4c862b1d19a3e52fd71b6eb9e12beb5fb9dc63eaeb79,PodSandboxId:70a7acf6626dd5c8242ba73fd5650c263b80507bc5368214d5f5488aaba486a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1713227439184602672,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ae75e3b41ddee4e55353a2f626063
7,},Annotations:map[string]string{io.kubernetes.container.hash: 1bf99c3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:707d02997c5056b67d64184224cc9001aa6230ed5898761df3a5558f6da469b1,PodSandboxId:f66a04b4ba4d9655e27ad60cba19d92374399e9f7f8b3bc2074387f858473d2e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1713227439179994867,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1264c74175197ca6cd421c033473ff23,},Annotations
:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c97fdfc017a2bc300f5a99f131f6ef95456d6c86023e424eff41cb8b65b77feb,PodSandboxId:e78ae9dc879b0a4e92faf3727d6a9a9fb6ff95a4ced62c0a82cd9e34aaa232b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713227439101595370,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26d136b84a5ace080d7204bad5b555f4,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 37eb4e31,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f7dc8a7b1688773e7beca7f7620ad25f1d7b0d0535e159594000283c4a92837,PodSandboxId:2bf3ff189bfb483f8f1846af0e06eca5ae337196231ce061c678e23b1a30dc0e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1713227438920524155,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4de5ba43fcd25edd8391f4b2e93c4b09,},Annotations:map[string]string{io.kubernetes.
container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38a3f1cb13bdbfa0468c8ad861c695760fe525774e8f1fc5eb153b77f3b4e350,PodSandboxId:9384fbc0adcf7e21b22fd218db6be959af37dca2db42c105f595c0147322d1d3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713227138698384937,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-sgkx5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 00b5fef9-7a2b-4e54-bda6-b721112d5496,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d2a6a9d1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=41c76b71-743c-4fa0-a133-6497b257ab69 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	d6ef352c32da8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   b2689a6dd8047       storage-provisioner
	0f2f88200d887       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   fdb3b4294a412       busybox-7fdf7869d9-sgkx5
	2737380ae4282       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   2                   41e9aff954221       coredns-76f75df574-rb5mm
	d245241ef5b83       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Exited              storage-provisioner       3                   b2689a6dd8047       storage-provisioner
	08ecdae904548       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               2                   c57a4cde2125e       kindnet-pd9pv
	c8c4663335b27       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      About a minute ago   Running             kube-scheduler            2                   f66a04b4ba4d9       kube-scheduler-multinode-414194
	0eb348f36dc50       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      About a minute ago   Running             kube-controller-manager   2                   2bf3ff189bfb4       kube-controller-manager-multinode-414194
	f865fc38c7e05       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      About a minute ago   Running             kube-apiserver            2                   70a7acf6626dd       kube-apiserver-multinode-414194
	250f4d2b523d6       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      2                   e78ae9dc879b0       etcd-multinode-414194
	c0ff4e26cfc61       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      About a minute ago   Running             kube-proxy                2                   35e4d91aed1d3       kube-proxy-pkn5q
	1199477a5e1bd       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      About a minute ago   Exited              kube-proxy                1                   35e4d91aed1d3       kube-proxy-pkn5q
	0f17e92a36309       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Exited              coredns                   1                   41e9aff954221       coredns-76f75df574-rb5mm
	cb28a7e68259c       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Exited              kindnet-cni               1                   c57a4cde2125e       kindnet-pd9pv
	6c2cb5115dbfa       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      About a minute ago   Exited              kube-apiserver            1                   70a7acf6626dd       kube-apiserver-multinode-414194
	707d02997c505       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      About a minute ago   Exited              kube-scheduler            1                   f66a04b4ba4d9       kube-scheduler-multinode-414194
	c97fdfc017a2b       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Exited              etcd                      1                   e78ae9dc879b0       etcd-multinode-414194
	5f7dc8a7b1688       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      About a minute ago   Exited              kube-controller-manager   1                   2bf3ff189bfb4       kube-controller-manager-multinode-414194
	38a3f1cb13bdb       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   6 minutes ago        Exited              busybox                   0                   9384fbc0adcf7       busybox-7fdf7869d9-sgkx5
	
	
	==> coredns [0f17e92a363098fdeb203b5b88690b26d20ee0163b8c3b5931987e38331d6042] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:36259 - 39753 "HINFO IN 1965056522265378900.7264945599064591086. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010955615s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [2737380ae428288236a91d128cf8678a564ea5e5c92710aef92689fdb263dae0] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:40690 - 19952 "HINFO IN 8961242322719970751.7987005008998165502. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009987741s
	
	
	==> describe nodes <==
	Name:               multinode-414194
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-414194
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e98a202b00bb48daab8bbee2f0c7bcf1556f8388
	                    minikube.k8s.io/name=multinode-414194
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_16T00_24_07_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 00:24:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-414194
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 00:32:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 00:31:12 +0000   Tue, 16 Apr 2024 00:24:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 00:31:12 +0000   Tue, 16 Apr 2024 00:24:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 00:31:12 +0000   Tue, 16 Apr 2024 00:24:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 00:31:12 +0000   Tue, 16 Apr 2024 00:24:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.140
	  Hostname:    multinode-414194
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f2305a4e47b6400c858ef4918c3f5b61
	  System UUID:                f2305a4e-47b6-400c-858e-f4918c3f5b61
	  Boot ID:                    4e6f970c-ffda-4afc-9055-e66a25cd3b8c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-sgkx5                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m59s
	  kube-system                 coredns-76f75df574-rb5mm                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m15s
	  kube-system                 etcd-multinode-414194                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m27s
	  kube-system                 kindnet-pd9pv                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m15s
	  kube-system                 kube-apiserver-multinode-414194             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m27s
	  kube-system                 kube-controller-manager-multinode-414194    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m30s
	  kube-system                 kube-proxy-pkn5q                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m15s
	  kube-system                 kube-scheduler-multinode-414194             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m27s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 8m12s                  kube-proxy       
	  Normal   Starting                 82s                    kube-proxy       
	  Normal   Starting                 111s                   kube-proxy       
	  Normal   Starting                 8m34s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  8m34s (x8 over 8m34s)  kubelet          Node multinode-414194 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m34s (x8 over 8m34s)  kubelet          Node multinode-414194 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m34s (x7 over 8m34s)  kubelet          Node multinode-414194 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  8m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 8m28s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8m27s                  kubelet          Node multinode-414194 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m27s                  kubelet          Node multinode-414194 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m27s                  kubelet          Node multinode-414194 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           8m16s                  node-controller  Node multinode-414194 event: Registered Node multinode-414194 in Controller
	  Normal   NodeReady                7m43s                  kubelet          Node multinode-414194 status is now: NodeReady
	  Warning  ContainerGCFailed        2m28s (x2 over 3m28s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           99s                    node-controller  Node multinode-414194 event: Registered Node multinode-414194 in Controller
	  Normal   Starting                 86s                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  86s (x8 over 86s)      kubelet          Node multinode-414194 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    86s (x8 over 86s)      kubelet          Node multinode-414194 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     86s (x7 over 86s)      kubelet          Node multinode-414194 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  86s                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           69s                    node-controller  Node multinode-414194 event: Registered Node multinode-414194 in Controller
	
	
	Name:               multinode-414194-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-414194-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e98a202b00bb48daab8bbee2f0c7bcf1556f8388
	                    minikube.k8s.io/name=multinode-414194
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_16T00_31_55_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 00:31:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-414194-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 00:32:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 00:32:24 +0000   Tue, 16 Apr 2024 00:31:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 00:32:24 +0000   Tue, 16 Apr 2024 00:31:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 00:32:24 +0000   Tue, 16 Apr 2024 00:31:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 00:32:24 +0000   Tue, 16 Apr 2024 00:32:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.81
	  Hostname:    multinode-414194-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 768cfe7bcb95478abb083e0d24846f92
	  System UUID:                768cfe7b-cb95-478a-bb08-3e0d24846f92
	  Boot ID:                    462b099d-7b24-43c9-ac3e-cf002e7a4151
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-b9fgh    0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 kindnet-pcwvx               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m11s
	  kube-system                 kube-proxy-2qhl9            0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m5s                   kube-proxy  
	  Normal  Starting                 35s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m11s (x2 over 7m11s)  kubelet     Node multinode-414194-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m11s (x2 over 7m11s)  kubelet     Node multinode-414194-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m11s (x2 over 7m11s)  kubelet     Node multinode-414194-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m11s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m1s                   kubelet     Node multinode-414194-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  40s (x2 over 40s)      kubelet     Node multinode-414194-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s (x2 over 40s)      kubelet     Node multinode-414194-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s (x2 over 40s)      kubelet     Node multinode-414194-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                31s                    kubelet     Node multinode-414194-m02 status is now: NodeReady
	
	
	Name:               multinode-414194-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-414194-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e98a202b00bb48daab8bbee2f0c7bcf1556f8388
	                    minikube.k8s.io/name=multinode-414194
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_16T00_32_24_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 00:32:22 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-414194-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 00:32:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 00:32:30 +0000   Tue, 16 Apr 2024 00:32:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 00:32:30 +0000   Tue, 16 Apr 2024 00:32:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 00:32:30 +0000   Tue, 16 Apr 2024 00:32:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 00:32:30 +0000   Tue, 16 Apr 2024 00:32:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.64
	  Hostname:    multinode-414194-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 acefbb7be91c47e18a7cf64b4ecef0bf
	  System UUID:                acefbb7b-e91c-47e1-8a7c-f64b4ecef0bf
	  Boot ID:                    2364ca30-d184-40eb-b4f8-ce93e0c8cf8e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-9vrg8       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m25s
	  kube-system                 kube-proxy-65kpd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m19s                  kube-proxy  
	  Normal  Starting                 8s                     kube-proxy  
	  Normal  Starting                 5m39s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  6m25s (x2 over 6m25s)  kubelet     Node multinode-414194-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m25s (x2 over 6m25s)  kubelet     Node multinode-414194-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m25s (x2 over 6m25s)  kubelet     Node multinode-414194-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m25s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m15s                  kubelet     Node multinode-414194-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m44s (x2 over 5m44s)  kubelet     Node multinode-414194-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m44s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m44s (x2 over 5m44s)  kubelet     Node multinode-414194-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m44s (x2 over 5m44s)  kubelet     Node multinode-414194-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m35s                  kubelet     Node multinode-414194-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  12s (x2 over 12s)      kubelet     Node multinode-414194-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12s (x2 over 12s)      kubelet     Node multinode-414194-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12s (x2 over 12s)      kubelet     Node multinode-414194-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                4s                     kubelet     Node multinode-414194-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.183691] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.105896] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.272747] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +4.263584] systemd-fstab-generator[762]: Ignoring "noauto" option for root device
	[  +4.760405] systemd-fstab-generator[948]: Ignoring "noauto" option for root device
	[  +0.062292] kauditd_printk_skb: 158 callbacks suppressed
	[Apr16 00:24] systemd-fstab-generator[1289]: Ignoring "noauto" option for root device
	[  +0.075871] kauditd_printk_skb: 69 callbacks suppressed
	[ +13.011628] systemd-fstab-generator[1479]: Ignoring "noauto" option for root device
	[  +0.128979] kauditd_printk_skb: 21 callbacks suppressed
	[ +32.443726] kauditd_printk_skb: 60 callbacks suppressed
	[Apr16 00:25] kauditd_printk_skb: 12 callbacks suppressed
	[Apr16 00:30] systemd-fstab-generator[2780]: Ignoring "noauto" option for root device
	[  +0.145497] systemd-fstab-generator[2792]: Ignoring "noauto" option for root device
	[  +0.201270] systemd-fstab-generator[2823]: Ignoring "noauto" option for root device
	[  +0.183819] systemd-fstab-generator[2887]: Ignoring "noauto" option for root device
	[  +0.288437] systemd-fstab-generator[2915]: Ignoring "noauto" option for root device
	[  +0.763249] systemd-fstab-generator[3024]: Ignoring "noauto" option for root device
	[  +5.050916] kauditd_printk_skb: 207 callbacks suppressed
	[Apr16 00:31] systemd-fstab-generator[4185]: Ignoring "noauto" option for root device
	[  +0.088423] kauditd_printk_skb: 1 callbacks suppressed
	[  +5.305460] kauditd_printk_skb: 62 callbacks suppressed
	[ +12.332318] kauditd_printk_skb: 3 callbacks suppressed
	[  +2.822124] systemd-fstab-generator[4938]: Ignoring "noauto" option for root device
	[  +2.832292] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [250f4d2b523d67aa310faaa736ac218abede0483f0090eec335a62d5d91e8010] <==
	{"level":"info","ts":"2024-04-16T00:31:09.237559Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-16T00:31:09.237574Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-16T00:31:09.238192Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac switched to configuration voters=(15657868212029965228)"}
	{"level":"info","ts":"2024-04-16T00:31:09.238315Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e5cf977c4e262fb4","local-member-id":"d94bec2e0ded43ac","added-peer-id":"d94bec2e0ded43ac","added-peer-peer-urls":["https://192.168.39.140:2380"]}
	{"level":"info","ts":"2024-04-16T00:31:09.238421Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e5cf977c4e262fb4","local-member-id":"d94bec2e0ded43ac","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T00:31:09.238472Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T00:31:09.257123Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-16T00:31:09.25731Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"d94bec2e0ded43ac","initial-advertise-peer-urls":["https://192.168.39.140:2380"],"listen-peer-urls":["https://192.168.39.140:2380"],"advertise-client-urls":["https://192.168.39.140:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.140:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-16T00:31:09.257353Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-16T00:31:09.258887Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.140:2380"}
	{"level":"info","ts":"2024-04-16T00:31:09.258921Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.140:2380"}
	{"level":"info","ts":"2024-04-16T00:31:10.800987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac is starting a new election at term 3"}
	{"level":"info","ts":"2024-04-16T00:31:10.801078Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac became pre-candidate at term 3"}
	{"level":"info","ts":"2024-04-16T00:31:10.80114Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac received MsgPreVoteResp from d94bec2e0ded43ac at term 3"}
	{"level":"info","ts":"2024-04-16T00:31:10.801175Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac became candidate at term 4"}
	{"level":"info","ts":"2024-04-16T00:31:10.801186Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac received MsgVoteResp from d94bec2e0ded43ac at term 4"}
	{"level":"info","ts":"2024-04-16T00:31:10.801199Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac became leader at term 4"}
	{"level":"info","ts":"2024-04-16T00:31:10.801209Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d94bec2e0ded43ac elected leader d94bec2e0ded43ac at term 4"}
	{"level":"info","ts":"2024-04-16T00:31:10.80835Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-16T00:31:10.808946Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"d94bec2e0ded43ac","local-member-attributes":"{Name:multinode-414194 ClientURLs:[https://192.168.39.140:2379]}","request-path":"/0/members/d94bec2e0ded43ac/attributes","cluster-id":"e5cf977c4e262fb4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-16T00:31:10.809216Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-16T00:31:10.809492Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-16T00:31:10.809572Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-16T00:31:10.811912Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-16T00:31:10.811982Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.140:2379"}
	
	
	==> etcd [c97fdfc017a2bc300f5a99f131f6ef95456d6c86023e424eff41cb8b65b77feb] <==
	{"level":"info","ts":"2024-04-16T00:30:40.130655Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T00:30:41.724202Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-16T00:30:41.72424Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-16T00:30:41.724274Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac received MsgPreVoteResp from d94bec2e0ded43ac at term 2"}
	{"level":"info","ts":"2024-04-16T00:30:41.724287Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac became candidate at term 3"}
	{"level":"info","ts":"2024-04-16T00:30:41.724293Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac received MsgVoteResp from d94bec2e0ded43ac at term 3"}
	{"level":"info","ts":"2024-04-16T00:30:41.724315Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac became leader at term 3"}
	{"level":"info","ts":"2024-04-16T00:30:41.724326Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d94bec2e0ded43ac elected leader d94bec2e0ded43ac at term 3"}
	{"level":"info","ts":"2024-04-16T00:30:41.727611Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-16T00:30:41.727556Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"d94bec2e0ded43ac","local-member-attributes":"{Name:multinode-414194 ClientURLs:[https://192.168.39.140:2379]}","request-path":"/0/members/d94bec2e0ded43ac/attributes","cluster-id":"e5cf977c4e262fb4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-16T00:30:41.729039Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-16T00:30:41.729263Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-16T00:30:41.729276Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-16T00:30:41.72962Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.140:2379"}
	{"level":"info","ts":"2024-04-16T00:30:41.731444Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-16T00:31:05.975647Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-04-16T00:31:05.975721Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-414194","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.140:2380"],"advertise-client-urls":["https://192.168.39.140:2379"]}
	{"level":"warn","ts":"2024-04-16T00:31:05.975863Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.140:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-16T00:31:05.975904Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.140:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-16T00:31:05.975997Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-16T00:31:05.976005Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-16T00:31:05.977632Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"d94bec2e0ded43ac","current-leader-member-id":"d94bec2e0ded43ac"}
	{"level":"info","ts":"2024-04-16T00:31:05.981165Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.140:2380"}
	{"level":"info","ts":"2024-04-16T00:31:05.98134Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.140:2380"}
	{"level":"info","ts":"2024-04-16T00:31:05.981354Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-414194","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.140:2380"],"advertise-client-urls":["https://192.168.39.140:2379"]}
	
	
	==> kernel <==
	 00:32:34 up 9 min,  0 users,  load average: 0.40, 0.26, 0.14
	Linux multinode-414194 5.10.207 #1 SMP Mon Apr 15 15:01:07 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [08ecdae904548f8af39104f23afa280987370e7c6de4405146fe74d4adc8ea2e] <==
	I0416 00:31:54.363284       1 main.go:250] Node multinode-414194-m03 has CIDR [10.244.3.0/24] 
	I0416 00:32:04.368322       1 main.go:223] Handling node with IPs: map[192.168.39.140:{}]
	I0416 00:32:04.368368       1 main.go:227] handling current node
	I0416 00:32:04.368379       1 main.go:223] Handling node with IPs: map[192.168.39.81:{}]
	I0416 00:32:04.368385       1 main.go:250] Node multinode-414194-m02 has CIDR [10.244.1.0/24] 
	I0416 00:32:04.368530       1 main.go:223] Handling node with IPs: map[192.168.39.64:{}]
	I0416 00:32:04.368559       1 main.go:250] Node multinode-414194-m03 has CIDR [10.244.3.0/24] 
	I0416 00:32:14.373335       1 main.go:223] Handling node with IPs: map[192.168.39.140:{}]
	I0416 00:32:14.373706       1 main.go:227] handling current node
	I0416 00:32:14.373764       1 main.go:223] Handling node with IPs: map[192.168.39.81:{}]
	I0416 00:32:14.373845       1 main.go:250] Node multinode-414194-m02 has CIDR [10.244.1.0/24] 
	I0416 00:32:14.373981       1 main.go:223] Handling node with IPs: map[192.168.39.64:{}]
	I0416 00:32:14.374004       1 main.go:250] Node multinode-414194-m03 has CIDR [10.244.3.0/24] 
	I0416 00:32:24.387095       1 main.go:223] Handling node with IPs: map[192.168.39.140:{}]
	I0416 00:32:24.387214       1 main.go:227] handling current node
	I0416 00:32:24.387247       1 main.go:223] Handling node with IPs: map[192.168.39.81:{}]
	I0416 00:32:24.387334       1 main.go:250] Node multinode-414194-m02 has CIDR [10.244.1.0/24] 
	I0416 00:32:24.387497       1 main.go:223] Handling node with IPs: map[192.168.39.64:{}]
	I0416 00:32:24.387551       1 main.go:250] Node multinode-414194-m03 has CIDR [10.244.2.0/24] 
	I0416 00:32:34.401008       1 main.go:223] Handling node with IPs: map[192.168.39.140:{}]
	I0416 00:32:34.401033       1 main.go:227] handling current node
	I0416 00:32:34.401043       1 main.go:223] Handling node with IPs: map[192.168.39.81:{}]
	I0416 00:32:34.401048       1 main.go:250] Node multinode-414194-m02 has CIDR [10.244.1.0/24] 
	I0416 00:32:34.401212       1 main.go:223] Handling node with IPs: map[192.168.39.64:{}]
	I0416 00:32:34.401220       1 main.go:250] Node multinode-414194-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [cb28a7e68259cdb451f72bd51f7d647ab66e3b85ce31278e929148b4158bd23e] <==
	I0416 00:30:39.899271       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0416 00:30:39.899327       1 main.go:107] hostIP = 192.168.39.140
	podIP = 192.168.39.140
	I0416 00:30:39.899449       1 main.go:116] setting mtu 1500 for CNI 
	I0416 00:30:39.899465       1 main.go:146] kindnetd IP family: "ipv4"
	I0416 00:30:39.899480       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0416 00:30:43.246276       1 main.go:223] Handling node with IPs: map[192.168.39.140:{}]
	I0416 00:30:43.246328       1 main.go:227] handling current node
	I0416 00:30:43.248229       1 main.go:223] Handling node with IPs: map[192.168.39.81:{}]
	I0416 00:30:43.248337       1 main.go:250] Node multinode-414194-m02 has CIDR [10.244.1.0/24] 
	I0416 00:30:43.292713       1 main.go:223] Handling node with IPs: map[192.168.39.64:{}]
	I0416 00:30:43.292777       1 main.go:250] Node multinode-414194-m03 has CIDR [10.244.3.0/24] 
	I0416 00:30:53.300472       1 main.go:223] Handling node with IPs: map[192.168.39.140:{}]
	I0416 00:30:53.300562       1 main.go:227] handling current node
	I0416 00:30:53.300589       1 main.go:223] Handling node with IPs: map[192.168.39.81:{}]
	I0416 00:30:53.300607       1 main.go:250] Node multinode-414194-m02 has CIDR [10.244.1.0/24] 
	I0416 00:30:53.300734       1 main.go:223] Handling node with IPs: map[192.168.39.64:{}]
	I0416 00:30:53.300759       1 main.go:250] Node multinode-414194-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [6c2cb5115dbfa806690a4c862b1d19a3e52fd71b6eb9e12beb5fb9dc63eaeb79] <==
	I0416 00:30:55.776032       1 controller.go:115] Shutting down OpenAPI V3 controller
	I0416 00:30:55.776062       1 controller.go:161] Shutting down OpenAPI controller
	I0416 00:30:55.776071       1 crd_finalizer.go:278] Shutting down CRDFinalizer
	I0416 00:30:55.776085       1 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
	I0416 00:30:55.776127       1 apiapproval_controller.go:198] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I0416 00:30:55.776140       1 customresource_discovery_controller.go:325] Shutting down DiscoveryController
	I0416 00:30:55.776155       1 nonstructuralschema_controller.go:204] Shutting down NonStructuralSchemaConditionController
	I0416 00:30:55.776164       1 naming_controller.go:302] Shutting down NamingConditionController
	I0416 00:30:55.776183       1 storage_flowcontrol.go:187] APF bootstrap ensurer is exiting
	I0416 00:30:55.776621       1 dynamic_serving_content.go:146] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0416 00:30:55.776718       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0416 00:30:55.776901       1 controller.go:159] Shutting down quota evaluator
	I0416 00:30:55.776993       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0416 00:30:55.777062       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0416 00:30:55.777099       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0416 00:30:55.777163       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0416 00:30:55.777199       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0416 00:30:55.777240       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0416 00:30:55.777654       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0416 00:30:55.777866       1 controller.go:178] quota evaluator worker shutdown
	I0416 00:30:55.777903       1 controller.go:178] quota evaluator worker shutdown
	I0416 00:30:55.777929       1 controller.go:178] quota evaluator worker shutdown
	I0416 00:30:55.777953       1 controller.go:178] quota evaluator worker shutdown
	I0416 00:30:55.779314       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0416 00:30:55.777003       1 controller.go:178] quota evaluator worker shutdown
	
	
	==> kube-apiserver [f865fc38c7e05fae5207cd27288c53bd9b1f203382b0f691dc6e05c6b7b3ab17] <==
	I0416 00:31:12.073111       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0416 00:31:12.073459       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0416 00:31:12.073494       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0416 00:31:12.164318       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0416 00:31:12.171000       1 shared_informer.go:318] Caches are synced for configmaps
	I0416 00:31:12.174218       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0416 00:31:12.178435       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0416 00:31:12.181128       1 aggregator.go:165] initial CRD sync complete...
	I0416 00:31:12.181184       1 autoregister_controller.go:141] Starting autoregister controller
	I0416 00:31:12.181208       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0416 00:31:12.181231       1 cache.go:39] Caches are synced for autoregister controller
	I0416 00:31:12.192989       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0416 00:31:12.193036       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0416 00:31:12.193134       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0416 00:31:12.211046       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0416 00:31:12.221329       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0416 00:31:13.067751       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0416 00:31:13.310201       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.140]
	I0416 00:31:13.311564       1 controller.go:624] quota admission added evaluator for: endpoints
	I0416 00:31:13.317507       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0416 00:31:14.158702       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0416 00:31:14.314513       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0416 00:31:14.338828       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0416 00:31:14.416096       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0416 00:31:14.423614       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [0eb348f36dc509e79515a270e877a7510e598d118a29cb47ef41895788b779c8] <==
	I0416 00:31:49.923016       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="27.222666ms"
	I0416 00:31:49.930070       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="6.664981ms"
	I0416 00:31:49.949185       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="18.999415ms"
	I0416 00:31:49.949528       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="99.308µs"
	I0416 00:31:54.278287       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-414194-m02\" does not exist"
	I0416 00:31:54.279323       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-ms6xm" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-7fdf7869d9-ms6xm"
	I0416 00:31:54.295694       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-414194-m02" podCIDRs=["10.244.1.0/24"]
	I0416 00:31:56.168701       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="58.838µs"
	I0416 00:31:56.210837       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="60.138µs"
	I0416 00:31:56.223004       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="65.371µs"
	I0416 00:31:56.243854       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="88.547µs"
	I0416 00:31:56.252887       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="1.00231ms"
	I0416 00:31:56.265536       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="69.746µs"
	I0416 00:31:56.266511       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-ms6xm" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-7fdf7869d9-ms6xm"
	I0416 00:32:03.371184       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-414194-m02"
	I0416 00:32:03.394042       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="45.608µs"
	I0416 00:32:03.410891       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="95.575µs"
	I0416 00:32:05.412104       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-b9fgh" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-7fdf7869d9-b9fgh"
	I0416 00:32:06.753655       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="5.700336ms"
	I0416 00:32:06.754035       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="49.964µs"
	I0416 00:32:21.616561       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-414194-m02"
	I0416 00:32:22.629653       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-414194-m03\" does not exist"
	I0416 00:32:22.631012       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-414194-m02"
	I0416 00:32:22.651258       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-414194-m03" podCIDRs=["10.244.2.0/24"]
	I0416 00:32:31.002639       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-414194-m02"
	
	
	==> kube-controller-manager [5f7dc8a7b1688773e7beca7f7620ad25f1d7b0d0535e159594000283c4a92837] <==
	I0416 00:30:55.421473       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-414194"
	I0416 00:30:55.421609       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-414194-m03"
	I0416 00:30:55.421710       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-414194-m02"
	I0416 00:30:55.421834       1 node_lifecycle_controller.go:1068] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0416 00:30:55.440282       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="37.650585ms"
	I0416 00:30:55.440404       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="42.226µs"
	I0416 00:30:55.559198       1 shared_informer.go:318] Caches are synced for resource quota
	I0416 00:30:55.564402       1 shared_informer.go:318] Caches are synced for resource quota
	W0416 00:30:55.809068       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PriorityLevelConfiguration: Get "https://192.168.39.140:8443/apis/flowcontrol.apiserver.k8s.io/v1/prioritylevelconfigurations?limit=500&resourceVersion=0": dial tcp 192.168.39.140:8443: connect: connection refused
	E0416 00:30:55.809192       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PriorityLevelConfiguration: failed to list *v1.PriorityLevelConfiguration: Get "https://192.168.39.140:8443/apis/flowcontrol.apiserver.k8s.io/v1/prioritylevelconfigurations?limit=500&resourceVersion=0": dial tcp 192.168.39.140:8443: connect: connection refused
	W0416 00:30:55.859448       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ClusterRoleBinding: Get "https://192.168.39.140:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?limit=500&resourceVersion=0": dial tcp 192.168.39.140:8443: connect: connection refused
	E0416 00:30:55.859593       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ClusterRoleBinding: failed to list *v1.ClusterRoleBinding: Get "https://192.168.39.140:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?limit=500&resourceVersion=0": dial tcp 192.168.39.140:8443: connect: connection refused
	W0416 00:30:56.761159       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ClusterRoleBinding: Get "https://192.168.39.140:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?limit=500&resourceVersion=0": dial tcp 192.168.39.140:8443: connect: connection refused
	E0416 00:30:56.761198       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ClusterRoleBinding: failed to list *v1.ClusterRoleBinding: Get "https://192.168.39.140:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?limit=500&resourceVersion=0": dial tcp 192.168.39.140:8443: connect: connection refused
	W0416 00:30:57.143917       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PriorityLevelConfiguration: Get "https://192.168.39.140:8443/apis/flowcontrol.apiserver.k8s.io/v1/prioritylevelconfigurations?limit=500&resourceVersion=0": dial tcp 192.168.39.140:8443: connect: connection refused
	E0416 00:30:57.143970       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PriorityLevelConfiguration: failed to list *v1.PriorityLevelConfiguration: Get "https://192.168.39.140:8443/apis/flowcontrol.apiserver.k8s.io/v1/prioritylevelconfigurations?limit=500&resourceVersion=0": dial tcp 192.168.39.140:8443: connect: connection refused
	W0416 00:30:58.622528       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ClusterRoleBinding: Get "https://192.168.39.140:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?limit=500&resourceVersion=0": dial tcp 192.168.39.140:8443: connect: connection refused
	E0416 00:30:58.622720       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ClusterRoleBinding: failed to list *v1.ClusterRoleBinding: Get "https://192.168.39.140:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?limit=500&resourceVersion=0": dial tcp 192.168.39.140:8443: connect: connection refused
	W0416 00:30:59.418769       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PriorityLevelConfiguration: Get "https://192.168.39.140:8443/apis/flowcontrol.apiserver.k8s.io/v1/prioritylevelconfigurations?limit=500&resourceVersion=0": dial tcp 192.168.39.140:8443: connect: connection refused
	E0416 00:30:59.418872       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PriorityLevelConfiguration: failed to list *v1.PriorityLevelConfiguration: Get "https://192.168.39.140:8443/apis/flowcontrol.apiserver.k8s.io/v1/prioritylevelconfigurations?limit=500&resourceVersion=0": dial tcp 192.168.39.140:8443: connect: connection refused
	W0416 00:31:04.054022       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ClusterRoleBinding: Get "https://192.168.39.140:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?limit=500&resourceVersion=0": dial tcp 192.168.39.140:8443: connect: connection refused
	E0416 00:31:04.054078       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ClusterRoleBinding: failed to list *v1.ClusterRoleBinding: Get "https://192.168.39.140:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?limit=500&resourceVersion=0": dial tcp 192.168.39.140:8443: connect: connection refused
	E0416 00:31:05.527022       1 controller_utils.go:203] unable to taint [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoExecute,TimeAdded:2024-04-16 00:31:05.526432719 +0000 UTC m=+26.144827065,}] unresponsive Node "multinode-414194-m02": Get "https://192.168.39.140:8443/api/v1/nodes/multinode-414194-m02?resourceVersion=0": dial tcp 192.168.39.140:8443: connect: connection refused
	W0416 00:31:05.662066       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PriorityLevelConfiguration: Get "https://192.168.39.140:8443/apis/flowcontrol.apiserver.k8s.io/v1/prioritylevelconfigurations?limit=500&resourceVersion=0": dial tcp 192.168.39.140:8443: connect: connection refused
	E0416 00:31:05.662176       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PriorityLevelConfiguration: failed to list *v1.PriorityLevelConfiguration: Get "https://192.168.39.140:8443/apis/flowcontrol.apiserver.k8s.io/v1/prioritylevelconfigurations?limit=500&resourceVersion=0": dial tcp 192.168.39.140:8443: connect: connection refused
	
	
	==> kube-proxy [1199477a5e1bde603f9196d87e5b3e814fd696a077091ca14528388a47c54a86] <==
	I0416 00:30:41.006709       1 server_others.go:72] "Using iptables proxy"
	I0416 00:30:43.258460       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.140"]
	I0416 00:30:43.333263       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0416 00:30:43.333334       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0416 00:30:43.333364       1 server_others.go:168] "Using iptables Proxier"
	I0416 00:30:43.336334       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0416 00:30:43.336549       1 server.go:865] "Version info" version="v1.29.3"
	I0416 00:30:43.336724       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 00:30:43.338148       1 config.go:188] "Starting service config controller"
	I0416 00:30:43.338215       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0416 00:30:43.338258       1 config.go:97] "Starting endpoint slice config controller"
	I0416 00:30:43.338275       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0416 00:30:43.339692       1 config.go:315] "Starting node config controller"
	I0416 00:30:43.340366       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0416 00:30:43.439370       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0416 00:30:43.439481       1 shared_informer.go:318] Caches are synced for service config
	I0416 00:30:43.441864       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [c0ff4e26cfc61b8d5b0260e9eb83770cbc91236f8244f5ae0b56390d752d1241] <==
	I0416 00:31:03.057232       1 server_others.go:72] "Using iptables proxy"
	E0416 00:31:03.059289       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/multinode-414194\": dial tcp 192.168.39.140:8443: connect: connection refused"
	E0416 00:31:04.238139       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/multinode-414194\": dial tcp 192.168.39.140:8443: connect: connection refused"
	E0416 00:31:06.479622       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/multinode-414194\": dial tcp 192.168.39.140:8443: connect: connection refused"
	I0416 00:31:12.244637       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.140"]
	I0416 00:31:12.349463       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0416 00:31:12.349707       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0416 00:31:12.349997       1 server_others.go:168] "Using iptables Proxier"
	I0416 00:31:12.360111       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0416 00:31:12.360405       1 server.go:865] "Version info" version="v1.29.3"
	I0416 00:31:12.360441       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 00:31:12.364880       1 config.go:188] "Starting service config controller"
	I0416 00:31:12.365000       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0416 00:31:12.365137       1 config.go:97] "Starting endpoint slice config controller"
	I0416 00:31:12.365236       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0416 00:31:12.367648       1 config.go:315] "Starting node config controller"
	I0416 00:31:12.367678       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0416 00:31:12.466304       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0416 00:31:12.466425       1 shared_informer.go:318] Caches are synced for service config
	I0416 00:31:12.468559       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [707d02997c5056b67d64184224cc9001aa6230ed5898761df3a5558f6da469b1] <==
	W0416 00:30:43.203329       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0416 00:30:43.203360       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0416 00:30:43.203414       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0416 00:30:43.203422       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0416 00:30:43.203465       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0416 00:30:43.203495       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0416 00:30:43.203546       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0416 00:30:43.203576       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0416 00:30:43.203639       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0416 00:30:43.203667       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0416 00:30:43.203706       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0416 00:30:43.203736       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0416 00:30:43.204558       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0416 00:30:43.204658       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0416 00:30:43.204695       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E0416 00:30:43.204706       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W0416 00:30:43.205588       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	E0416 00:30:43.205692       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	W0416 00:30:43.210642       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E0416 00:30:43.210771       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	I0416 00:30:44.181482       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0416 00:31:05.837534       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0416 00:31:05.837660       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0416 00:31:05.837887       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0416 00:31:05.838020       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c8c4663335b279485a3ebb281532e482a104e080a9284d76a648c00269233cbb] <==
	I0416 00:31:09.970353       1 serving.go:380] Generated self-signed cert in-memory
	W0416 00:31:12.112481       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0416 00:31:12.112535       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0416 00:31:12.112553       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0416 00:31:12.112560       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0416 00:31:12.235548       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.3"
	I0416 00:31:12.235635       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 00:31:12.269130       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0416 00:31:12.269242       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0416 00:31:12.287420       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0416 00:31:12.288903       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0416 00:31:12.390319       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 16 00:31:12 multinode-414194 kubelet[4192]: I0416 00:31:12.224029    4192 topology_manager.go:215] "Topology Admit Handler" podUID="00b5fef9-7a2b-4e54-bda6-b721112d5496" podNamespace="default" podName="busybox-7fdf7869d9-sgkx5"
	Apr 16 00:31:12 multinode-414194 kubelet[4192]: I0416 00:31:12.306670    4192 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Apr 16 00:31:12 multinode-414194 kubelet[4192]: I0416 00:31:12.330556    4192 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c7877a83-35ca-4241-a233-283ab4a3e4ae-lib-modules\") pod \"kube-proxy-pkn5q\" (UID: \"c7877a83-35ca-4241-a233-283ab4a3e4ae\") " pod="kube-system/kube-proxy-pkn5q"
	Apr 16 00:31:12 multinode-414194 kubelet[4192]: I0416 00:31:12.331146    4192 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bcdbff0d-43d4-45f6-81e8-cbe13209d1a6-xtables-lock\") pod \"kindnet-pd9pv\" (UID: \"bcdbff0d-43d4-45f6-81e8-cbe13209d1a6\") " pod="kube-system/kindnet-pd9pv"
	Apr 16 00:31:12 multinode-414194 kubelet[4192]: I0416 00:31:12.331363    4192 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bcdbff0d-43d4-45f6-81e8-cbe13209d1a6-lib-modules\") pod \"kindnet-pd9pv\" (UID: \"bcdbff0d-43d4-45f6-81e8-cbe13209d1a6\") " pod="kube-system/kindnet-pd9pv"
	Apr 16 00:31:12 multinode-414194 kubelet[4192]: I0416 00:31:12.331498    4192 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/85c46886-a03a-43cd-a9bd-3ce8ea51f3ed-tmp\") pod \"storage-provisioner\" (UID: \"85c46886-a03a-43cd-a9bd-3ce8ea51f3ed\") " pod="kube-system/storage-provisioner"
	Apr 16 00:31:12 multinode-414194 kubelet[4192]: I0416 00:31:12.331745    4192 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c7877a83-35ca-4241-a233-283ab4a3e4ae-xtables-lock\") pod \"kube-proxy-pkn5q\" (UID: \"c7877a83-35ca-4241-a233-283ab4a3e4ae\") " pod="kube-system/kube-proxy-pkn5q"
	Apr 16 00:31:12 multinode-414194 kubelet[4192]: I0416 00:31:12.331927    4192 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/bcdbff0d-43d4-45f6-81e8-cbe13209d1a6-cni-cfg\") pod \"kindnet-pd9pv\" (UID: \"bcdbff0d-43d4-45f6-81e8-cbe13209d1a6\") " pod="kube-system/kindnet-pd9pv"
	Apr 16 00:31:12 multinode-414194 kubelet[4192]: I0416 00:31:12.339962    4192 kubelet_node_status.go:112] "Node was previously registered" node="multinode-414194"
	Apr 16 00:31:12 multinode-414194 kubelet[4192]: I0416 00:31:12.340241    4192 kubelet_node_status.go:76] "Successfully registered node" node="multinode-414194"
	Apr 16 00:31:12 multinode-414194 kubelet[4192]: I0416 00:31:12.345570    4192 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 16 00:31:12 multinode-414194 kubelet[4192]: I0416 00:31:12.347330    4192 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 16 00:31:12 multinode-414194 kubelet[4192]: I0416 00:31:12.525687    4192 scope.go:117] "RemoveContainer" containerID="cb28a7e68259cdb451f72bd51f7d647ab66e3b85ce31278e929148b4158bd23e"
	Apr 16 00:31:12 multinode-414194 kubelet[4192]: I0416 00:31:12.528337    4192 scope.go:117] "RemoveContainer" containerID="8e5681569ff0fbb7477ce22d73f2e2d7dd508535650d6e8e73008d8fbe334e0f"
	Apr 16 00:31:12 multinode-414194 kubelet[4192]: I0416 00:31:12.538506    4192 scope.go:117] "RemoveContainer" containerID="0f17e92a363098fdeb203b5b88690b26d20ee0163b8c3b5931987e38331d6042"
	Apr 16 00:31:17 multinode-414194 kubelet[4192]: I0416 00:31:17.144392    4192 scope.go:117] "RemoveContainer" containerID="8e5681569ff0fbb7477ce22d73f2e2d7dd508535650d6e8e73008d8fbe334e0f"
	Apr 16 00:31:17 multinode-414194 kubelet[4192]: I0416 00:31:17.144703    4192 scope.go:117] "RemoveContainer" containerID="d245241ef5b83c185c6d37a3b77ec510f66d9cbfc8b3ee28ab93535d56219874"
	Apr 16 00:31:17 multinode-414194 kubelet[4192]: E0416 00:31:17.144973    4192 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(85c46886-a03a-43cd-a9bd-3ce8ea51f3ed)\"" pod="kube-system/storage-provisioner" podUID="85c46886-a03a-43cd-a9bd-3ce8ea51f3ed"
	Apr 16 00:31:31 multinode-414194 kubelet[4192]: I0416 00:31:31.249461    4192 scope.go:117] "RemoveContainer" containerID="d245241ef5b83c185c6d37a3b77ec510f66d9cbfc8b3ee28ab93535d56219874"
	Apr 16 00:32:08 multinode-414194 kubelet[4192]: E0416 00:32:08.277837    4192 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 00:32:08 multinode-414194 kubelet[4192]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 00:32:08 multinode-414194 kubelet[4192]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 00:32:08 multinode-414194 kubelet[4192]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 00:32:08 multinode-414194 kubelet[4192]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 00:32:08 multinode-414194 kubelet[4192]: E0416 00:32:08.374155    4192 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod00b5fef9-7a2b-4e54-bda6-b721112d5496/crio-9384fbc0adcf7e21b22fd218db6be959af37dca2db42c105f595c0147322d1d3: Error finding container 9384fbc0adcf7e21b22fd218db6be959af37dca2db42c105f595c0147322d1d3: Status 404 returned error can't find the container with id 9384fbc0adcf7e21b22fd218db6be959af37dca2db42c105f595c0147322d1d3
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0416 00:32:33.676694   45197 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18647-7542/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
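
A note on the "bufio.Scanner: token too long" error in the stderr above: Go's bufio.Scanner caps a single token at 64 KiB by default, so one overly long line in lastStart.txt is enough to abort the read. The sketch below is illustrative only (it is not minikube's logs code); it shows how Scanner.Buffer raises that cap, and the file path used here is a placeholder.

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// Placeholder path for illustration, not the real minikube log location.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// The default maximum token size is 64 KiB (bufio.MaxScanTokenSize).
		// Buffer() lets a single line grow up to 1 MiB here, avoiding
		// "bufio.Scanner: token too long" on very long log lines.
		sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)

		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
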
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-414194 -n multinode-414194
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-414194 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (333.84s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-414194 stop
E0416 00:33:58.680273   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/functional-596616/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-414194 stop: exit status 82 (2m0.476314052s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-414194-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-414194 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-414194 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-414194 status: exit status 3 (18.890245917s)

                                                
                                                
-- stdout --
	multinode-414194
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-414194-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0416 00:34:57.497508   45888 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.81:22: connect: no route to host
	E0416 00:34:57.497542   45888 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.81:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-414194 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-414194 -n multinode-414194
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-414194 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-414194 logs -n 25: (1.567851519s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	| ssh     | multinode-414194 ssh -n                                                                 | multinode-414194 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:26 UTC | 16 Apr 24 00:26 UTC |
	|         | multinode-414194-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-414194 cp multinode-414194-m02:/home/docker/cp-test.txt                       | multinode-414194 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:26 UTC | 16 Apr 24 00:26 UTC |
	|         | multinode-414194:/home/docker/cp-test_multinode-414194-m02_multinode-414194.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-414194 ssh -n                                                                 | multinode-414194 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:26 UTC | 16 Apr 24 00:26 UTC |
	|         | multinode-414194-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-414194 ssh -n multinode-414194 sudo cat                                       | multinode-414194 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:26 UTC | 16 Apr 24 00:26 UTC |
	|         | /home/docker/cp-test_multinode-414194-m02_multinode-414194.txt                          |                  |         |                |                     |                     |
	| cp      | multinode-414194 cp multinode-414194-m02:/home/docker/cp-test.txt                       | multinode-414194 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:26 UTC | 16 Apr 24 00:26 UTC |
	|         | multinode-414194-m03:/home/docker/cp-test_multinode-414194-m02_multinode-414194-m03.txt |                  |         |                |                     |                     |
	| ssh     | multinode-414194 ssh -n                                                                 | multinode-414194 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:26 UTC | 16 Apr 24 00:26 UTC |
	|         | multinode-414194-m02 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-414194 ssh -n multinode-414194-m03 sudo cat                                   | multinode-414194 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:26 UTC | 16 Apr 24 00:26 UTC |
	|         | /home/docker/cp-test_multinode-414194-m02_multinode-414194-m03.txt                      |                  |         |                |                     |                     |
	| cp      | multinode-414194 cp testdata/cp-test.txt                                                | multinode-414194 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:26 UTC | 16 Apr 24 00:26 UTC |
	|         | multinode-414194-m03:/home/docker/cp-test.txt                                           |                  |         |                |                     |                     |
	| ssh     | multinode-414194 ssh -n                                                                 | multinode-414194 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:26 UTC | 16 Apr 24 00:26 UTC |
	|         | multinode-414194-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-414194 cp multinode-414194-m03:/home/docker/cp-test.txt                       | multinode-414194 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:26 UTC | 16 Apr 24 00:26 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1982584427/001/cp-test_multinode-414194-m03.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-414194 ssh -n                                                                 | multinode-414194 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:26 UTC | 16 Apr 24 00:26 UTC |
	|         | multinode-414194-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| cp      | multinode-414194 cp multinode-414194-m03:/home/docker/cp-test.txt                       | multinode-414194 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:26 UTC | 16 Apr 24 00:26 UTC |
	|         | multinode-414194:/home/docker/cp-test_multinode-414194-m03_multinode-414194.txt         |                  |         |                |                     |                     |
	| ssh     | multinode-414194 ssh -n                                                                 | multinode-414194 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:26 UTC | 16 Apr 24 00:26 UTC |
	|         | multinode-414194-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-414194 ssh -n multinode-414194 sudo cat                                       | multinode-414194 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:26 UTC | 16 Apr 24 00:26 UTC |
	|         | /home/docker/cp-test_multinode-414194-m03_multinode-414194.txt                          |                  |         |                |                     |                     |
	| cp      | multinode-414194 cp multinode-414194-m03:/home/docker/cp-test.txt                       | multinode-414194 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:26 UTC | 16 Apr 24 00:26 UTC |
	|         | multinode-414194-m02:/home/docker/cp-test_multinode-414194-m03_multinode-414194-m02.txt |                  |         |                |                     |                     |
	| ssh     | multinode-414194 ssh -n                                                                 | multinode-414194 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:26 UTC | 16 Apr 24 00:26 UTC |
	|         | multinode-414194-m03 sudo cat                                                           |                  |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |                |                     |                     |
	| ssh     | multinode-414194 ssh -n multinode-414194-m02 sudo cat                                   | multinode-414194 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:26 UTC | 16 Apr 24 00:26 UTC |
	|         | /home/docker/cp-test_multinode-414194-m03_multinode-414194-m02.txt                      |                  |         |                |                     |                     |
	| node    | multinode-414194 node stop m03                                                          | multinode-414194 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:26 UTC | 16 Apr 24 00:26 UTC |
	| node    | multinode-414194 node start                                                             | multinode-414194 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:26 UTC | 16 Apr 24 00:27 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |                |                     |                     |
	| node    | list -p multinode-414194                                                                | multinode-414194 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:27 UTC |                     |
	| stop    | -p multinode-414194                                                                     | multinode-414194 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:27 UTC |                     |
	| start   | -p multinode-414194                                                                     | multinode-414194 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:29 UTC | 16 Apr 24 00:32 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |                |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |                |                     |                     |
	| node    | list -p multinode-414194                                                                | multinode-414194 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:32 UTC |                     |
	| node    | multinode-414194 node delete                                                            | multinode-414194 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:32 UTC | 16 Apr 24 00:32 UTC |
	|         | m03                                                                                     |                  |         |                |                     |                     |
	| stop    | multinode-414194 stop                                                                   | multinode-414194 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:32 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 00:29:04
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 00:29:04.611777   44065 out.go:291] Setting OutFile to fd 1 ...
	I0416 00:29:04.612028   44065 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:29:04.612037   44065 out.go:304] Setting ErrFile to fd 2...
	I0416 00:29:04.612040   44065 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:29:04.612193   44065 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-7542/.minikube/bin
	I0416 00:29:04.612691   44065 out.go:298] Setting JSON to false
	I0416 00:29:04.613591   44065 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4289,"bootTime":1713223056,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0416 00:29:04.613651   44065 start.go:139] virtualization: kvm guest
	I0416 00:29:04.615948   44065 out.go:177] * [multinode-414194] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0416 00:29:04.617130   44065 notify.go:220] Checking for updates...
	I0416 00:29:04.617143   44065 out.go:177]   - MINIKUBE_LOCATION=18647
	I0416 00:29:04.618373   44065 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 00:29:04.619680   44065 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0416 00:29:04.620944   44065 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18647-7542/.minikube
	I0416 00:29:04.622283   44065 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0416 00:29:04.623752   44065 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 00:29:04.625626   44065 config.go:182] Loaded profile config "multinode-414194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 00:29:04.625785   44065 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 00:29:04.626414   44065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:29:04.626468   44065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:29:04.641083   44065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34345
	I0416 00:29:04.641599   44065 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:29:04.642240   44065 main.go:141] libmachine: Using API Version  1
	I0416 00:29:04.642269   44065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:29:04.642573   44065 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:29:04.642739   44065 main.go:141] libmachine: (multinode-414194) Calling .DriverName
	I0416 00:29:04.678275   44065 out.go:177] * Using the kvm2 driver based on existing profile
	I0416 00:29:04.679632   44065 start.go:297] selected driver: kvm2
	I0416 00:29:04.679642   44065 start.go:901] validating driver "kvm2" against &{Name:multinode-414194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-414194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.140 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.81 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.64 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 00:29:04.679788   44065 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 00:29:04.680081   44065 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 00:29:04.680145   44065 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18647-7542/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0416 00:29:04.694722   44065 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0416 00:29:04.695357   44065 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 00:29:04.695441   44065 cni.go:84] Creating CNI manager for ""
	I0416 00:29:04.695453   44065 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0416 00:29:04.695503   44065 start.go:340] cluster config:
	{Name:multinode-414194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-414194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.140 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.81 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.64 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 00:29:04.695621   44065 iso.go:125] acquiring lock: {Name:mk848ef90fbc2a1876645fc8fc16af382c3bcaa9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 00:29:04.697334   44065 out.go:177] * Starting "multinode-414194" primary control-plane node in "multinode-414194" cluster
	I0416 00:29:04.698500   44065 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 00:29:04.698532   44065 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0416 00:29:04.698553   44065 cache.go:56] Caching tarball of preloaded images
	I0416 00:29:04.698637   44065 preload.go:173] Found /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0416 00:29:04.698654   44065 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0416 00:29:04.698802   44065 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/multinode-414194/config.json ...
	I0416 00:29:04.699032   44065 start.go:360] acquireMachinesLock for multinode-414194: {Name:mk92bff49461487f8cebf2747ccf61ccb9c772a2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 00:29:04.699093   44065 start.go:364] duration metric: took 41.155µs to acquireMachinesLock for "multinode-414194"
	I0416 00:29:04.699113   44065 start.go:96] Skipping create...Using existing machine configuration
	I0416 00:29:04.699128   44065 fix.go:54] fixHost starting: 
	I0416 00:29:04.699522   44065 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:29:04.699568   44065 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:29:04.713406   44065 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44193
	I0416 00:29:04.713897   44065 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:29:04.714349   44065 main.go:141] libmachine: Using API Version  1
	I0416 00:29:04.714373   44065 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:29:04.714715   44065 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:29:04.714903   44065 main.go:141] libmachine: (multinode-414194) Calling .DriverName
	I0416 00:29:04.715086   44065 main.go:141] libmachine: (multinode-414194) Calling .GetState
	I0416 00:29:04.716669   44065 fix.go:112] recreateIfNeeded on multinode-414194: state=Running err=<nil>
	W0416 00:29:04.716683   44065 fix.go:138] unexpected machine state, will restart: <nil>
	I0416 00:29:04.718656   44065 out.go:177] * Updating the running kvm2 "multinode-414194" VM ...
	I0416 00:29:04.720105   44065 machine.go:94] provisionDockerMachine start ...
	I0416 00:29:04.720128   44065 main.go:141] libmachine: (multinode-414194) Calling .DriverName
	I0416 00:29:04.720333   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHHostname
	I0416 00:29:04.722818   44065 main.go:141] libmachine: (multinode-414194) DBG | domain multinode-414194 has defined MAC address 52:54:00:13:26:d7 in network mk-multinode-414194
	I0416 00:29:04.723230   44065 main.go:141] libmachine: (multinode-414194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:26:d7", ip: ""} in network mk-multinode-414194: {Iface:virbr1 ExpiryTime:2024-04-16 01:23:38 +0000 UTC Type:0 Mac:52:54:00:13:26:d7 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:multinode-414194 Clientid:01:52:54:00:13:26:d7}
	I0416 00:29:04.723258   44065 main.go:141] libmachine: (multinode-414194) DBG | domain multinode-414194 has defined IP address 192.168.39.140 and MAC address 52:54:00:13:26:d7 in network mk-multinode-414194
	I0416 00:29:04.723406   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHPort
	I0416 00:29:04.723585   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHKeyPath
	I0416 00:29:04.723724   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHKeyPath
	I0416 00:29:04.723840   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHUsername
	I0416 00:29:04.724008   44065 main.go:141] libmachine: Using SSH client type: native
	I0416 00:29:04.724187   44065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0416 00:29:04.724199   44065 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 00:29:04.842386   44065 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-414194
	
	I0416 00:29:04.842418   44065 main.go:141] libmachine: (multinode-414194) Calling .GetMachineName
	I0416 00:29:04.842684   44065 buildroot.go:166] provisioning hostname "multinode-414194"
	I0416 00:29:04.842706   44065 main.go:141] libmachine: (multinode-414194) Calling .GetMachineName
	I0416 00:29:04.842888   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHHostname
	I0416 00:29:04.845345   44065 main.go:141] libmachine: (multinode-414194) DBG | domain multinode-414194 has defined MAC address 52:54:00:13:26:d7 in network mk-multinode-414194
	I0416 00:29:04.845769   44065 main.go:141] libmachine: (multinode-414194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:26:d7", ip: ""} in network mk-multinode-414194: {Iface:virbr1 ExpiryTime:2024-04-16 01:23:38 +0000 UTC Type:0 Mac:52:54:00:13:26:d7 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:multinode-414194 Clientid:01:52:54:00:13:26:d7}
	I0416 00:29:04.845809   44065 main.go:141] libmachine: (multinode-414194) DBG | domain multinode-414194 has defined IP address 192.168.39.140 and MAC address 52:54:00:13:26:d7 in network mk-multinode-414194
	I0416 00:29:04.845931   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHPort
	I0416 00:29:04.846092   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHKeyPath
	I0416 00:29:04.846238   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHKeyPath
	I0416 00:29:04.846360   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHUsername
	I0416 00:29:04.846497   44065 main.go:141] libmachine: Using SSH client type: native
	I0416 00:29:04.846708   44065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0416 00:29:04.846731   44065 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-414194 && echo "multinode-414194" | sudo tee /etc/hostname
	I0416 00:29:04.979172   44065 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-414194
	
	I0416 00:29:04.979206   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHHostname
	I0416 00:29:04.982200   44065 main.go:141] libmachine: (multinode-414194) DBG | domain multinode-414194 has defined MAC address 52:54:00:13:26:d7 in network mk-multinode-414194
	I0416 00:29:04.982586   44065 main.go:141] libmachine: (multinode-414194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:26:d7", ip: ""} in network mk-multinode-414194: {Iface:virbr1 ExpiryTime:2024-04-16 01:23:38 +0000 UTC Type:0 Mac:52:54:00:13:26:d7 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:multinode-414194 Clientid:01:52:54:00:13:26:d7}
	I0416 00:29:04.982618   44065 main.go:141] libmachine: (multinode-414194) DBG | domain multinode-414194 has defined IP address 192.168.39.140 and MAC address 52:54:00:13:26:d7 in network mk-multinode-414194
	I0416 00:29:04.982847   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHPort
	I0416 00:29:04.983032   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHKeyPath
	I0416 00:29:04.983201   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHKeyPath
	I0416 00:29:04.983322   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHUsername
	I0416 00:29:04.983496   44065 main.go:141] libmachine: Using SSH client type: native
	I0416 00:29:04.983710   44065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0416 00:29:04.983733   44065 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-414194' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-414194/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-414194' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 00:29:05.102097   44065 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 00:29:05.102128   44065 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18647-7542/.minikube CaCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18647-7542/.minikube}
	I0416 00:29:05.102183   44065 buildroot.go:174] setting up certificates
	I0416 00:29:05.102196   44065 provision.go:84] configureAuth start
	I0416 00:29:05.102209   44065 main.go:141] libmachine: (multinode-414194) Calling .GetMachineName
	I0416 00:29:05.102498   44065 main.go:141] libmachine: (multinode-414194) Calling .GetIP
	I0416 00:29:05.105308   44065 main.go:141] libmachine: (multinode-414194) DBG | domain multinode-414194 has defined MAC address 52:54:00:13:26:d7 in network mk-multinode-414194
	I0416 00:29:05.105684   44065 main.go:141] libmachine: (multinode-414194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:26:d7", ip: ""} in network mk-multinode-414194: {Iface:virbr1 ExpiryTime:2024-04-16 01:23:38 +0000 UTC Type:0 Mac:52:54:00:13:26:d7 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:multinode-414194 Clientid:01:52:54:00:13:26:d7}
	I0416 00:29:05.105710   44065 main.go:141] libmachine: (multinode-414194) DBG | domain multinode-414194 has defined IP address 192.168.39.140 and MAC address 52:54:00:13:26:d7 in network mk-multinode-414194
	I0416 00:29:05.105854   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHHostname
	I0416 00:29:05.108148   44065 main.go:141] libmachine: (multinode-414194) DBG | domain multinode-414194 has defined MAC address 52:54:00:13:26:d7 in network mk-multinode-414194
	I0416 00:29:05.108548   44065 main.go:141] libmachine: (multinode-414194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:26:d7", ip: ""} in network mk-multinode-414194: {Iface:virbr1 ExpiryTime:2024-04-16 01:23:38 +0000 UTC Type:0 Mac:52:54:00:13:26:d7 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:multinode-414194 Clientid:01:52:54:00:13:26:d7}
	I0416 00:29:05.108578   44065 main.go:141] libmachine: (multinode-414194) DBG | domain multinode-414194 has defined IP address 192.168.39.140 and MAC address 52:54:00:13:26:d7 in network mk-multinode-414194
	I0416 00:29:05.108646   44065 provision.go:143] copyHostCerts
	I0416 00:29:05.108680   44065 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem
	I0416 00:29:05.108719   44065 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem, removing ...
	I0416 00:29:05.108739   44065 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem
	I0416 00:29:05.108833   44065 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem (1082 bytes)
	I0416 00:29:05.108931   44065 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem
	I0416 00:29:05.108950   44065 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem, removing ...
	I0416 00:29:05.108954   44065 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem
	I0416 00:29:05.108983   44065 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem (1123 bytes)
	I0416 00:29:05.109037   44065 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem
	I0416 00:29:05.109052   44065 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem, removing ...
	I0416 00:29:05.109059   44065 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem
	I0416 00:29:05.109079   44065 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem (1675 bytes)
	I0416 00:29:05.109134   44065 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem org=jenkins.multinode-414194 san=[127.0.0.1 192.168.39.140 localhost minikube multinode-414194]
	I0416 00:29:05.233267   44065 provision.go:177] copyRemoteCerts
	I0416 00:29:05.233325   44065 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 00:29:05.233359   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHHostname
	I0416 00:29:05.236053   44065 main.go:141] libmachine: (multinode-414194) DBG | domain multinode-414194 has defined MAC address 52:54:00:13:26:d7 in network mk-multinode-414194
	I0416 00:29:05.236396   44065 main.go:141] libmachine: (multinode-414194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:26:d7", ip: ""} in network mk-multinode-414194: {Iface:virbr1 ExpiryTime:2024-04-16 01:23:38 +0000 UTC Type:0 Mac:52:54:00:13:26:d7 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:multinode-414194 Clientid:01:52:54:00:13:26:d7}
	I0416 00:29:05.236422   44065 main.go:141] libmachine: (multinode-414194) DBG | domain multinode-414194 has defined IP address 192.168.39.140 and MAC address 52:54:00:13:26:d7 in network mk-multinode-414194
	I0416 00:29:05.236653   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHPort
	I0416 00:29:05.236821   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHKeyPath
	I0416 00:29:05.236979   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHUsername
	I0416 00:29:05.237128   44065 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/multinode-414194/id_rsa Username:docker}
	I0416 00:29:05.331778   44065 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0416 00:29:05.331838   44065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0416 00:29:05.359491   44065 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0416 00:29:05.359555   44065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0416 00:29:05.386268   44065 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0416 00:29:05.386335   44065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0416 00:29:05.413265   44065 provision.go:87] duration metric: took 311.057324ms to configureAuth
	I0416 00:29:05.413292   44065 buildroot.go:189] setting minikube options for container-runtime
	I0416 00:29:05.413504   44065 config.go:182] Loaded profile config "multinode-414194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 00:29:05.413578   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHHostname
	I0416 00:29:05.416287   44065 main.go:141] libmachine: (multinode-414194) DBG | domain multinode-414194 has defined MAC address 52:54:00:13:26:d7 in network mk-multinode-414194
	I0416 00:29:05.416649   44065 main.go:141] libmachine: (multinode-414194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:26:d7", ip: ""} in network mk-multinode-414194: {Iface:virbr1 ExpiryTime:2024-04-16 01:23:38 +0000 UTC Type:0 Mac:52:54:00:13:26:d7 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:multinode-414194 Clientid:01:52:54:00:13:26:d7}
	I0416 00:29:05.416679   44065 main.go:141] libmachine: (multinode-414194) DBG | domain multinode-414194 has defined IP address 192.168.39.140 and MAC address 52:54:00:13:26:d7 in network mk-multinode-414194
	I0416 00:29:05.416878   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHPort
	I0416 00:29:05.417070   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHKeyPath
	I0416 00:29:05.417267   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHKeyPath
	I0416 00:29:05.417472   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHUsername
	I0416 00:29:05.417730   44065 main.go:141] libmachine: Using SSH client type: native
	I0416 00:29:05.417901   44065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0416 00:29:05.417917   44065 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0416 00:30:36.238081   44065 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0416 00:30:36.238109   44065 machine.go:97] duration metric: took 1m31.517984271s to provisionDockerMachine
	I0416 00:30:36.238131   44065 start.go:293] postStartSetup for "multinode-414194" (driver="kvm2")
	I0416 00:30:36.238182   44065 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 00:30:36.238209   44065 main.go:141] libmachine: (multinode-414194) Calling .DriverName
	I0416 00:30:36.238555   44065 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 00:30:36.238585   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHHostname
	I0416 00:30:36.242029   44065 main.go:141] libmachine: (multinode-414194) DBG | domain multinode-414194 has defined MAC address 52:54:00:13:26:d7 in network mk-multinode-414194
	I0416 00:30:36.242631   44065 main.go:141] libmachine: (multinode-414194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:26:d7", ip: ""} in network mk-multinode-414194: {Iface:virbr1 ExpiryTime:2024-04-16 01:23:38 +0000 UTC Type:0 Mac:52:54:00:13:26:d7 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:multinode-414194 Clientid:01:52:54:00:13:26:d7}
	I0416 00:30:36.242656   44065 main.go:141] libmachine: (multinode-414194) DBG | domain multinode-414194 has defined IP address 192.168.39.140 and MAC address 52:54:00:13:26:d7 in network mk-multinode-414194
	I0416 00:30:36.242878   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHPort
	I0416 00:30:36.243042   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHKeyPath
	I0416 00:30:36.243238   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHUsername
	I0416 00:30:36.243372   44065 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/multinode-414194/id_rsa Username:docker}
	I0416 00:30:36.333914   44065 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 00:30:36.338417   44065 command_runner.go:130] > NAME=Buildroot
	I0416 00:30:36.338442   44065 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0416 00:30:36.338447   44065 command_runner.go:130] > ID=buildroot
	I0416 00:30:36.338455   44065 command_runner.go:130] > VERSION_ID=2023.02.9
	I0416 00:30:36.338462   44065 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0416 00:30:36.338566   44065 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 00:30:36.338589   44065 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/addons for local assets ...
	I0416 00:30:36.338654   44065 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/files for local assets ...
	I0416 00:30:36.338742   44065 filesync.go:149] local asset: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem -> 148972.pem in /etc/ssl/certs
	I0416 00:30:36.338752   44065 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem -> /etc/ssl/certs/148972.pem
	I0416 00:30:36.338875   44065 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 00:30:36.349499   44065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /etc/ssl/certs/148972.pem (1708 bytes)
	I0416 00:30:36.374108   44065 start.go:296] duration metric: took 135.964233ms for postStartSetup
	I0416 00:30:36.374149   44065 fix.go:56] duration metric: took 1m31.675027259s for fixHost
	I0416 00:30:36.374171   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHHostname
	I0416 00:30:36.376661   44065 main.go:141] libmachine: (multinode-414194) DBG | domain multinode-414194 has defined MAC address 52:54:00:13:26:d7 in network mk-multinode-414194
	I0416 00:30:36.377067   44065 main.go:141] libmachine: (multinode-414194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:26:d7", ip: ""} in network mk-multinode-414194: {Iface:virbr1 ExpiryTime:2024-04-16 01:23:38 +0000 UTC Type:0 Mac:52:54:00:13:26:d7 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:multinode-414194 Clientid:01:52:54:00:13:26:d7}
	I0416 00:30:36.377113   44065 main.go:141] libmachine: (multinode-414194) DBG | domain multinode-414194 has defined IP address 192.168.39.140 and MAC address 52:54:00:13:26:d7 in network mk-multinode-414194
	I0416 00:30:36.377231   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHPort
	I0416 00:30:36.377443   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHKeyPath
	I0416 00:30:36.377603   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHKeyPath
	I0416 00:30:36.377736   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHUsername
	I0416 00:30:36.377904   44065 main.go:141] libmachine: Using SSH client type: native
	I0416 00:30:36.378107   44065 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.140 22 <nil> <nil>}
	I0416 00:30:36.378123   44065 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 00:30:36.494421   44065 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713227436.475672000
	
	I0416 00:30:36.494443   44065 fix.go:216] guest clock: 1713227436.475672000
	I0416 00:30:36.494449   44065 fix.go:229] Guest: 2024-04-16 00:30:36.475672 +0000 UTC Remote: 2024-04-16 00:30:36.37415442 +0000 UTC m=+91.808381284 (delta=101.51758ms)
	I0416 00:30:36.494465   44065 fix.go:200] guest clock delta is within tolerance: 101.51758ms
	I0416 00:30:36.494470   44065 start.go:83] releasing machines lock for "multinode-414194", held for 1m31.795365085s
	I0416 00:30:36.494486   44065 main.go:141] libmachine: (multinode-414194) Calling .DriverName
	I0416 00:30:36.494732   44065 main.go:141] libmachine: (multinode-414194) Calling .GetIP
	I0416 00:30:36.497442   44065 main.go:141] libmachine: (multinode-414194) DBG | domain multinode-414194 has defined MAC address 52:54:00:13:26:d7 in network mk-multinode-414194
	I0416 00:30:36.497789   44065 main.go:141] libmachine: (multinode-414194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:26:d7", ip: ""} in network mk-multinode-414194: {Iface:virbr1 ExpiryTime:2024-04-16 01:23:38 +0000 UTC Type:0 Mac:52:54:00:13:26:d7 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:multinode-414194 Clientid:01:52:54:00:13:26:d7}
	I0416 00:30:36.497819   44065 main.go:141] libmachine: (multinode-414194) DBG | domain multinode-414194 has defined IP address 192.168.39.140 and MAC address 52:54:00:13:26:d7 in network mk-multinode-414194
	I0416 00:30:36.497932   44065 main.go:141] libmachine: (multinode-414194) Calling .DriverName
	I0416 00:30:36.498427   44065 main.go:141] libmachine: (multinode-414194) Calling .DriverName
	I0416 00:30:36.498569   44065 main.go:141] libmachine: (multinode-414194) Calling .DriverName
	I0416 00:30:36.498655   44065 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 00:30:36.498702   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHHostname
	I0416 00:30:36.498757   44065 ssh_runner.go:195] Run: cat /version.json
	I0416 00:30:36.498776   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHHostname
	I0416 00:30:36.501183   44065 main.go:141] libmachine: (multinode-414194) DBG | domain multinode-414194 has defined MAC address 52:54:00:13:26:d7 in network mk-multinode-414194
	I0416 00:30:36.501317   44065 main.go:141] libmachine: (multinode-414194) DBG | domain multinode-414194 has defined MAC address 52:54:00:13:26:d7 in network mk-multinode-414194
	I0416 00:30:36.501568   44065 main.go:141] libmachine: (multinode-414194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:26:d7", ip: ""} in network mk-multinode-414194: {Iface:virbr1 ExpiryTime:2024-04-16 01:23:38 +0000 UTC Type:0 Mac:52:54:00:13:26:d7 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:multinode-414194 Clientid:01:52:54:00:13:26:d7}
	I0416 00:30:36.501595   44065 main.go:141] libmachine: (multinode-414194) DBG | domain multinode-414194 has defined IP address 192.168.39.140 and MAC address 52:54:00:13:26:d7 in network mk-multinode-414194
	I0416 00:30:36.501690   44065 main.go:141] libmachine: (multinode-414194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:26:d7", ip: ""} in network mk-multinode-414194: {Iface:virbr1 ExpiryTime:2024-04-16 01:23:38 +0000 UTC Type:0 Mac:52:54:00:13:26:d7 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:multinode-414194 Clientid:01:52:54:00:13:26:d7}
	I0416 00:30:36.501698   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHPort
	I0416 00:30:36.501713   44065 main.go:141] libmachine: (multinode-414194) DBG | domain multinode-414194 has defined IP address 192.168.39.140 and MAC address 52:54:00:13:26:d7 in network mk-multinode-414194
	I0416 00:30:36.501891   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHKeyPath
	I0416 00:30:36.501910   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHPort
	I0416 00:30:36.502066   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHUsername
	I0416 00:30:36.502069   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHKeyPath
	I0416 00:30:36.502265   44065 main.go:141] libmachine: (multinode-414194) Calling .GetSSHUsername
	I0416 00:30:36.502261   44065 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/multinode-414194/id_rsa Username:docker}
	I0416 00:30:36.502401   44065 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/multinode-414194/id_rsa Username:docker}
	I0416 00:30:36.616889   44065 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0416 00:30:36.616966   44065 command_runner.go:130] > {"iso_version": "v1.33.0-1713175573-18634", "kicbase_version": "v0.0.43-1712854342-18621", "minikube_version": "v1.33.0-beta.0", "commit": "0ece0b4c602cbaab0821f0ba2d6ec4a07a392655"}
	I0416 00:30:36.617070   44065 ssh_runner.go:195] Run: systemctl --version
	I0416 00:30:36.623181   44065 command_runner.go:130] > systemd 252 (252)
	I0416 00:30:36.623209   44065 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0416 00:30:36.623479   44065 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0416 00:30:36.783597   44065 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0416 00:30:36.792651   44065 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0416 00:30:36.792741   44065 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 00:30:36.792799   44065 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 00:30:36.802740   44065 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0416 00:30:36.802761   44065 start.go:494] detecting cgroup driver to use...
	I0416 00:30:36.802830   44065 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 00:30:36.820329   44065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 00:30:36.835812   44065 docker.go:217] disabling cri-docker service (if available) ...
	I0416 00:30:36.835864   44065 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 00:30:36.851450   44065 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 00:30:36.866490   44065 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 00:30:37.009540   44065 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 00:30:37.153556   44065 docker.go:233] disabling docker service ...
	I0416 00:30:37.153614   44065 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 00:30:37.170696   44065 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 00:30:37.191023   44065 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 00:30:37.380285   44065 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 00:30:37.550184   44065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0416 00:30:37.566371   44065 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 00:30:37.586211   44065 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0416 00:30:37.586814   44065 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0416 00:30:37.586887   44065 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:30:37.597670   44065 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 00:30:37.597736   44065 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:30:37.608584   44065 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:30:37.619877   44065 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:30:37.631362   44065 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 00:30:37.643227   44065 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:30:37.655073   44065 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:30:37.667756   44065 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:30:37.679019   44065 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 00:30:37.688890   44065 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0416 00:30:37.688987   44065 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 00:30:37.698889   44065 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 00:30:37.834034   44065 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0416 00:30:38.110619   44065 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0416 00:30:38.110697   44065 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0416 00:30:38.115746   44065 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0416 00:30:38.115776   44065 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0416 00:30:38.115786   44065 command_runner.go:130] > Device: 0,22	Inode: 1384        Links: 1
	I0416 00:30:38.115795   44065 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0416 00:30:38.115807   44065 command_runner.go:130] > Access: 2024-04-16 00:30:37.959207173 +0000
	I0416 00:30:38.115817   44065 command_runner.go:130] > Modify: 2024-04-16 00:30:37.959207173 +0000
	I0416 00:30:38.115824   44065 command_runner.go:130] > Change: 2024-04-16 00:30:37.959207173 +0000
	I0416 00:30:38.115829   44065 command_runner.go:130] >  Birth: -
	I0416 00:30:38.115847   44065 start.go:562] Will wait 60s for crictl version
	I0416 00:30:38.115894   44065 ssh_runner.go:195] Run: which crictl
	I0416 00:30:38.119884   44065 command_runner.go:130] > /usr/bin/crictl
	I0416 00:30:38.119961   44065 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 00:30:38.165637   44065 command_runner.go:130] > Version:  0.1.0
	I0416 00:30:38.165659   44065 command_runner.go:130] > RuntimeName:  cri-o
	I0416 00:30:38.165727   44065 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0416 00:30:38.165766   44065 command_runner.go:130] > RuntimeApiVersion:  v1
	I0416 00:30:38.167033   44065 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0416 00:30:38.167114   44065 ssh_runner.go:195] Run: crio --version
	I0416 00:30:38.195845   44065 command_runner.go:130] > crio version 1.29.1
	I0416 00:30:38.195871   44065 command_runner.go:130] > Version:        1.29.1
	I0416 00:30:38.195881   44065 command_runner.go:130] > GitCommit:      unknown
	I0416 00:30:38.195887   44065 command_runner.go:130] > GitCommitDate:  unknown
	I0416 00:30:38.195891   44065 command_runner.go:130] > GitTreeState:   clean
	I0416 00:30:38.195897   44065 command_runner.go:130] > BuildDate:      2024-04-15T15:42:51Z
	I0416 00:30:38.195901   44065 command_runner.go:130] > GoVersion:      go1.21.6
	I0416 00:30:38.195905   44065 command_runner.go:130] > Compiler:       gc
	I0416 00:30:38.195909   44065 command_runner.go:130] > Platform:       linux/amd64
	I0416 00:30:38.195914   44065 command_runner.go:130] > Linkmode:       dynamic
	I0416 00:30:38.195921   44065 command_runner.go:130] > BuildTags:      
	I0416 00:30:38.195932   44065 command_runner.go:130] >   containers_image_ostree_stub
	I0416 00:30:38.195939   44065 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0416 00:30:38.195947   44065 command_runner.go:130] >   btrfs_noversion
	I0416 00:30:38.195954   44065 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0416 00:30:38.195967   44065 command_runner.go:130] >   libdm_no_deferred_remove
	I0416 00:30:38.195972   44065 command_runner.go:130] >   seccomp
	I0416 00:30:38.195977   44065 command_runner.go:130] > LDFlags:          unknown
	I0416 00:30:38.195982   44065 command_runner.go:130] > SeccompEnabled:   true
	I0416 00:30:38.195986   44065 command_runner.go:130] > AppArmorEnabled:  false
	I0416 00:30:38.196066   44065 ssh_runner.go:195] Run: crio --version
	I0416 00:30:38.227996   44065 command_runner.go:130] > crio version 1.29.1
	I0416 00:30:38.228018   44065 command_runner.go:130] > Version:        1.29.1
	I0416 00:30:38.228024   44065 command_runner.go:130] > GitCommit:      unknown
	I0416 00:30:38.228028   44065 command_runner.go:130] > GitCommitDate:  unknown
	I0416 00:30:38.228046   44065 command_runner.go:130] > GitTreeState:   clean
	I0416 00:30:38.228052   44065 command_runner.go:130] > BuildDate:      2024-04-15T15:42:51Z
	I0416 00:30:38.228057   44065 command_runner.go:130] > GoVersion:      go1.21.6
	I0416 00:30:38.228062   44065 command_runner.go:130] > Compiler:       gc
	I0416 00:30:38.228066   44065 command_runner.go:130] > Platform:       linux/amd64
	I0416 00:30:38.228071   44065 command_runner.go:130] > Linkmode:       dynamic
	I0416 00:30:38.228076   44065 command_runner.go:130] > BuildTags:      
	I0416 00:30:38.228081   44065 command_runner.go:130] >   containers_image_ostree_stub
	I0416 00:30:38.228085   44065 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0416 00:30:38.228092   44065 command_runner.go:130] >   btrfs_noversion
	I0416 00:30:38.228096   44065 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0416 00:30:38.228101   44065 command_runner.go:130] >   libdm_no_deferred_remove
	I0416 00:30:38.228104   44065 command_runner.go:130] >   seccomp
	I0416 00:30:38.228109   44065 command_runner.go:130] > LDFlags:          unknown
	I0416 00:30:38.228113   44065 command_runner.go:130] > SeccompEnabled:   true
	I0416 00:30:38.228118   44065 command_runner.go:130] > AppArmorEnabled:  false
	I0416 00:30:38.231733   44065 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0416 00:30:38.233308   44065 main.go:141] libmachine: (multinode-414194) Calling .GetIP
	I0416 00:30:38.235915   44065 main.go:141] libmachine: (multinode-414194) DBG | domain multinode-414194 has defined MAC address 52:54:00:13:26:d7 in network mk-multinode-414194
	I0416 00:30:38.236292   44065 main.go:141] libmachine: (multinode-414194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:26:d7", ip: ""} in network mk-multinode-414194: {Iface:virbr1 ExpiryTime:2024-04-16 01:23:38 +0000 UTC Type:0 Mac:52:54:00:13:26:d7 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:multinode-414194 Clientid:01:52:54:00:13:26:d7}
	I0416 00:30:38.236313   44065 main.go:141] libmachine: (multinode-414194) DBG | domain multinode-414194 has defined IP address 192.168.39.140 and MAC address 52:54:00:13:26:d7 in network mk-multinode-414194
	I0416 00:30:38.236534   44065 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0416 00:30:38.240626   44065 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0416 00:30:38.240783   44065 kubeadm.go:877] updating cluster {Name:multinode-414194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-414194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.140 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.81 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.64 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 00:30:38.240911   44065 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 00:30:38.240960   44065 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 00:30:38.284234   44065 command_runner.go:130] > {
	I0416 00:30:38.284255   44065 command_runner.go:130] >   "images": [
	I0416 00:30:38.284259   44065 command_runner.go:130] >     {
	I0416 00:30:38.284271   44065 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0416 00:30:38.284277   44065 command_runner.go:130] >       "repoTags": [
	I0416 00:30:38.284284   44065 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0416 00:30:38.284288   44065 command_runner.go:130] >       ],
	I0416 00:30:38.284292   44065 command_runner.go:130] >       "repoDigests": [
	I0416 00:30:38.284300   44065 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0416 00:30:38.284309   44065 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0416 00:30:38.284315   44065 command_runner.go:130] >       ],
	I0416 00:30:38.284323   44065 command_runner.go:130] >       "size": "65291810",
	I0416 00:30:38.284329   44065 command_runner.go:130] >       "uid": null,
	I0416 00:30:38.284335   44065 command_runner.go:130] >       "username": "",
	I0416 00:30:38.284343   44065 command_runner.go:130] >       "spec": null,
	I0416 00:30:38.284348   44065 command_runner.go:130] >       "pinned": false
	I0416 00:30:38.284352   44065 command_runner.go:130] >     },
	I0416 00:30:38.284355   44065 command_runner.go:130] >     {
	I0416 00:30:38.284361   44065 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0416 00:30:38.284365   44065 command_runner.go:130] >       "repoTags": [
	I0416 00:30:38.284374   44065 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0416 00:30:38.284384   44065 command_runner.go:130] >       ],
	I0416 00:30:38.284391   44065 command_runner.go:130] >       "repoDigests": [
	I0416 00:30:38.284402   44065 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0416 00:30:38.284415   44065 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0416 00:30:38.284421   44065 command_runner.go:130] >       ],
	I0416 00:30:38.284429   44065 command_runner.go:130] >       "size": "1363676",
	I0416 00:30:38.284435   44065 command_runner.go:130] >       "uid": null,
	I0416 00:30:38.284446   44065 command_runner.go:130] >       "username": "",
	I0416 00:30:38.284450   44065 command_runner.go:130] >       "spec": null,
	I0416 00:30:38.284454   44065 command_runner.go:130] >       "pinned": false
	I0416 00:30:38.284457   44065 command_runner.go:130] >     },
	I0416 00:30:38.284461   44065 command_runner.go:130] >     {
	I0416 00:30:38.284468   44065 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0416 00:30:38.284472   44065 command_runner.go:130] >       "repoTags": [
	I0416 00:30:38.284477   44065 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0416 00:30:38.284481   44065 command_runner.go:130] >       ],
	I0416 00:30:38.284488   44065 command_runner.go:130] >       "repoDigests": [
	I0416 00:30:38.284501   44065 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0416 00:30:38.284517   44065 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0416 00:30:38.284523   44065 command_runner.go:130] >       ],
	I0416 00:30:38.284533   44065 command_runner.go:130] >       "size": "31470524",
	I0416 00:30:38.284539   44065 command_runner.go:130] >       "uid": null,
	I0416 00:30:38.284546   44065 command_runner.go:130] >       "username": "",
	I0416 00:30:38.284554   44065 command_runner.go:130] >       "spec": null,
	I0416 00:30:38.284558   44065 command_runner.go:130] >       "pinned": false
	I0416 00:30:38.284564   44065 command_runner.go:130] >     },
	I0416 00:30:38.284567   44065 command_runner.go:130] >     {
	I0416 00:30:38.284575   44065 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0416 00:30:38.284585   44065 command_runner.go:130] >       "repoTags": [
	I0416 00:30:38.284597   44065 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0416 00:30:38.284606   44065 command_runner.go:130] >       ],
	I0416 00:30:38.284614   44065 command_runner.go:130] >       "repoDigests": [
	I0416 00:30:38.284628   44065 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0416 00:30:38.284649   44065 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0416 00:30:38.284655   44065 command_runner.go:130] >       ],
	I0416 00:30:38.284660   44065 command_runner.go:130] >       "size": "61245718",
	I0416 00:30:38.284676   44065 command_runner.go:130] >       "uid": null,
	I0416 00:30:38.284686   44065 command_runner.go:130] >       "username": "nonroot",
	I0416 00:30:38.284693   44065 command_runner.go:130] >       "spec": null,
	I0416 00:30:38.284703   44065 command_runner.go:130] >       "pinned": false
	I0416 00:30:38.284711   44065 command_runner.go:130] >     },
	I0416 00:30:38.284717   44065 command_runner.go:130] >     {
	I0416 00:30:38.284730   44065 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0416 00:30:38.284736   44065 command_runner.go:130] >       "repoTags": [
	I0416 00:30:38.284744   44065 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0416 00:30:38.284750   44065 command_runner.go:130] >       ],
	I0416 00:30:38.284760   44065 command_runner.go:130] >       "repoDigests": [
	I0416 00:30:38.284774   44065 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0416 00:30:38.284788   44065 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0416 00:30:38.284797   44065 command_runner.go:130] >       ],
	I0416 00:30:38.284804   44065 command_runner.go:130] >       "size": "150779692",
	I0416 00:30:38.284813   44065 command_runner.go:130] >       "uid": {
	I0416 00:30:38.284819   44065 command_runner.go:130] >         "value": "0"
	I0416 00:30:38.284826   44065 command_runner.go:130] >       },
	I0416 00:30:38.284830   44065 command_runner.go:130] >       "username": "",
	I0416 00:30:38.284836   44065 command_runner.go:130] >       "spec": null,
	I0416 00:30:38.284844   44065 command_runner.go:130] >       "pinned": false
	I0416 00:30:38.284851   44065 command_runner.go:130] >     },
	I0416 00:30:38.284859   44065 command_runner.go:130] >     {
	I0416 00:30:38.284869   44065 command_runner.go:130] >       "id": "39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533",
	I0416 00:30:38.284878   44065 command_runner.go:130] >       "repoTags": [
	I0416 00:30:38.284886   44065 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.29.3"
	I0416 00:30:38.284913   44065 command_runner.go:130] >       ],
	I0416 00:30:38.284925   44065 command_runner.go:130] >       "repoDigests": [
	I0416 00:30:38.284940   44065 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:21be2c03b528e582a63a41d8270f469ad1b24e2f6ba8238386768fc981ca1322",
	I0416 00:30:38.284955   44065 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c"
	I0416 00:30:38.284964   44065 command_runner.go:130] >       ],
	I0416 00:30:38.284971   44065 command_runner.go:130] >       "size": "128508878",
	I0416 00:30:38.284979   44065 command_runner.go:130] >       "uid": {
	I0416 00:30:38.284985   44065 command_runner.go:130] >         "value": "0"
	I0416 00:30:38.284999   44065 command_runner.go:130] >       },
	I0416 00:30:38.285003   44065 command_runner.go:130] >       "username": "",
	I0416 00:30:38.285023   44065 command_runner.go:130] >       "spec": null,
	I0416 00:30:38.285034   44065 command_runner.go:130] >       "pinned": false
	I0416 00:30:38.285040   44065 command_runner.go:130] >     },
	I0416 00:30:38.285048   44065 command_runner.go:130] >     {
	I0416 00:30:38.285058   44065 command_runner.go:130] >       "id": "6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3",
	I0416 00:30:38.285117   44065 command_runner.go:130] >       "repoTags": [
	I0416 00:30:38.285131   44065 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.29.3"
	I0416 00:30:38.285138   44065 command_runner.go:130] >       ],
	I0416 00:30:38.285144   44065 command_runner.go:130] >       "repoDigests": [
	I0416 00:30:38.285170   44065 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:495e03d609009733264502138f33ab4ebff55e4ccc34b51fce1dc48eba5aa606",
	I0416 00:30:38.285186   44065 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104"
	I0416 00:30:38.285194   44065 command_runner.go:130] >       ],
	I0416 00:30:38.285200   44065 command_runner.go:130] >       "size": "123142962",
	I0416 00:30:38.285208   44065 command_runner.go:130] >       "uid": {
	I0416 00:30:38.285214   44065 command_runner.go:130] >         "value": "0"
	I0416 00:30:38.285222   44065 command_runner.go:130] >       },
	I0416 00:30:38.285228   44065 command_runner.go:130] >       "username": "",
	I0416 00:30:38.285236   44065 command_runner.go:130] >       "spec": null,
	I0416 00:30:38.285242   44065 command_runner.go:130] >       "pinned": false
	I0416 00:30:38.285250   44065 command_runner.go:130] >     },
	I0416 00:30:38.285255   44065 command_runner.go:130] >     {
	I0416 00:30:38.285269   44065 command_runner.go:130] >       "id": "a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392",
	I0416 00:30:38.285278   44065 command_runner.go:130] >       "repoTags": [
	I0416 00:30:38.285286   44065 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.29.3"
	I0416 00:30:38.285294   44065 command_runner.go:130] >       ],
	I0416 00:30:38.285301   44065 command_runner.go:130] >       "repoDigests": [
	I0416 00:30:38.285338   44065 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:d137dd922e588abc7b0e2f20afd338065e9abccdecfe705abfb19f588fbac11d",
	I0416 00:30:38.285353   44065 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863"
	I0416 00:30:38.285361   44065 command_runner.go:130] >       ],
	I0416 00:30:38.285369   44065 command_runner.go:130] >       "size": "83634073",
	I0416 00:30:38.285377   44065 command_runner.go:130] >       "uid": null,
	I0416 00:30:38.285381   44065 command_runner.go:130] >       "username": "",
	I0416 00:30:38.285385   44065 command_runner.go:130] >       "spec": null,
	I0416 00:30:38.285389   44065 command_runner.go:130] >       "pinned": false
	I0416 00:30:38.285393   44065 command_runner.go:130] >     },
	I0416 00:30:38.285396   44065 command_runner.go:130] >     {
	I0416 00:30:38.285408   44065 command_runner.go:130] >       "id": "8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b",
	I0416 00:30:38.285414   44065 command_runner.go:130] >       "repoTags": [
	I0416 00:30:38.285421   44065 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.29.3"
	I0416 00:30:38.285426   44065 command_runner.go:130] >       ],
	I0416 00:30:38.285432   44065 command_runner.go:130] >       "repoDigests": [
	I0416 00:30:38.285444   44065 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a",
	I0416 00:30:38.285456   44065 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:c6dae5df00e42512d2baa3e1e74efbf08bddd595e930123f6021f715198b8e88"
	I0416 00:30:38.285461   44065 command_runner.go:130] >       ],
	I0416 00:30:38.285468   44065 command_runner.go:130] >       "size": "60724018",
	I0416 00:30:38.285474   44065 command_runner.go:130] >       "uid": {
	I0416 00:30:38.285479   44065 command_runner.go:130] >         "value": "0"
	I0416 00:30:38.285489   44065 command_runner.go:130] >       },
	I0416 00:30:38.285494   44065 command_runner.go:130] >       "username": "",
	I0416 00:30:38.285499   44065 command_runner.go:130] >       "spec": null,
	I0416 00:30:38.285510   44065 command_runner.go:130] >       "pinned": false
	I0416 00:30:38.285517   44065 command_runner.go:130] >     },
	I0416 00:30:38.285525   44065 command_runner.go:130] >     {
	I0416 00:30:38.285536   44065 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0416 00:30:38.285545   44065 command_runner.go:130] >       "repoTags": [
	I0416 00:30:38.285552   44065 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0416 00:30:38.285561   44065 command_runner.go:130] >       ],
	I0416 00:30:38.285570   44065 command_runner.go:130] >       "repoDigests": [
	I0416 00:30:38.285577   44065 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0416 00:30:38.285592   44065 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0416 00:30:38.285598   44065 command_runner.go:130] >       ],
	I0416 00:30:38.285606   44065 command_runner.go:130] >       "size": "750414",
	I0416 00:30:38.285615   44065 command_runner.go:130] >       "uid": {
	I0416 00:30:38.285622   44065 command_runner.go:130] >         "value": "65535"
	I0416 00:30:38.285631   44065 command_runner.go:130] >       },
	I0416 00:30:38.285638   44065 command_runner.go:130] >       "username": "",
	I0416 00:30:38.285647   44065 command_runner.go:130] >       "spec": null,
	I0416 00:30:38.285653   44065 command_runner.go:130] >       "pinned": true
	I0416 00:30:38.285660   44065 command_runner.go:130] >     }
	I0416 00:30:38.285663   44065 command_runner.go:130] >   ]
	I0416 00:30:38.285669   44065 command_runner.go:130] > }
	I0416 00:30:38.285940   44065 crio.go:514] all images are preloaded for cri-o runtime.
	I0416 00:30:38.285957   44065 crio.go:433] Images already preloaded, skipping extraction
	I0416 00:30:38.286010   44065 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 00:30:38.321415   44065 command_runner.go:130] > {
	I0416 00:30:38.321434   44065 command_runner.go:130] >   "images": [
	I0416 00:30:38.321438   44065 command_runner.go:130] >     {
	I0416 00:30:38.321456   44065 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0416 00:30:38.321463   44065 command_runner.go:130] >       "repoTags": [
	I0416 00:30:38.321476   44065 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0416 00:30:38.321482   44065 command_runner.go:130] >       ],
	I0416 00:30:38.321488   44065 command_runner.go:130] >       "repoDigests": [
	I0416 00:30:38.321499   44065 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0416 00:30:38.321513   44065 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0416 00:30:38.321522   44065 command_runner.go:130] >       ],
	I0416 00:30:38.321528   44065 command_runner.go:130] >       "size": "65291810",
	I0416 00:30:38.321534   44065 command_runner.go:130] >       "uid": null,
	I0416 00:30:38.321540   44065 command_runner.go:130] >       "username": "",
	I0416 00:30:38.321554   44065 command_runner.go:130] >       "spec": null,
	I0416 00:30:38.321564   44065 command_runner.go:130] >       "pinned": false
	I0416 00:30:38.321569   44065 command_runner.go:130] >     },
	I0416 00:30:38.321577   44065 command_runner.go:130] >     {
	I0416 00:30:38.321586   44065 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0416 00:30:38.321596   44065 command_runner.go:130] >       "repoTags": [
	I0416 00:30:38.321602   44065 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0416 00:30:38.321609   44065 command_runner.go:130] >       ],
	I0416 00:30:38.321613   44065 command_runner.go:130] >       "repoDigests": [
	I0416 00:30:38.321623   44065 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0416 00:30:38.321630   44065 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0416 00:30:38.321636   44065 command_runner.go:130] >       ],
	I0416 00:30:38.321640   44065 command_runner.go:130] >       "size": "1363676",
	I0416 00:30:38.321644   44065 command_runner.go:130] >       "uid": null,
	I0416 00:30:38.321652   44065 command_runner.go:130] >       "username": "",
	I0416 00:30:38.321658   44065 command_runner.go:130] >       "spec": null,
	I0416 00:30:38.321666   44065 command_runner.go:130] >       "pinned": false
	I0416 00:30:38.321672   44065 command_runner.go:130] >     },
	I0416 00:30:38.321678   44065 command_runner.go:130] >     {
	I0416 00:30:38.321696   44065 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0416 00:30:38.321704   44065 command_runner.go:130] >       "repoTags": [
	I0416 00:30:38.321712   44065 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0416 00:30:38.321717   44065 command_runner.go:130] >       ],
	I0416 00:30:38.321721   44065 command_runner.go:130] >       "repoDigests": [
	I0416 00:30:38.321731   44065 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0416 00:30:38.321740   44065 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0416 00:30:38.321745   44065 command_runner.go:130] >       ],
	I0416 00:30:38.321750   44065 command_runner.go:130] >       "size": "31470524",
	I0416 00:30:38.321756   44065 command_runner.go:130] >       "uid": null,
	I0416 00:30:38.321760   44065 command_runner.go:130] >       "username": "",
	I0416 00:30:38.321766   44065 command_runner.go:130] >       "spec": null,
	I0416 00:30:38.321770   44065 command_runner.go:130] >       "pinned": false
	I0416 00:30:38.321775   44065 command_runner.go:130] >     },
	I0416 00:30:38.321779   44065 command_runner.go:130] >     {
	I0416 00:30:38.321787   44065 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0416 00:30:38.321793   44065 command_runner.go:130] >       "repoTags": [
	I0416 00:30:38.321798   44065 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0416 00:30:38.321803   44065 command_runner.go:130] >       ],
	I0416 00:30:38.321807   44065 command_runner.go:130] >       "repoDigests": [
	I0416 00:30:38.321816   44065 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0416 00:30:38.321828   44065 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0416 00:30:38.321834   44065 command_runner.go:130] >       ],
	I0416 00:30:38.321838   44065 command_runner.go:130] >       "size": "61245718",
	I0416 00:30:38.321842   44065 command_runner.go:130] >       "uid": null,
	I0416 00:30:38.321845   44065 command_runner.go:130] >       "username": "nonroot",
	I0416 00:30:38.321849   44065 command_runner.go:130] >       "spec": null,
	I0416 00:30:38.321853   44065 command_runner.go:130] >       "pinned": false
	I0416 00:30:38.321856   44065 command_runner.go:130] >     },
	I0416 00:30:38.321860   44065 command_runner.go:130] >     {
	I0416 00:30:38.321866   44065 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0416 00:30:38.321873   44065 command_runner.go:130] >       "repoTags": [
	I0416 00:30:38.321878   44065 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0416 00:30:38.321883   44065 command_runner.go:130] >       ],
	I0416 00:30:38.321887   44065 command_runner.go:130] >       "repoDigests": [
	I0416 00:30:38.321896   44065 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0416 00:30:38.321909   44065 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0416 00:30:38.321916   44065 command_runner.go:130] >       ],
	I0416 00:30:38.321920   44065 command_runner.go:130] >       "size": "150779692",
	I0416 00:30:38.321926   44065 command_runner.go:130] >       "uid": {
	I0416 00:30:38.321930   44065 command_runner.go:130] >         "value": "0"
	I0416 00:30:38.321935   44065 command_runner.go:130] >       },
	I0416 00:30:38.321939   44065 command_runner.go:130] >       "username": "",
	I0416 00:30:38.321945   44065 command_runner.go:130] >       "spec": null,
	I0416 00:30:38.321949   44065 command_runner.go:130] >       "pinned": false
	I0416 00:30:38.321955   44065 command_runner.go:130] >     },
	I0416 00:30:38.321958   44065 command_runner.go:130] >     {
	I0416 00:30:38.321966   44065 command_runner.go:130] >       "id": "39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533",
	I0416 00:30:38.321973   44065 command_runner.go:130] >       "repoTags": [
	I0416 00:30:38.321978   44065 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.29.3"
	I0416 00:30:38.321983   44065 command_runner.go:130] >       ],
	I0416 00:30:38.321991   44065 command_runner.go:130] >       "repoDigests": [
	I0416 00:30:38.322000   44065 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:21be2c03b528e582a63a41d8270f469ad1b24e2f6ba8238386768fc981ca1322",
	I0416 00:30:38.322008   44065 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c"
	I0416 00:30:38.322013   44065 command_runner.go:130] >       ],
	I0416 00:30:38.322018   44065 command_runner.go:130] >       "size": "128508878",
	I0416 00:30:38.322023   44065 command_runner.go:130] >       "uid": {
	I0416 00:30:38.322027   44065 command_runner.go:130] >         "value": "0"
	I0416 00:30:38.322033   44065 command_runner.go:130] >       },
	I0416 00:30:38.322037   44065 command_runner.go:130] >       "username": "",
	I0416 00:30:38.322043   44065 command_runner.go:130] >       "spec": null,
	I0416 00:30:38.322047   44065 command_runner.go:130] >       "pinned": false
	I0416 00:30:38.322052   44065 command_runner.go:130] >     },
	I0416 00:30:38.322056   44065 command_runner.go:130] >     {
	I0416 00:30:38.322064   44065 command_runner.go:130] >       "id": "6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3",
	I0416 00:30:38.322070   44065 command_runner.go:130] >       "repoTags": [
	I0416 00:30:38.322076   44065 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.29.3"
	I0416 00:30:38.322081   44065 command_runner.go:130] >       ],
	I0416 00:30:38.322085   44065 command_runner.go:130] >       "repoDigests": [
	I0416 00:30:38.322095   44065 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:495e03d609009733264502138f33ab4ebff55e4ccc34b51fce1dc48eba5aa606",
	I0416 00:30:38.322102   44065 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104"
	I0416 00:30:38.322108   44065 command_runner.go:130] >       ],
	I0416 00:30:38.322117   44065 command_runner.go:130] >       "size": "123142962",
	I0416 00:30:38.322123   44065 command_runner.go:130] >       "uid": {
	I0416 00:30:38.322127   44065 command_runner.go:130] >         "value": "0"
	I0416 00:30:38.322133   44065 command_runner.go:130] >       },
	I0416 00:30:38.322137   44065 command_runner.go:130] >       "username": "",
	I0416 00:30:38.322141   44065 command_runner.go:130] >       "spec": null,
	I0416 00:30:38.322145   44065 command_runner.go:130] >       "pinned": false
	I0416 00:30:38.322148   44065 command_runner.go:130] >     },
	I0416 00:30:38.322151   44065 command_runner.go:130] >     {
	I0416 00:30:38.322157   44065 command_runner.go:130] >       "id": "a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392",
	I0416 00:30:38.322163   44065 command_runner.go:130] >       "repoTags": [
	I0416 00:30:38.322168   44065 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.29.3"
	I0416 00:30:38.322173   44065 command_runner.go:130] >       ],
	I0416 00:30:38.322177   44065 command_runner.go:130] >       "repoDigests": [
	I0416 00:30:38.322200   44065 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:d137dd922e588abc7b0e2f20afd338065e9abccdecfe705abfb19f588fbac11d",
	I0416 00:30:38.322210   44065 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863"
	I0416 00:30:38.322213   44065 command_runner.go:130] >       ],
	I0416 00:30:38.322217   44065 command_runner.go:130] >       "size": "83634073",
	I0416 00:30:38.322223   44065 command_runner.go:130] >       "uid": null,
	I0416 00:30:38.322227   44065 command_runner.go:130] >       "username": "",
	I0416 00:30:38.322233   44065 command_runner.go:130] >       "spec": null,
	I0416 00:30:38.322236   44065 command_runner.go:130] >       "pinned": false
	I0416 00:30:38.322240   44065 command_runner.go:130] >     },
	I0416 00:30:38.322243   44065 command_runner.go:130] >     {
	I0416 00:30:38.322249   44065 command_runner.go:130] >       "id": "8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b",
	I0416 00:30:38.322255   44065 command_runner.go:130] >       "repoTags": [
	I0416 00:30:38.322259   44065 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.29.3"
	I0416 00:30:38.322265   44065 command_runner.go:130] >       ],
	I0416 00:30:38.322269   44065 command_runner.go:130] >       "repoDigests": [
	I0416 00:30:38.322278   44065 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a",
	I0416 00:30:38.322288   44065 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:c6dae5df00e42512d2baa3e1e74efbf08bddd595e930123f6021f715198b8e88"
	I0416 00:30:38.322293   44065 command_runner.go:130] >       ],
	I0416 00:30:38.322298   44065 command_runner.go:130] >       "size": "60724018",
	I0416 00:30:38.322304   44065 command_runner.go:130] >       "uid": {
	I0416 00:30:38.322308   44065 command_runner.go:130] >         "value": "0"
	I0416 00:30:38.322314   44065 command_runner.go:130] >       },
	I0416 00:30:38.322322   44065 command_runner.go:130] >       "username": "",
	I0416 00:30:38.322328   44065 command_runner.go:130] >       "spec": null,
	I0416 00:30:38.322332   44065 command_runner.go:130] >       "pinned": false
	I0416 00:30:38.322336   44065 command_runner.go:130] >     },
	I0416 00:30:38.322339   44065 command_runner.go:130] >     {
	I0416 00:30:38.322347   44065 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0416 00:30:38.322352   44065 command_runner.go:130] >       "repoTags": [
	I0416 00:30:38.322356   44065 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0416 00:30:38.322361   44065 command_runner.go:130] >       ],
	I0416 00:30:38.322365   44065 command_runner.go:130] >       "repoDigests": [
	I0416 00:30:38.322372   44065 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0416 00:30:38.322381   44065 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0416 00:30:38.322387   44065 command_runner.go:130] >       ],
	I0416 00:30:38.322391   44065 command_runner.go:130] >       "size": "750414",
	I0416 00:30:38.322397   44065 command_runner.go:130] >       "uid": {
	I0416 00:30:38.322401   44065 command_runner.go:130] >         "value": "65535"
	I0416 00:30:38.322407   44065 command_runner.go:130] >       },
	I0416 00:30:38.322417   44065 command_runner.go:130] >       "username": "",
	I0416 00:30:38.322423   44065 command_runner.go:130] >       "spec": null,
	I0416 00:30:38.322427   44065 command_runner.go:130] >       "pinned": true
	I0416 00:30:38.322432   44065 command_runner.go:130] >     }
	I0416 00:30:38.322436   44065 command_runner.go:130] >   ]
	I0416 00:30:38.322441   44065 command_runner.go:130] > }
	I0416 00:30:38.322537   44065 crio.go:514] all images are preloaded for cri-o runtime.
	I0416 00:30:38.322547   44065 cache_images.go:84] Images are preloaded, skipping loading
	I0416 00:30:38.322553   44065 kubeadm.go:928] updating node { 192.168.39.140 8443 v1.29.3 crio true true} ...
	I0416 00:30:38.322643   44065 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-414194 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.140
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:multinode-414194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 00:30:38.322709   44065 ssh_runner.go:195] Run: crio config
	I0416 00:30:38.357680   44065 command_runner.go:130] ! time="2024-04-16 00:30:38.339035704Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0416 00:30:38.363049   44065 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0416 00:30:38.370920   44065 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0416 00:30:38.370944   44065 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0416 00:30:38.370950   44065 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0416 00:30:38.370953   44065 command_runner.go:130] > #
	I0416 00:30:38.370960   44065 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0416 00:30:38.370965   44065 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0416 00:30:38.370971   44065 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0416 00:30:38.370985   44065 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0416 00:30:38.370994   44065 command_runner.go:130] > # reload'.
	I0416 00:30:38.371004   44065 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0416 00:30:38.371018   44065 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0416 00:30:38.371031   44065 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0416 00:30:38.371041   44065 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0416 00:30:38.371047   44065 command_runner.go:130] > [crio]
	I0416 00:30:38.371053   44065 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0416 00:30:38.371059   44065 command_runner.go:130] > # containers images, in this directory.
	I0416 00:30:38.371070   44065 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0416 00:30:38.371089   44065 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0416 00:30:38.371101   44065 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0416 00:30:38.371114   44065 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0416 00:30:38.371123   44065 command_runner.go:130] > # imagestore = ""
	I0416 00:30:38.371133   44065 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0416 00:30:38.371144   44065 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0416 00:30:38.371151   44065 command_runner.go:130] > storage_driver = "overlay"
	I0416 00:30:38.371157   44065 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0416 00:30:38.371171   44065 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0416 00:30:38.371182   44065 command_runner.go:130] > storage_option = [
	I0416 00:30:38.371192   44065 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0416 00:30:38.371200   44065 command_runner.go:130] > ]
	I0416 00:30:38.371211   44065 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0416 00:30:38.371223   44065 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0416 00:30:38.371233   44065 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0416 00:30:38.371244   44065 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0416 00:30:38.371251   44065 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0416 00:30:38.371261   44065 command_runner.go:130] > # always happen on a node reboot
	I0416 00:30:38.371272   44065 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0416 00:30:38.371291   44065 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0416 00:30:38.371302   44065 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0416 00:30:38.371310   44065 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0416 00:30:38.371321   44065 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0416 00:30:38.371329   44065 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0416 00:30:38.371341   44065 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0416 00:30:38.371350   44065 command_runner.go:130] > # internal_wipe = true
	I0416 00:30:38.371376   44065 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0416 00:30:38.371388   44065 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0416 00:30:38.371398   44065 command_runner.go:130] > # internal_repair = false
	I0416 00:30:38.371406   44065 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0416 00:30:38.371417   44065 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0416 00:30:38.371427   44065 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0416 00:30:38.371435   44065 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0416 00:30:38.371448   44065 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0416 00:30:38.371457   44065 command_runner.go:130] > [crio.api]
	I0416 00:30:38.371465   44065 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0416 00:30:38.371475   44065 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0416 00:30:38.371491   44065 command_runner.go:130] > # IP address on which the stream server will listen.
	I0416 00:30:38.371500   44065 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0416 00:30:38.371507   44065 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0416 00:30:38.371513   44065 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0416 00:30:38.371519   44065 command_runner.go:130] > # stream_port = "0"
	I0416 00:30:38.371531   44065 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0416 00:30:38.371541   44065 command_runner.go:130] > # stream_enable_tls = false
	I0416 00:30:38.371550   44065 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0416 00:30:38.371560   44065 command_runner.go:130] > # stream_idle_timeout = ""
	I0416 00:30:38.371569   44065 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0416 00:30:38.371578   44065 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0416 00:30:38.371583   44065 command_runner.go:130] > # minutes.
	I0416 00:30:38.371589   44065 command_runner.go:130] > # stream_tls_cert = ""
	I0416 00:30:38.371595   44065 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0416 00:30:38.371603   44065 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0416 00:30:38.371610   44065 command_runner.go:130] > # stream_tls_key = ""
	I0416 00:30:38.371619   44065 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0416 00:30:38.371633   44065 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0416 00:30:38.371657   44065 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0416 00:30:38.371667   44065 command_runner.go:130] > # stream_tls_ca = ""
	I0416 00:30:38.371676   44065 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0416 00:30:38.371683   44065 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0416 00:30:38.371693   44065 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0416 00:30:38.371704   44065 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0416 00:30:38.371718   44065 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0416 00:30:38.371736   44065 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0416 00:30:38.371745   44065 command_runner.go:130] > [crio.runtime]
	I0416 00:30:38.371755   44065 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0416 00:30:38.371764   44065 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0416 00:30:38.371768   44065 command_runner.go:130] > # "nofile=1024:2048"
	I0416 00:30:38.371780   44065 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0416 00:30:38.371787   44065 command_runner.go:130] > # default_ulimits = [
	I0416 00:30:38.371794   44065 command_runner.go:130] > # ]
	I0416 00:30:38.371803   44065 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0416 00:30:38.371812   44065 command_runner.go:130] > # no_pivot = false
	I0416 00:30:38.371821   44065 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0416 00:30:38.371833   44065 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0416 00:30:38.371843   44065 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0416 00:30:38.371851   44065 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0416 00:30:38.371859   44065 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0416 00:30:38.371870   44065 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0416 00:30:38.371880   44065 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0416 00:30:38.371888   44065 command_runner.go:130] > # Cgroup setting for conmon
	I0416 00:30:38.371901   44065 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0416 00:30:38.371910   44065 command_runner.go:130] > conmon_cgroup = "pod"
	I0416 00:30:38.371920   44065 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0416 00:30:38.371931   44065 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0416 00:30:38.371945   44065 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0416 00:30:38.371954   44065 command_runner.go:130] > conmon_env = [
	I0416 00:30:38.371964   44065 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0416 00:30:38.371972   44065 command_runner.go:130] > ]
	I0416 00:30:38.371980   44065 command_runner.go:130] > # Additional environment variables to set for all the
	I0416 00:30:38.371990   44065 command_runner.go:130] > # containers. These are overridden if set in the
	I0416 00:30:38.372002   44065 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0416 00:30:38.372012   44065 command_runner.go:130] > # default_env = [
	I0416 00:30:38.372017   44065 command_runner.go:130] > # ]
	I0416 00:30:38.372026   44065 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0416 00:30:38.372035   44065 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0416 00:30:38.372041   44065 command_runner.go:130] > # selinux = false
	I0416 00:30:38.372051   44065 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0416 00:30:38.372065   44065 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0416 00:30:38.372083   44065 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0416 00:30:38.372093   44065 command_runner.go:130] > # seccomp_profile = ""
	I0416 00:30:38.372105   44065 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0416 00:30:38.372114   44065 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0416 00:30:38.372123   44065 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0416 00:30:38.372133   44065 command_runner.go:130] > # which might increase security.
	I0416 00:30:38.372140   44065 command_runner.go:130] > # This option is currently deprecated,
	I0416 00:30:38.372153   44065 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0416 00:30:38.372163   44065 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0416 00:30:38.372176   44065 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0416 00:30:38.372188   44065 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0416 00:30:38.372198   44065 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0416 00:30:38.372206   44065 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0416 00:30:38.372217   44065 command_runner.go:130] > # This option supports live configuration reload.
	I0416 00:30:38.372234   44065 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0416 00:30:38.372246   44065 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0416 00:30:38.372252   44065 command_runner.go:130] > # the cgroup blockio controller.
	I0416 00:30:38.372262   44065 command_runner.go:130] > # blockio_config_file = ""
	I0416 00:30:38.372272   44065 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0416 00:30:38.372280   44065 command_runner.go:130] > # blockio parameters.
	I0416 00:30:38.372284   44065 command_runner.go:130] > # blockio_reload = false
	I0416 00:30:38.372291   44065 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0416 00:30:38.372297   44065 command_runner.go:130] > # irqbalance daemon.
	I0416 00:30:38.372306   44065 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0416 00:30:38.372319   44065 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0416 00:30:38.372333   44065 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0416 00:30:38.372346   44065 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0416 00:30:38.372356   44065 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0416 00:30:38.372367   44065 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0416 00:30:38.372376   44065 command_runner.go:130] > # This option supports live configuration reload.
	I0416 00:30:38.372383   44065 command_runner.go:130] > # rdt_config_file = ""
	I0416 00:30:38.372395   44065 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0416 00:30:38.372405   44065 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0416 00:30:38.372448   44065 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0416 00:30:38.372455   44065 command_runner.go:130] > # separate_pull_cgroup = ""
	I0416 00:30:38.372462   44065 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0416 00:30:38.372479   44065 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0416 00:30:38.372495   44065 command_runner.go:130] > # will be added.
	I0416 00:30:38.372505   44065 command_runner.go:130] > # default_capabilities = [
	I0416 00:30:38.372511   44065 command_runner.go:130] > # 	"CHOWN",
	I0416 00:30:38.372520   44065 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0416 00:30:38.372527   44065 command_runner.go:130] > # 	"FSETID",
	I0416 00:30:38.372535   44065 command_runner.go:130] > # 	"FOWNER",
	I0416 00:30:38.372541   44065 command_runner.go:130] > # 	"SETGID",
	I0416 00:30:38.372545   44065 command_runner.go:130] > # 	"SETUID",
	I0416 00:30:38.372550   44065 command_runner.go:130] > # 	"SETPCAP",
	I0416 00:30:38.372557   44065 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0416 00:30:38.372566   44065 command_runner.go:130] > # 	"KILL",
	I0416 00:30:38.372571   44065 command_runner.go:130] > # ]
	I0416 00:30:38.372585   44065 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0416 00:30:38.372598   44065 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0416 00:30:38.372608   44065 command_runner.go:130] > # add_inheritable_capabilities = false
	I0416 00:30:38.372620   44065 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0416 00:30:38.372627   44065 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0416 00:30:38.372635   44065 command_runner.go:130] > default_sysctls = [
	I0416 00:30:38.372642   44065 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0416 00:30:38.372649   44065 command_runner.go:130] > ]
	I0416 00:30:38.372657   44065 command_runner.go:130] > # List of devices on the host that a
	I0416 00:30:38.372669   44065 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0416 00:30:38.372678   44065 command_runner.go:130] > # allowed_devices = [
	I0416 00:30:38.372685   44065 command_runner.go:130] > # 	"/dev/fuse",
	I0416 00:30:38.372693   44065 command_runner.go:130] > # ]
	I0416 00:30:38.372701   44065 command_runner.go:130] > # List of additional devices, specified as
	I0416 00:30:38.372713   44065 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0416 00:30:38.372720   44065 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0416 00:30:38.372729   44065 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0416 00:30:38.372739   44065 command_runner.go:130] > # additional_devices = [
	I0416 00:30:38.372744   44065 command_runner.go:130] > # ]
	I0416 00:30:38.372756   44065 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0416 00:30:38.372762   44065 command_runner.go:130] > # cdi_spec_dirs = [
	I0416 00:30:38.372771   44065 command_runner.go:130] > # 	"/etc/cdi",
	I0416 00:30:38.372776   44065 command_runner.go:130] > # 	"/var/run/cdi",
	I0416 00:30:38.372790   44065 command_runner.go:130] > # ]
	I0416 00:30:38.372802   44065 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0416 00:30:38.372812   44065 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0416 00:30:38.372820   44065 command_runner.go:130] > # Defaults to false.
	I0416 00:30:38.372832   44065 command_runner.go:130] > # device_ownership_from_security_context = false
	I0416 00:30:38.372845   44065 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0416 00:30:38.372857   44065 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0416 00:30:38.372863   44065 command_runner.go:130] > # hooks_dir = [
	I0416 00:30:38.372873   44065 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0416 00:30:38.372878   44065 command_runner.go:130] > # ]
	I0416 00:30:38.372886   44065 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0416 00:30:38.372898   44065 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0416 00:30:38.372910   44065 command_runner.go:130] > # its default mounts from the following two files:
	I0416 00:30:38.372918   44065 command_runner.go:130] > #
	I0416 00:30:38.372928   44065 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0416 00:30:38.372940   44065 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0416 00:30:38.372952   44065 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0416 00:30:38.372960   44065 command_runner.go:130] > #
	I0416 00:30:38.372968   44065 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0416 00:30:38.372977   44065 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0416 00:30:38.372987   44065 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0416 00:30:38.372999   44065 command_runner.go:130] > #      only add mounts it finds in this file.
	I0416 00:30:38.373003   44065 command_runner.go:130] > #
	I0416 00:30:38.373010   44065 command_runner.go:130] > # default_mounts_file = ""
	I0416 00:30:38.373018   44065 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0416 00:30:38.373028   44065 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0416 00:30:38.373034   44065 command_runner.go:130] > pids_limit = 1024
	I0416 00:30:38.373051   44065 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0416 00:30:38.373062   44065 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0416 00:30:38.373068   44065 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0416 00:30:38.373083   44065 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0416 00:30:38.373093   44065 command_runner.go:130] > # log_size_max = -1
	I0416 00:30:38.373104   44065 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0416 00:30:38.373114   44065 command_runner.go:130] > # log_to_journald = false
	I0416 00:30:38.373123   44065 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0416 00:30:38.373133   44065 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0416 00:30:38.373148   44065 command_runner.go:130] > # Path to directory for container attach sockets.
	I0416 00:30:38.373168   44065 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0416 00:30:38.373181   44065 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0416 00:30:38.373195   44065 command_runner.go:130] > # bind_mount_prefix = ""
	I0416 00:30:38.373210   44065 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0416 00:30:38.373220   44065 command_runner.go:130] > # read_only = false
	I0416 00:30:38.373232   44065 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0416 00:30:38.373242   44065 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0416 00:30:38.373250   44065 command_runner.go:130] > # live configuration reload.
	I0416 00:30:38.373255   44065 command_runner.go:130] > # log_level = "info"
	I0416 00:30:38.373265   44065 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0416 00:30:38.373277   44065 command_runner.go:130] > # This option supports live configuration reload.
	I0416 00:30:38.373286   44065 command_runner.go:130] > # log_filter = ""
	I0416 00:30:38.373296   44065 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0416 00:30:38.373310   44065 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0416 00:30:38.373319   44065 command_runner.go:130] > # separated by comma.
	I0416 00:30:38.373330   44065 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0416 00:30:38.373338   44065 command_runner.go:130] > # uid_mappings = ""
	I0416 00:30:38.373347   44065 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0416 00:30:38.373360   44065 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0416 00:30:38.373370   44065 command_runner.go:130] > # separated by comma.
	I0416 00:30:38.373382   44065 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0416 00:30:38.373394   44065 command_runner.go:130] > # gid_mappings = ""
	I0416 00:30:38.373404   44065 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0416 00:30:38.373412   44065 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0416 00:30:38.373418   44065 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0416 00:30:38.373428   44065 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0416 00:30:38.373439   44065 command_runner.go:130] > # minimum_mappable_uid = -1
	I0416 00:30:38.373448   44065 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0416 00:30:38.373461   44065 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0416 00:30:38.373471   44065 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0416 00:30:38.373486   44065 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0416 00:30:38.373498   44065 command_runner.go:130] > # minimum_mappable_gid = -1
	I0416 00:30:38.373504   44065 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0416 00:30:38.373516   44065 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0416 00:30:38.373528   44065 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0416 00:30:38.373541   44065 command_runner.go:130] > # ctr_stop_timeout = 30
	I0416 00:30:38.373554   44065 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0416 00:30:38.373564   44065 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0416 00:30:38.373572   44065 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0416 00:30:38.373582   44065 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0416 00:30:38.373586   44065 command_runner.go:130] > drop_infra_ctr = false
	I0416 00:30:38.373597   44065 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0416 00:30:38.373609   44065 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0416 00:30:38.373623   44065 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0416 00:30:38.373638   44065 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0416 00:30:38.373652   44065 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0416 00:30:38.373664   44065 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0416 00:30:38.373671   44065 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0416 00:30:38.373679   44065 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0416 00:30:38.373686   44065 command_runner.go:130] > # shared_cpuset = ""
	I0416 00:30:38.373699   44065 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0416 00:30:38.373710   44065 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0416 00:30:38.373721   44065 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0416 00:30:38.373732   44065 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0416 00:30:38.373742   44065 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0416 00:30:38.373753   44065 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0416 00:30:38.373761   44065 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0416 00:30:38.373768   44065 command_runner.go:130] > # enable_criu_support = false
	I0416 00:30:38.373775   44065 command_runner.go:130] > # Enable/disable the generation of the container,
	I0416 00:30:38.373788   44065 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0416 00:30:38.373798   44065 command_runner.go:130] > # enable_pod_events = false
	I0416 00:30:38.373811   44065 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0416 00:30:38.373834   44065 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0416 00:30:38.373843   44065 command_runner.go:130] > # default_runtime = "runc"
	I0416 00:30:38.373848   44065 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0416 00:30:38.373858   44065 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0416 00:30:38.373874   44065 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0416 00:30:38.373886   44065 command_runner.go:130] > # creation as a file is not desired either.
	I0416 00:30:38.373901   44065 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0416 00:30:38.373912   44065 command_runner.go:130] > # the hostname is being managed dynamically.
	I0416 00:30:38.373929   44065 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0416 00:30:38.373936   44065 command_runner.go:130] > # ]
	I0416 00:30:38.373944   44065 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0416 00:30:38.373959   44065 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0416 00:30:38.373971   44065 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0416 00:30:38.373983   44065 command_runner.go:130] > # Each entry in the table should follow the format:
	I0416 00:30:38.373990   44065 command_runner.go:130] > #
	I0416 00:30:38.373997   44065 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0416 00:30:38.374008   44065 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0416 00:30:38.374060   44065 command_runner.go:130] > # runtime_type = "oci"
	I0416 00:30:38.374072   44065 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0416 00:30:38.374080   44065 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0416 00:30:38.374087   44065 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0416 00:30:38.374095   44065 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0416 00:30:38.374101   44065 command_runner.go:130] > # monitor_env = []
	I0416 00:30:38.374109   44065 command_runner.go:130] > # privileged_without_host_devices = false
	I0416 00:30:38.374114   44065 command_runner.go:130] > # allowed_annotations = []
	I0416 00:30:38.374124   44065 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0416 00:30:38.374133   44065 command_runner.go:130] > # Where:
	I0416 00:30:38.374142   44065 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0416 00:30:38.374154   44065 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0416 00:30:38.374167   44065 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0416 00:30:38.374179   44065 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0416 00:30:38.374185   44065 command_runner.go:130] > #   in $PATH.
	I0416 00:30:38.374196   44065 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0416 00:30:38.374200   44065 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0416 00:30:38.374217   44065 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0416 00:30:38.374227   44065 command_runner.go:130] > #   state.
	I0416 00:30:38.374238   44065 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0416 00:30:38.374250   44065 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0416 00:30:38.374263   44065 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0416 00:30:38.374274   44065 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0416 00:30:38.374294   44065 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0416 00:30:38.374307   44065 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0416 00:30:38.374317   44065 command_runner.go:130] > #   The currently recognized values are:
	I0416 00:30:38.374329   44065 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0416 00:30:38.374351   44065 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0416 00:30:38.374363   44065 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0416 00:30:38.374373   44065 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0416 00:30:38.374382   44065 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0416 00:30:38.374396   44065 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0416 00:30:38.374409   44065 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0416 00:30:38.374422   44065 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0416 00:30:38.374434   44065 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0416 00:30:38.374447   44065 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0416 00:30:38.374455   44065 command_runner.go:130] > #   deprecated option "conmon".
	I0416 00:30:38.374462   44065 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0416 00:30:38.374472   44065 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0416 00:30:38.374483   44065 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0416 00:30:38.374497   44065 command_runner.go:130] > #   should be moved to the container's cgroup
	I0416 00:30:38.374510   44065 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0416 00:30:38.374521   44065 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0416 00:30:38.374531   44065 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0416 00:30:38.374541   44065 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0416 00:30:38.374544   44065 command_runner.go:130] > #
	I0416 00:30:38.374549   44065 command_runner.go:130] > # Using the seccomp notifier feature:
	I0416 00:30:38.374557   44065 command_runner.go:130] > #
	I0416 00:30:38.374567   44065 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0416 00:30:38.374580   44065 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0416 00:30:38.374585   44065 command_runner.go:130] > #
	I0416 00:30:38.374595   44065 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0416 00:30:38.374608   44065 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0416 00:30:38.374615   44065 command_runner.go:130] > #
	I0416 00:30:38.374625   44065 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0416 00:30:38.374632   44065 command_runner.go:130] > # feature.
	I0416 00:30:38.374635   44065 command_runner.go:130] > #
	I0416 00:30:38.374645   44065 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0416 00:30:38.374658   44065 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0416 00:30:38.374671   44065 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0416 00:30:38.374683   44065 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0416 00:30:38.374692   44065 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0416 00:30:38.374700   44065 command_runner.go:130] > #
	I0416 00:30:38.374714   44065 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0416 00:30:38.374724   44065 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0416 00:30:38.374731   44065 command_runner.go:130] > #
	I0416 00:30:38.374742   44065 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0416 00:30:38.374753   44065 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0416 00:30:38.374761   44065 command_runner.go:130] > #
	I0416 00:30:38.374771   44065 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0416 00:30:38.374784   44065 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0416 00:30:38.374790   44065 command_runner.go:130] > # limitation.
	I0416 00:30:38.374799   44065 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0416 00:30:38.374804   44065 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0416 00:30:38.374812   44065 command_runner.go:130] > runtime_type = "oci"
	I0416 00:30:38.374820   44065 command_runner.go:130] > runtime_root = "/run/runc"
	I0416 00:30:38.374831   44065 command_runner.go:130] > runtime_config_path = ""
	I0416 00:30:38.374839   44065 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0416 00:30:38.374849   44065 command_runner.go:130] > monitor_cgroup = "pod"
	I0416 00:30:38.374855   44065 command_runner.go:130] > monitor_exec_cgroup = ""
	I0416 00:30:38.374864   44065 command_runner.go:130] > monitor_env = [
	I0416 00:30:38.374873   44065 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0416 00:30:38.374880   44065 command_runner.go:130] > ]
	I0416 00:30:38.374886   44065 command_runner.go:130] > privileged_without_host_devices = false
	I0416 00:30:38.374893   44065 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0416 00:30:38.374904   44065 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0416 00:30:38.374918   44065 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0416 00:30:38.374933   44065 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0416 00:30:38.374947   44065 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0416 00:30:38.374958   44065 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0416 00:30:38.374972   44065 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0416 00:30:38.374985   44065 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0416 00:30:38.374994   44065 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0416 00:30:38.375009   44065 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0416 00:30:38.375018   44065 command_runner.go:130] > # Example:
	I0416 00:30:38.375025   44065 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0416 00:30:38.375036   44065 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0416 00:30:38.375044   44065 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0416 00:30:38.375055   44065 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0416 00:30:38.375067   44065 command_runner.go:130] > # cpuset = 0
	I0416 00:30:38.375076   44065 command_runner.go:130] > # cpushares = "0-1"
	I0416 00:30:38.375081   44065 command_runner.go:130] > # Where:
	I0416 00:30:38.375090   44065 command_runner.go:130] > # The workload name is workload-type.
	I0416 00:30:38.375101   44065 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0416 00:30:38.375113   44065 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0416 00:30:38.375123   44065 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0416 00:30:38.375137   44065 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0416 00:30:38.375148   44065 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0416 00:30:38.375157   44065 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0416 00:30:38.375167   44065 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0416 00:30:38.375178   44065 command_runner.go:130] > # Default value is set to true
	I0416 00:30:38.375186   44065 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0416 00:30:38.375197   44065 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0416 00:30:38.375207   44065 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0416 00:30:38.375217   44065 command_runner.go:130] > # Default value is set to 'false'
	I0416 00:30:38.375224   44065 command_runner.go:130] > # disable_hostport_mapping = false
	I0416 00:30:38.375235   44065 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0416 00:30:38.375240   44065 command_runner.go:130] > #
	I0416 00:30:38.375246   44065 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0416 00:30:38.375251   44065 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0416 00:30:38.375257   44065 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0416 00:30:38.375265   44065 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0416 00:30:38.375274   44065 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0416 00:30:38.375279   44065 command_runner.go:130] > [crio.image]
	I0416 00:30:38.375288   44065 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0416 00:30:38.375295   44065 command_runner.go:130] > # default_transport = "docker://"
	I0416 00:30:38.375305   44065 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0416 00:30:38.375314   44065 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0416 00:30:38.375320   44065 command_runner.go:130] > # global_auth_file = ""
	I0416 00:30:38.375328   44065 command_runner.go:130] > # The image used to instantiate infra containers.
	I0416 00:30:38.375335   44065 command_runner.go:130] > # This option supports live configuration reload.
	I0416 00:30:38.375340   44065 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0416 00:30:38.375346   44065 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0416 00:30:38.375351   44065 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0416 00:30:38.375356   44065 command_runner.go:130] > # This option supports live configuration reload.
	I0416 00:30:38.375367   44065 command_runner.go:130] > # pause_image_auth_file = ""
	I0416 00:30:38.375372   44065 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0416 00:30:38.375378   44065 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0416 00:30:38.375383   44065 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0416 00:30:38.375388   44065 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0416 00:30:38.375395   44065 command_runner.go:130] > # pause_command = "/pause"
	I0416 00:30:38.375400   44065 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0416 00:30:38.375405   44065 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0416 00:30:38.375412   44065 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0416 00:30:38.375425   44065 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0416 00:30:38.375434   44065 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0416 00:30:38.375444   44065 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0416 00:30:38.375451   44065 command_runner.go:130] > # pinned_images = [
	I0416 00:30:38.375460   44065 command_runner.go:130] > # ]
	I0416 00:30:38.375470   44065 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0416 00:30:38.375482   44065 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0416 00:30:38.375494   44065 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0416 00:30:38.375503   44065 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0416 00:30:38.375508   44065 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0416 00:30:38.375514   44065 command_runner.go:130] > # signature_policy = ""
	I0416 00:30:38.375520   44065 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0416 00:30:38.375526   44065 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0416 00:30:38.375532   44065 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0416 00:30:38.375538   44065 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0416 00:30:38.375547   44065 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0416 00:30:38.375555   44065 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0416 00:30:38.375560   44065 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0416 00:30:38.375573   44065 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0416 00:30:38.375579   44065 command_runner.go:130] > # changing them here.
	I0416 00:30:38.375583   44065 command_runner.go:130] > # insecure_registries = [
	I0416 00:30:38.375586   44065 command_runner.go:130] > # ]
	I0416 00:30:38.375592   44065 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0416 00:30:38.375599   44065 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0416 00:30:38.375603   44065 command_runner.go:130] > # image_volumes = "mkdir"
	I0416 00:30:38.375609   44065 command_runner.go:130] > # Temporary directory to use for storing big files
	I0416 00:30:38.375613   44065 command_runner.go:130] > # big_files_temporary_dir = ""
	I0416 00:30:38.375625   44065 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0416 00:30:38.375630   44065 command_runner.go:130] > # CNI plugins.
	I0416 00:30:38.375634   44065 command_runner.go:130] > [crio.network]
	I0416 00:30:38.375643   44065 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0416 00:30:38.375656   44065 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0416 00:30:38.375662   44065 command_runner.go:130] > # cni_default_network = ""
	I0416 00:30:38.375667   44065 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0416 00:30:38.375674   44065 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0416 00:30:38.375680   44065 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0416 00:30:38.375685   44065 command_runner.go:130] > # plugin_dirs = [
	I0416 00:30:38.375689   44065 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0416 00:30:38.375694   44065 command_runner.go:130] > # ]
	I0416 00:30:38.375700   44065 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0416 00:30:38.375705   44065 command_runner.go:130] > [crio.metrics]
	I0416 00:30:38.375710   44065 command_runner.go:130] > # Globally enable or disable metrics support.
	I0416 00:30:38.375716   44065 command_runner.go:130] > enable_metrics = true
	I0416 00:30:38.375721   44065 command_runner.go:130] > # Specify enabled metrics collectors.
	I0416 00:30:38.375727   44065 command_runner.go:130] > # Per default all metrics are enabled.
	I0416 00:30:38.375733   44065 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0416 00:30:38.375739   44065 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0416 00:30:38.375745   44065 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0416 00:30:38.375749   44065 command_runner.go:130] > # metrics_collectors = [
	I0416 00:30:38.375755   44065 command_runner.go:130] > # 	"operations",
	I0416 00:30:38.375759   44065 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0416 00:30:38.375763   44065 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0416 00:30:38.375769   44065 command_runner.go:130] > # 	"operations_errors",
	I0416 00:30:38.375773   44065 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0416 00:30:38.375778   44065 command_runner.go:130] > # 	"image_pulls_by_name",
	I0416 00:30:38.375782   44065 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0416 00:30:38.375788   44065 command_runner.go:130] > # 	"image_pulls_failures",
	I0416 00:30:38.375792   44065 command_runner.go:130] > # 	"image_pulls_successes",
	I0416 00:30:38.375797   44065 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0416 00:30:38.375802   44065 command_runner.go:130] > # 	"image_layer_reuse",
	I0416 00:30:38.375806   44065 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0416 00:30:38.375812   44065 command_runner.go:130] > # 	"containers_oom_total",
	I0416 00:30:38.375817   44065 command_runner.go:130] > # 	"containers_oom",
	I0416 00:30:38.375826   44065 command_runner.go:130] > # 	"processes_defunct",
	I0416 00:30:38.375832   44065 command_runner.go:130] > # 	"operations_total",
	I0416 00:30:38.375836   44065 command_runner.go:130] > # 	"operations_latency_seconds",
	I0416 00:30:38.375841   44065 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0416 00:30:38.375847   44065 command_runner.go:130] > # 	"operations_errors_total",
	I0416 00:30:38.375851   44065 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0416 00:30:38.375855   44065 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0416 00:30:38.375861   44065 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0416 00:30:38.375866   44065 command_runner.go:130] > # 	"image_pulls_success_total",
	I0416 00:30:38.375877   44065 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0416 00:30:38.375884   44065 command_runner.go:130] > # 	"containers_oom_count_total",
	I0416 00:30:38.375889   44065 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0416 00:30:38.375895   44065 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0416 00:30:38.375898   44065 command_runner.go:130] > # ]
	I0416 00:30:38.375903   44065 command_runner.go:130] > # The port on which the metrics server will listen.
	I0416 00:30:38.375909   44065 command_runner.go:130] > # metrics_port = 9090
	I0416 00:30:38.375914   44065 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0416 00:30:38.375920   44065 command_runner.go:130] > # metrics_socket = ""
	I0416 00:30:38.375925   44065 command_runner.go:130] > # The certificate for the secure metrics server.
	I0416 00:30:38.375932   44065 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0416 00:30:38.375938   44065 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0416 00:30:38.375945   44065 command_runner.go:130] > # certificate on any modification event.
	I0416 00:30:38.375948   44065 command_runner.go:130] > # metrics_cert = ""
	I0416 00:30:38.375956   44065 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0416 00:30:38.375960   44065 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0416 00:30:38.375966   44065 command_runner.go:130] > # metrics_key = ""
	I0416 00:30:38.375972   44065 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0416 00:30:38.375976   44065 command_runner.go:130] > [crio.tracing]
	I0416 00:30:38.375982   44065 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0416 00:30:38.375988   44065 command_runner.go:130] > # enable_tracing = false
	I0416 00:30:38.375993   44065 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0416 00:30:38.375998   44065 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0416 00:30:38.376005   44065 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0416 00:30:38.376012   44065 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0416 00:30:38.376015   44065 command_runner.go:130] > # CRI-O NRI configuration.
	I0416 00:30:38.376022   44065 command_runner.go:130] > [crio.nri]
	I0416 00:30:38.376034   44065 command_runner.go:130] > # Globally enable or disable NRI.
	I0416 00:30:38.376045   44065 command_runner.go:130] > # enable_nri = false
	I0416 00:30:38.376051   44065 command_runner.go:130] > # NRI socket to listen on.
	I0416 00:30:38.376060   44065 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0416 00:30:38.376064   44065 command_runner.go:130] > # NRI plugin directory to use.
	I0416 00:30:38.376071   44065 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0416 00:30:38.376076   44065 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0416 00:30:38.376083   44065 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0416 00:30:38.376088   44065 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0416 00:30:38.376095   44065 command_runner.go:130] > # nri_disable_connections = false
	I0416 00:30:38.376100   44065 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0416 00:30:38.376106   44065 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0416 00:30:38.376111   44065 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0416 00:30:38.376117   44065 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0416 00:30:38.376122   44065 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0416 00:30:38.376129   44065 command_runner.go:130] > [crio.stats]
	I0416 00:30:38.376134   44065 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0416 00:30:38.376142   44065 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0416 00:30:38.376146   44065 command_runner.go:130] > # stats_collection_period = 0
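For reference, the "crio.runtime.runtimes" table format documented in the dump above can be used to register additional handlers alongside runc. The entry below is only a hypothetical sketch of such a handler (the crun name, paths, and values are illustrative assumptions, not taken from this run):

	[crio.runtime.runtimes.crun]
	# Absolute path to the runtime executable on the host (assumed location).
	runtime_path = "/usr/bin/crun"
	# "oci" is the assumed default when runtime_type is omitted.
	runtime_type = "oci"
	# Per-runtime state directory (assumed).
	runtime_root = "/run/crun"
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_cgroup = "pod"
	monitor_env = [
		"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	]
	# Restrict which of the recognized experimental annotations this handler may process.
	allowed_annotations = [
		"io.kubernetes.cri-o.Devices",
	]

In Kubernetes, such a handler would typically be selected through a RuntimeClass whose handler field matches the table name.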
	I0416 00:30:38.376273   44065 cni.go:84] Creating CNI manager for ""
	I0416 00:30:38.376287   44065 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0416 00:30:38.376298   44065 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 00:30:38.376318   44065 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.140 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-414194 NodeName:multinode-414194 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.140"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.140 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 00:30:38.376440   44065 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.140
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-414194"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.140
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.140"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0416 00:30:38.376507   44065 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0416 00:30:38.386620   44065 command_runner.go:130] > kubeadm
	I0416 00:30:38.386635   44065 command_runner.go:130] > kubectl
	I0416 00:30:38.386639   44065 command_runner.go:130] > kubelet
	I0416 00:30:38.386767   44065 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 00:30:38.386835   44065 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0416 00:30:38.396229   44065 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0416 00:30:38.414464   44065 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 00:30:38.432702   44065 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0416 00:30:38.450278   44065 ssh_runner.go:195] Run: grep 192.168.39.140	control-plane.minikube.internal$ /etc/hosts
	I0416 00:30:38.454389   44065 command_runner.go:130] > 192.168.39.140	control-plane.minikube.internal
	I0416 00:30:38.454444   44065 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 00:30:38.620821   44065 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 00:30:38.674666   44065 certs.go:68] Setting up /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/multinode-414194 for IP: 192.168.39.140
	I0416 00:30:38.674692   44065 certs.go:194] generating shared ca certs ...
	I0416 00:30:38.674713   44065 certs.go:226] acquiring lock for ca certs: {Name:mkcfa1570e683d94647c63485e1bbb8cf0788316 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 00:30:38.674896   44065 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key
	I0416 00:30:38.674957   44065 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key
	I0416 00:30:38.674973   44065 certs.go:256] generating profile certs ...
	I0416 00:30:38.675084   44065 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/multinode-414194/client.key
	I0416 00:30:38.675158   44065 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/multinode-414194/apiserver.key.94aff35d
	I0416 00:30:38.675216   44065 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/multinode-414194/proxy-client.key
	I0416 00:30:38.675232   44065 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0416 00:30:38.675250   44065 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0416 00:30:38.675269   44065 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0416 00:30:38.675287   44065 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0416 00:30:38.675308   44065 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/multinode-414194/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0416 00:30:38.675328   44065 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/multinode-414194/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0416 00:30:38.675346   44065 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/multinode-414194/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0416 00:30:38.675366   44065 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/multinode-414194/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0416 00:30:38.675430   44065 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem (1338 bytes)
	W0416 00:30:38.675471   44065 certs.go:480] ignoring /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897_empty.pem, impossibly tiny 0 bytes
	I0416 00:30:38.675487   44065 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem (1679 bytes)
	I0416 00:30:38.675523   44065 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem (1082 bytes)
	I0416 00:30:38.675570   44065 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem (1123 bytes)
	I0416 00:30:38.675603   44065 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem (1675 bytes)
	I0416 00:30:38.675664   44065 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem (1708 bytes)
	I0416 00:30:38.675710   44065 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0416 00:30:38.675732   44065 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem -> /usr/share/ca-certificates/14897.pem
	I0416 00:30:38.675753   44065 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem -> /usr/share/ca-certificates/148972.pem
	I0416 00:30:38.676828   44065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 00:30:38.821905   44065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 00:30:38.937751   44065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 00:30:39.083031   44065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0416 00:30:39.301749   44065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/multinode-414194/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0416 00:30:39.341550   44065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/multinode-414194/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0416 00:30:39.556935   44065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/multinode-414194/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 00:30:39.697846   44065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/multinode-414194/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0416 00:30:39.808528   44065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 00:30:39.860192   44065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem --> /usr/share/ca-certificates/14897.pem (1338 bytes)
	I0416 00:30:39.897801   44065 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /usr/share/ca-certificates/148972.pem (1708 bytes)
	I0416 00:30:39.931139   44065 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 00:30:39.955270   44065 ssh_runner.go:195] Run: openssl version
	I0416 00:30:39.961275   44065 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0416 00:30:39.961640   44065 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14897.pem && ln -fs /usr/share/ca-certificates/14897.pem /etc/ssl/certs/14897.pem"
	I0416 00:30:39.979087   44065 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14897.pem
	I0416 00:30:39.984030   44065 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr 15 23:49 /usr/share/ca-certificates/14897.pem
	I0416 00:30:39.984408   44065 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 23:49 /usr/share/ca-certificates/14897.pem
	I0416 00:30:39.984481   44065 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14897.pem
	I0416 00:30:39.992982   44065 command_runner.go:130] > 51391683
	I0416 00:30:39.993245   44065 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14897.pem /etc/ssl/certs/51391683.0"
	I0416 00:30:40.005200   44065 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148972.pem && ln -fs /usr/share/ca-certificates/148972.pem /etc/ssl/certs/148972.pem"
	I0416 00:30:40.023527   44065 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148972.pem
	I0416 00:30:40.029113   44065 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr 15 23:49 /usr/share/ca-certificates/148972.pem
	I0416 00:30:40.029211   44065 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 23:49 /usr/share/ca-certificates/148972.pem
	I0416 00:30:40.029275   44065 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148972.pem
	I0416 00:30:40.035805   44065 command_runner.go:130] > 3ec20f2e
	I0416 00:30:40.035925   44065 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148972.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 00:30:40.052261   44065 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 00:30:40.066328   44065 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 00:30:40.071602   44065 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr 15 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0416 00:30:40.071851   44065 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0416 00:30:40.071914   44065 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 00:30:40.080041   44065 command_runner.go:130] > b5213941
	I0416 00:30:40.080542   44065 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 00:30:40.095705   44065 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 00:30:40.100499   44065 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 00:30:40.100520   44065 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0416 00:30:40.100526   44065 command_runner.go:130] > Device: 253,1	Inode: 6292486     Links: 1
	I0416 00:30:40.100531   44065 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0416 00:30:40.100538   44065 command_runner.go:130] > Access: 2024-04-16 00:23:56.560356475 +0000
	I0416 00:30:40.100543   44065 command_runner.go:130] > Modify: 2024-04-16 00:23:56.560356475 +0000
	I0416 00:30:40.100548   44065 command_runner.go:130] > Change: 2024-04-16 00:23:56.560356475 +0000
	I0416 00:30:40.100553   44065 command_runner.go:130] >  Birth: 2024-04-16 00:23:56.560356475 +0000
	I0416 00:30:40.100708   44065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0416 00:30:40.108744   44065 command_runner.go:130] > Certificate will not expire
	I0416 00:30:40.108798   44065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0416 00:30:40.115937   44065 command_runner.go:130] > Certificate will not expire
	I0416 00:30:40.116230   44065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0416 00:30:40.123873   44065 command_runner.go:130] > Certificate will not expire
	I0416 00:30:40.124034   44065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0416 00:30:40.133196   44065 command_runner.go:130] > Certificate will not expire
	I0416 00:30:40.133414   44065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0416 00:30:40.146774   44065 command_runner.go:130] > Certificate will not expire
	I0416 00:30:40.146865   44065 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0416 00:30:40.153213   44065 command_runner.go:130] > Certificate will not expire
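    [editor's note] The checks above run "openssl x509 -noout -in <cert> -checkend 86400", i.e. they ask whether each control-plane certificate expires within the next 24 hours (86400 seconds). Below is a minimal, hypothetical Go sketch of an equivalent check; the certificate path is copied from the log above, and the program is illustrative only, not minikube's actual implementation.

        package main

        import (
            "crypto/x509"
            "encoding/pem"
            "fmt"
            "os"
            "time"
        )

        func main() {
            // Path taken from the log above; adjust for other certificates.
            data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
            if err != nil {
                fmt.Fprintln(os.Stderr, err)
                os.Exit(1)
            }
            block, _ := pem.Decode(data)
            if block == nil {
                fmt.Fprintln(os.Stderr, "no PEM block found")
                os.Exit(1)
            }
            cert, err := x509.ParseCertificate(block.Bytes)
            if err != nil {
                fmt.Fprintln(os.Stderr, err)
                os.Exit(1)
            }
            // Same window as "-checkend 86400": does the cert expire within 24 hours?
            if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
                fmt.Println("Certificate will expire")
            } else {
                fmt.Println("Certificate will not expire")
            }
        }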
	I0416 00:30:40.153483   44065 kubeadm.go:391] StartCluster: {Name:multinode-414194 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.
3 ClusterName:multinode-414194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.140 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.81 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.64 Port:0 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fa
lse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 00:30:40.153629   44065 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0416 00:30:40.153703   44065 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 00:30:40.227844   44065 command_runner.go:130] > 0f17e92a363098fdeb203b5b88690b26d20ee0163b8c3b5931987e38331d6042
	I0416 00:30:40.227951   44065 command_runner.go:130] > cb28a7e68259cdb451f72bd51f7d647ab66e3b85ce31278e929148b4158bd23e
	I0416 00:30:40.228087   44065 command_runner.go:130] > 6c2cb5115dbfa806690a4c862b1d19a3e52fd71b6eb9e12beb5fb9dc63eaeb79
	I0416 00:30:40.228124   44065 command_runner.go:130] > 707d02997c5056b67d64184224cc9001aa6230ed5898761df3a5558f6da469b1
	I0416 00:30:40.228154   44065 command_runner.go:130] > c97fdfc017a2bc300f5a99f131f6ef95456d6c86023e424eff41cb8b65b77feb
	I0416 00:30:40.228245   44065 command_runner.go:130] > 5f7dc8a7b1688773e7beca7f7620ad25f1d7b0d0535e159594000283c4a92837
	I0416 00:30:40.228361   44065 command_runner.go:130] > d9a2b19294670872a4b4a12394a771b768c57888a157565dea93eb7cd78cebc2
	I0416 00:30:40.228474   44065 command_runner.go:130] > 9a7bc656135f0c8096e1b56fc42acf4f40bb68637358951d60e659d0460de027
	I0416 00:30:40.228621   44065 command_runner.go:130] > a2a1ac389d671ed6ed0bd3f9b99a93dd309a8a21ebd4aa3440f174b176391d24
	I0416 00:30:40.228640   44065 command_runner.go:130] > 16838f443cd34bb9609f80481078d18c59c1f868678f0f0d8d9a1e797a6d1c66
	I0416 00:30:40.228714   44065 command_runner.go:130] > 0e533992dbeeaa8b0a1310ebfd164115d6900369ae6f23f29a9c56bc79d8d3d2
	I0416 00:30:40.228775   44065 command_runner.go:130] > c5743d4076ffb9eb6579c059bfc0cea6f0d15c748843479fb19531a1f04b02a9
	I0416 00:30:40.228842   44065 command_runner.go:130] > f18d5f50d24d1c24cddbaa0a6de3faa8924dcb73302b59d862d487885e7e5cef
	I0416 00:30:40.228907   44065 command_runner.go:130] > 723a4dcdedcb2a36bdf5fc563d509e24ff5f25b28056bc8e19253f1fa6a5c380
	I0416 00:30:40.230530   44065 cri.go:89] found id: "0f17e92a363098fdeb203b5b88690b26d20ee0163b8c3b5931987e38331d6042"
	I0416 00:30:40.230548   44065 cri.go:89] found id: "cb28a7e68259cdb451f72bd51f7d647ab66e3b85ce31278e929148b4158bd23e"
	I0416 00:30:40.230554   44065 cri.go:89] found id: "6c2cb5115dbfa806690a4c862b1d19a3e52fd71b6eb9e12beb5fb9dc63eaeb79"
	I0416 00:30:40.230558   44065 cri.go:89] found id: "707d02997c5056b67d64184224cc9001aa6230ed5898761df3a5558f6da469b1"
	I0416 00:30:40.230562   44065 cri.go:89] found id: "c97fdfc017a2bc300f5a99f131f6ef95456d6c86023e424eff41cb8b65b77feb"
	I0416 00:30:40.230567   44065 cri.go:89] found id: "5f7dc8a7b1688773e7beca7f7620ad25f1d7b0d0535e159594000283c4a92837"
	I0416 00:30:40.230569   44065 cri.go:89] found id: "d9a2b19294670872a4b4a12394a771b768c57888a157565dea93eb7cd78cebc2"
	I0416 00:30:40.230572   44065 cri.go:89] found id: "9a7bc656135f0c8096e1b56fc42acf4f40bb68637358951d60e659d0460de027"
	I0416 00:30:40.230574   44065 cri.go:89] found id: "a2a1ac389d671ed6ed0bd3f9b99a93dd309a8a21ebd4aa3440f174b176391d24"
	I0416 00:30:40.230580   44065 cri.go:89] found id: "16838f443cd34bb9609f80481078d18c59c1f868678f0f0d8d9a1e797a6d1c66"
	I0416 00:30:40.230583   44065 cri.go:89] found id: "0e533992dbeeaa8b0a1310ebfd164115d6900369ae6f23f29a9c56bc79d8d3d2"
	I0416 00:30:40.230585   44065 cri.go:89] found id: "c5743d4076ffb9eb6579c059bfc0cea6f0d15c748843479fb19531a1f04b02a9"
	I0416 00:30:40.230587   44065 cri.go:89] found id: "f18d5f50d24d1c24cddbaa0a6de3faa8924dcb73302b59d862d487885e7e5cef"
	I0416 00:30:40.230590   44065 cri.go:89] found id: "723a4dcdedcb2a36bdf5fc563d509e24ff5f25b28056bc8e19253f1fa6a5c380"
	I0416 00:30:40.230596   44065 cri.go:89] found id: ""
	I0416 00:30:40.230642   44065 ssh_runner.go:195] Run: sudo runc list -f json
	I0416 00:30:40.251059   44065 command_runner.go:130] ! load container 1199477a5e1bde603f9196d87e5b3e814fd696a077091ca14528388a47c54a86: container does not exist
	I0416 00:30:40.256302   44065 command_runner.go:130] ! load container 43de4700108889fe0c8ba2929dcaaffb699c6a202ab55b2d2f1eb2f6c8113b6c: container does not exist
	
	
	==> CRI-O <==
	Apr 16 00:34:58 multinode-414194 crio[2930]: time="2024-04-16 00:34:58.207512649Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=75c7bf62-6718-4a67-8a68-7f92b600ae7e name=/runtime.v1.RuntimeService/Version
	Apr 16 00:34:58 multinode-414194 crio[2930]: time="2024-04-16 00:34:58.208628299Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8e33dff9-5d00-446c-9e31-774059f0f417 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 00:34:58 multinode-414194 crio[2930]: time="2024-04-16 00:34:58.209452577Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713227698209422697,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130111,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8e33dff9-5d00-446c-9e31-774059f0f417 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 00:34:58 multinode-414194 crio[2930]: time="2024-04-16 00:34:58.210089614Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9ff79f59-3999-4ec0-b21b-d02d51a2a6bf name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:34:58 multinode-414194 crio[2930]: time="2024-04-16 00:34:58.210169105Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9ff79f59-3999-4ec0-b21b-d02d51a2a6bf name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:34:58 multinode-414194 crio[2930]: time="2024-04-16 00:34:58.210498249Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d6ef352c32da886c4e9c9c5747fc9da3c5a81e603fcb3df68f6aa02300642476,PodSandboxId:b2689a6dd8047385bd87c5a6320af5a073e0afdb3b1208044d439fbbe75d2ef2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713227491267626158,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85c46886-a03a-43cd-a9bd-3ce8ea51f3ed,},Annotations:map[string]string{io.kubernetes.container.hash: 4312fd47,io.kubernetes.container.restartCount: 4,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f2f88200d8877162cd5050d66172db422c8244e5ad841c7f87c4e6a9ff1d29b,PodSandboxId:fdb3b4294a4123ab28e7b284031daa4b10b98cad5ccf6fb0c11807cc42f0ffc5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713227476454420131,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-sgkx5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 00b5fef9-7a2b-4e54-bda6-b721112d5496,},Annotations:map[string]string{io.kubernetes.container.hash: d2a6a9d1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08ecdae904548f8af39104f23afa280987370e7c6de4405146fe74d4adc8ea2e,PodSandboxId:c57a4cde2125e1f4910e44b8e72f5386d73c63e8bb054f8880b9ef6e4f247aa8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713227472562730811,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pd9pv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcdbff0d-43d4-45f6-81e8-cbe13209d1a6,},Annotations:map[string]string{io.kubernetes.container.hash: 5fd9d0de,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2737380ae428288236a91d128cf8678a564ea5e5c92710aef92689fdb263dae0,PodSandboxId:41e9aff954221cb3bd05fe62c62e21e55c17f283ff35c51058f03fd3ddf0256e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713227472594525235,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rb5mm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4942567c-fbdf-4a3e-9b78-6ca67f7401c4,},Annotations:map[string]string{io.kubernetes.container.hash: d89b44bf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},
{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d245241ef5b83c185c6d37a3b77ec510f66d9cbfc8b3ee28ab93535d56219874,PodSandboxId:b2689a6dd8047385bd87c5a6320af5a073e0afdb3b1208044d439fbbe75d2ef2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713227472579875266,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85c46886-a03a-43cd-a9bd-3ce8ea51f3ed,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 4312fd47,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8c4663335b279485a3ebb281532e482a104e080a9284d76a648c00269233cbb,PodSandboxId:f66a04b4ba4d9655e27ad60cba19d92374399e9f7f8b3bc2074387f858473d2e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713227468896229972,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1264c74175197ca6cd421c033473ff23,},Annotations:map[string]strin
g{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0eb348f36dc509e79515a270e877a7510e598d118a29cb47ef41895788b779c8,PodSandboxId:2bf3ff189bfb483f8f1846af0e06eca5ae337196231ce061c678e23b1a30dc0e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713227468893648820,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4de5ba43fcd25edd8391f4b2e93c4b09,},Annotations:map
[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f865fc38c7e05fae5207cd27288c53bd9b1f203382b0f691dc6e05c6b7b3ab17,PodSandboxId:70a7acf6626dd5c8242ba73fd5650c263b80507bc5368214d5f5488aaba486a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713227468878313700,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ae75e3b41ddee4e55353a2f6260637,},Annotations:map[string]string
{io.kubernetes.container.hash: 1bf99c3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:250f4d2b523d67aa310faaa736ac218abede0483f0090eec335a62d5d91e8010,PodSandboxId:e78ae9dc879b0a4e92faf3727d6a9a9fb6ff95a4ced62c0a82cd9e34aaa232b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713227468866033920,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26d136b84a5ace080d7204bad5b555f4,},Annotations:map[string]string{io.kubernetes.container.hash: 37eb4e31,io.k
ubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0ff4e26cfc61b8d5b0260e9eb83770cbc91236f8244f5ae0b56390d752d1241,PodSandboxId:35e4d91aed1d325efb701a63d12ea24ef5c1b1c236a5f73eec20549bd1e431de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713227462940440316,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pkn5q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7877a83-35ca-4241-a233-283ab4a3e4ae,},Annotations:map[string]string{io.kubernetes.container.hash: e10c48ce,io.kubernetes.container.restartCount:
2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1199477a5e1bde603f9196d87e5b3e814fd696a077091ca14528388a47c54a86,PodSandboxId:35e4d91aed1d325efb701a63d12ea24ef5c1b1c236a5f73eec20549bd1e431de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1713227439754387424,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pkn5q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7877a83-35ca-4241-a233-283ab4a3e4ae,},Annotations:map[string]string{io.kubernetes.container.hash: e10c48ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f17e92a363098fdeb203b5b88690b26d20ee0163b8c3b5931987e38331d6042,PodSandboxId:41e9aff954221cb3bd05fe62c62e21e55c17f283ff35c51058f03fd3ddf0256e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713227439405984513,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rb5mm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4942567c-fbdf-4a3e-9b78-6ca67f7401c4,},Annotations:map[string]string{io.kubernetes.container.hash: d89b44bf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":
\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb28a7e68259cdb451f72bd51f7d647ab66e3b85ce31278e929148b4158bd23e,PodSandboxId:c57a4cde2125e1f4910e44b8e72f5386d73c63e8bb054f8880b9ef6e4f247aa8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713227439300174812,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pd9pv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcdbff0d-43d4-45f6-8
1e8-cbe13209d1a6,},Annotations:map[string]string{io.kubernetes.container.hash: 5fd9d0de,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c2cb5115dbfa806690a4c862b1d19a3e52fd71b6eb9e12beb5fb9dc63eaeb79,PodSandboxId:70a7acf6626dd5c8242ba73fd5650c263b80507bc5368214d5f5488aaba486a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1713227439184602672,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ae75e3b41ddee4e55353a2f626063
7,},Annotations:map[string]string{io.kubernetes.container.hash: 1bf99c3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:707d02997c5056b67d64184224cc9001aa6230ed5898761df3a5558f6da469b1,PodSandboxId:f66a04b4ba4d9655e27ad60cba19d92374399e9f7f8b3bc2074387f858473d2e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1713227439179994867,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1264c74175197ca6cd421c033473ff23,},Annotations
:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c97fdfc017a2bc300f5a99f131f6ef95456d6c86023e424eff41cb8b65b77feb,PodSandboxId:e78ae9dc879b0a4e92faf3727d6a9a9fb6ff95a4ced62c0a82cd9e34aaa232b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713227439101595370,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26d136b84a5ace080d7204bad5b555f4,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 37eb4e31,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f7dc8a7b1688773e7beca7f7620ad25f1d7b0d0535e159594000283c4a92837,PodSandboxId:2bf3ff189bfb483f8f1846af0e06eca5ae337196231ce061c678e23b1a30dc0e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1713227438920524155,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4de5ba43fcd25edd8391f4b2e93c4b09,},Annotations:map[string]string{io.kubernetes.
container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38a3f1cb13bdbfa0468c8ad861c695760fe525774e8f1fc5eb153b77f3b4e350,PodSandboxId:9384fbc0adcf7e21b22fd218db6be959af37dca2db42c105f595c0147322d1d3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713227138698384937,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-sgkx5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 00b5fef9-7a2b-4e54-bda6-b721112d5496,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d2a6a9d1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9ff79f59-3999-4ec0-b21b-d02d51a2a6bf name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:34:58 multinode-414194 crio[2930]: time="2024-04-16 00:34:58.249575279Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4f90d2a3-0341-4dee-9183-1ed9a31f9316 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 16 00:34:58 multinode-414194 crio[2930]: time="2024-04-16 00:34:58.249877066Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:fdb3b4294a4123ab28e7b284031daa4b10b98cad5ccf6fb0c11807cc42f0ffc5,Metadata:&PodSandboxMetadata{Name:busybox-7fdf7869d9-sgkx5,Uid:00b5fef9-7a2b-4e54-bda6-b721112d5496,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713227476326528678,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7fdf7869d9-sgkx5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 00b5fef9-7a2b-4e54-bda6-b721112d5496,pod-template-hash: 7fdf7869d9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-16T00:31:12.220972535Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:41e9aff954221cb3bd05fe62c62e21e55c17f283ff35c51058f03fd3ddf0256e,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-rb5mm,Uid:4942567c-fbdf-4a3e-9b78-6ca67f7401c4,Namespace:kube-system,Attempt:
2,},State:SANDBOX_READY,CreatedAt:1713227438769697865,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-rb5mm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4942567c-fbdf-4a3e-9b78-6ca67f7401c4,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-16T00:24:51.625307002Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f66a04b4ba4d9655e27ad60cba19d92374399e9f7f8b3bc2074387f858473d2e,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-414194,Uid:1264c74175197ca6cd421c033473ff23,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713227438720618548,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1264c74175197ca6cd421c033473ff23,tier: control-plane,},Annotations:map[string]string{kubernetes.io
/config.hash: 1264c74175197ca6cd421c033473ff23,kubernetes.io/config.seen: 2024-04-16T00:24:06.886136112Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:70a7acf6626dd5c8242ba73fd5650c263b80507bc5368214d5f5488aaba486a5,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-414194,Uid:07ae75e3b41ddee4e55353a2f6260637,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713227438714247910,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ae75e3b41ddee4e55353a2f6260637,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.140:8443,kubernetes.io/config.hash: 07ae75e3b41ddee4e55353a2f6260637,kubernetes.io/config.seen: 2024-04-16T00:24:06.886138219Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b2689a6dd8047385bd87c5a6320af5a073e0
afdb3b1208044d439fbbe75d2ef2,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:85c46886-a03a-43cd-a9bd-3ce8ea51f3ed,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713227438713383689,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85c46886-a03a-43cd-a9bd-3ce8ea51f3ed,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMou
nts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-04-16T00:24:51.629696418Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:35e4d91aed1d325efb701a63d12ea24ef5c1b1c236a5f73eec20549bd1e431de,Metadata:&PodSandboxMetadata{Name:kube-proxy-pkn5q,Uid:c7877a83-35ca-4241-a233-283ab4a3e4ae,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713227438682745345,Labels:map[string]string{controller-revision-hash: 7659797656,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-pkn5q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7877a83-35ca-4241-a233-283ab4a3e4ae,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-16T00:24:19.560197666Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e
78ae9dc879b0a4e92faf3727d6a9a9fb6ff95a4ced62c0a82cd9e34aaa232b1,Metadata:&PodSandboxMetadata{Name:etcd-multinode-414194,Uid:26d136b84a5ace080d7204bad5b555f4,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713227438644477125,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26d136b84a5ace080d7204bad5b555f4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.140:2379,kubernetes.io/config.hash: 26d136b84a5ace080d7204bad5b555f4,kubernetes.io/config.seen: 2024-04-16T00:24:06.886137160Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c57a4cde2125e1f4910e44b8e72f5386d73c63e8bb054f8880b9ef6e4f247aa8,Metadata:&PodSandboxMetadata{Name:kindnet-pd9pv,Uid:bcdbff0d-43d4-45f6-81e8-cbe13209d1a6,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713227438634216439,Labels:map
[string]string{app: kindnet,controller-revision-hash: bb65b84c4,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-pd9pv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcdbff0d-43d4-45f6-81e8-cbe13209d1a6,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-16T00:24:19.536972381Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2bf3ff189bfb483f8f1846af0e06eca5ae337196231ce061c678e23b1a30dc0e,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-414194,Uid:4de5ba43fcd25edd8391f4b2e93c4b09,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713227438614437882,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4de5ba43fcd25edd8391f4b2e93c4b09,tier: control-plane,},Annotations:map[string]string{kuber
netes.io/config.hash: 4de5ba43fcd25edd8391f4b2e93c4b09,kubernetes.io/config.seen: 2024-04-16T00:24:06.886132511Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=4f90d2a3-0341-4dee-9183-1ed9a31f9316 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 16 00:34:58 multinode-414194 crio[2930]: time="2024-04-16 00:34:58.250516063Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4ba59323-02c1-4c1b-aa4f-9d3022ad63ed name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:34:58 multinode-414194 crio[2930]: time="2024-04-16 00:34:58.250576935Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4ba59323-02c1-4c1b-aa4f-9d3022ad63ed name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:34:58 multinode-414194 crio[2930]: time="2024-04-16 00:34:58.250765187Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d6ef352c32da886c4e9c9c5747fc9da3c5a81e603fcb3df68f6aa02300642476,PodSandboxId:b2689a6dd8047385bd87c5a6320af5a073e0afdb3b1208044d439fbbe75d2ef2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713227491267626158,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85c46886-a03a-43cd-a9bd-3ce8ea51f3ed,},Annotations:map[string]string{io.kubernetes.container.hash: 4312fd47,io.kubernetes.container.restartCount: 4,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f2f88200d8877162cd5050d66172db422c8244e5ad841c7f87c4e6a9ff1d29b,PodSandboxId:fdb3b4294a4123ab28e7b284031daa4b10b98cad5ccf6fb0c11807cc42f0ffc5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713227476454420131,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-sgkx5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 00b5fef9-7a2b-4e54-bda6-b721112d5496,},Annotations:map[string]string{io.kubernetes.container.hash: d2a6a9d1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08ecdae904548f8af39104f23afa280987370e7c6de4405146fe74d4adc8ea2e,PodSandboxId:c57a4cde2125e1f4910e44b8e72f5386d73c63e8bb054f8880b9ef6e4f247aa8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713227472562730811,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pd9pv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcdbff0d-43d4-45f6-81e8-cbe13209d1a6,},Annotations:map[string]string{io.kubernetes.container.hash: 5fd9d0de,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2737380ae428288236a91d128cf8678a564ea5e5c92710aef92689fdb263dae0,PodSandboxId:41e9aff954221cb3bd05fe62c62e21e55c17f283ff35c51058f03fd3ddf0256e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713227472594525235,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rb5mm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4942567c-fbdf-4a3e-9b78-6ca67f7401c4,},Annotations:map[string]string{io.kubernetes.container.hash: d89b44bf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},
{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8c4663335b279485a3ebb281532e482a104e080a9284d76a648c00269233cbb,PodSandboxId:f66a04b4ba4d9655e27ad60cba19d92374399e9f7f8b3bc2074387f858473d2e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713227468896229972,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1264c74175197ca6cd421c033473ff23,},Annotations:map
[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0eb348f36dc509e79515a270e877a7510e598d118a29cb47ef41895788b779c8,PodSandboxId:2bf3ff189bfb483f8f1846af0e06eca5ae337196231ce061c678e23b1a30dc0e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713227468893648820,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4de5ba43fcd25edd8391f4b2e93c4b09,},An
notations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f865fc38c7e05fae5207cd27288c53bd9b1f203382b0f691dc6e05c6b7b3ab17,PodSandboxId:70a7acf6626dd5c8242ba73fd5650c263b80507bc5368214d5f5488aaba486a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713227468878313700,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ae75e3b41ddee4e55353a2f6260637,},Annotations:map[
string]string{io.kubernetes.container.hash: 1bf99c3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:250f4d2b523d67aa310faaa736ac218abede0483f0090eec335a62d5d91e8010,PodSandboxId:e78ae9dc879b0a4e92faf3727d6a9a9fb6ff95a4ced62c0a82cd9e34aaa232b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713227468866033920,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26d136b84a5ace080d7204bad5b555f4,},Annotations:map[string]string{io.kubernetes.container.hash:
37eb4e31,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0ff4e26cfc61b8d5b0260e9eb83770cbc91236f8244f5ae0b56390d752d1241,PodSandboxId:35e4d91aed1d325efb701a63d12ea24ef5c1b1c236a5f73eec20549bd1e431de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713227462940440316,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pkn5q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7877a83-35ca-4241-a233-283ab4a3e4ae,},Annotations:map[string]string{io.kubernetes.container.hash: e10c48ce,io.kubernetes.container.
restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4ba59323-02c1-4c1b-aa4f-9d3022ad63ed name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:34:58 multinode-414194 crio[2930]: time="2024-04-16 00:34:58.259899197Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=97590564-fa15-4a23-9882-b50292b8498c name=/runtime.v1.RuntimeService/Version
	Apr 16 00:34:58 multinode-414194 crio[2930]: time="2024-04-16 00:34:58.259988094Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=97590564-fa15-4a23-9882-b50292b8498c name=/runtime.v1.RuntimeService/Version
	Apr 16 00:34:58 multinode-414194 crio[2930]: time="2024-04-16 00:34:58.261054145Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=af3c6958-7814-4e56-83d3-369191c5ee33 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 00:34:58 multinode-414194 crio[2930]: time="2024-04-16 00:34:58.261429900Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713227698261410350,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130111,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=af3c6958-7814-4e56-83d3-369191c5ee33 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 00:34:58 multinode-414194 crio[2930]: time="2024-04-16 00:34:58.262100168Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=629afa32-fb0a-4448-8683-b8255e9b4001 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:34:58 multinode-414194 crio[2930]: time="2024-04-16 00:34:58.262153095Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=629afa32-fb0a-4448-8683-b8255e9b4001 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:34:58 multinode-414194 crio[2930]: time="2024-04-16 00:34:58.262493780Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d6ef352c32da886c4e9c9c5747fc9da3c5a81e603fcb3df68f6aa02300642476,PodSandboxId:b2689a6dd8047385bd87c5a6320af5a073e0afdb3b1208044d439fbbe75d2ef2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713227491267626158,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85c46886-a03a-43cd-a9bd-3ce8ea51f3ed,},Annotations:map[string]string{io.kubernetes.container.hash: 4312fd47,io.kubernetes.container.restartCount: 4,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f2f88200d8877162cd5050d66172db422c8244e5ad841c7f87c4e6a9ff1d29b,PodSandboxId:fdb3b4294a4123ab28e7b284031daa4b10b98cad5ccf6fb0c11807cc42f0ffc5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713227476454420131,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-sgkx5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 00b5fef9-7a2b-4e54-bda6-b721112d5496,},Annotations:map[string]string{io.kubernetes.container.hash: d2a6a9d1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08ecdae904548f8af39104f23afa280987370e7c6de4405146fe74d4adc8ea2e,PodSandboxId:c57a4cde2125e1f4910e44b8e72f5386d73c63e8bb054f8880b9ef6e4f247aa8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713227472562730811,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pd9pv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcdbff0d-43d4-45f6-81e8-cbe13209d1a6,},Annotations:map[string]string{io.kubernetes.container.hash: 5fd9d0de,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2737380ae428288236a91d128cf8678a564ea5e5c92710aef92689fdb263dae0,PodSandboxId:41e9aff954221cb3bd05fe62c62e21e55c17f283ff35c51058f03fd3ddf0256e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713227472594525235,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rb5mm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4942567c-fbdf-4a3e-9b78-6ca67f7401c4,},Annotations:map[string]string{io.kubernetes.container.hash: d89b44bf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},
{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d245241ef5b83c185c6d37a3b77ec510f66d9cbfc8b3ee28ab93535d56219874,PodSandboxId:b2689a6dd8047385bd87c5a6320af5a073e0afdb3b1208044d439fbbe75d2ef2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713227472579875266,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85c46886-a03a-43cd-a9bd-3ce8ea51f3ed,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 4312fd47,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8c4663335b279485a3ebb281532e482a104e080a9284d76a648c00269233cbb,PodSandboxId:f66a04b4ba4d9655e27ad60cba19d92374399e9f7f8b3bc2074387f858473d2e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713227468896229972,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1264c74175197ca6cd421c033473ff23,},Annotations:map[string]strin
g{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0eb348f36dc509e79515a270e877a7510e598d118a29cb47ef41895788b779c8,PodSandboxId:2bf3ff189bfb483f8f1846af0e06eca5ae337196231ce061c678e23b1a30dc0e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713227468893648820,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4de5ba43fcd25edd8391f4b2e93c4b09,},Annotations:map
[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f865fc38c7e05fae5207cd27288c53bd9b1f203382b0f691dc6e05c6b7b3ab17,PodSandboxId:70a7acf6626dd5c8242ba73fd5650c263b80507bc5368214d5f5488aaba486a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713227468878313700,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ae75e3b41ddee4e55353a2f6260637,},Annotations:map[string]string
{io.kubernetes.container.hash: 1bf99c3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:250f4d2b523d67aa310faaa736ac218abede0483f0090eec335a62d5d91e8010,PodSandboxId:e78ae9dc879b0a4e92faf3727d6a9a9fb6ff95a4ced62c0a82cd9e34aaa232b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713227468866033920,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26d136b84a5ace080d7204bad5b555f4,},Annotations:map[string]string{io.kubernetes.container.hash: 37eb4e31,io.k
ubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0ff4e26cfc61b8d5b0260e9eb83770cbc91236f8244f5ae0b56390d752d1241,PodSandboxId:35e4d91aed1d325efb701a63d12ea24ef5c1b1c236a5f73eec20549bd1e431de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713227462940440316,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pkn5q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7877a83-35ca-4241-a233-283ab4a3e4ae,},Annotations:map[string]string{io.kubernetes.container.hash: e10c48ce,io.kubernetes.container.restartCount:
2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1199477a5e1bde603f9196d87e5b3e814fd696a077091ca14528388a47c54a86,PodSandboxId:35e4d91aed1d325efb701a63d12ea24ef5c1b1c236a5f73eec20549bd1e431de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1713227439754387424,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pkn5q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7877a83-35ca-4241-a233-283ab4a3e4ae,},Annotations:map[string]string{io.kubernetes.container.hash: e10c48ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f17e92a363098fdeb203b5b88690b26d20ee0163b8c3b5931987e38331d6042,PodSandboxId:41e9aff954221cb3bd05fe62c62e21e55c17f283ff35c51058f03fd3ddf0256e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713227439405984513,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rb5mm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4942567c-fbdf-4a3e-9b78-6ca67f7401c4,},Annotations:map[string]string{io.kubernetes.container.hash: d89b44bf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":
\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb28a7e68259cdb451f72bd51f7d647ab66e3b85ce31278e929148b4158bd23e,PodSandboxId:c57a4cde2125e1f4910e44b8e72f5386d73c63e8bb054f8880b9ef6e4f247aa8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713227439300174812,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pd9pv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcdbff0d-43d4-45f6-8
1e8-cbe13209d1a6,},Annotations:map[string]string{io.kubernetes.container.hash: 5fd9d0de,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c2cb5115dbfa806690a4c862b1d19a3e52fd71b6eb9e12beb5fb9dc63eaeb79,PodSandboxId:70a7acf6626dd5c8242ba73fd5650c263b80507bc5368214d5f5488aaba486a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1713227439184602672,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ae75e3b41ddee4e55353a2f626063
7,},Annotations:map[string]string{io.kubernetes.container.hash: 1bf99c3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:707d02997c5056b67d64184224cc9001aa6230ed5898761df3a5558f6da469b1,PodSandboxId:f66a04b4ba4d9655e27ad60cba19d92374399e9f7f8b3bc2074387f858473d2e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1713227439179994867,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1264c74175197ca6cd421c033473ff23,},Annotations
:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c97fdfc017a2bc300f5a99f131f6ef95456d6c86023e424eff41cb8b65b77feb,PodSandboxId:e78ae9dc879b0a4e92faf3727d6a9a9fb6ff95a4ced62c0a82cd9e34aaa232b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713227439101595370,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26d136b84a5ace080d7204bad5b555f4,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 37eb4e31,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f7dc8a7b1688773e7beca7f7620ad25f1d7b0d0535e159594000283c4a92837,PodSandboxId:2bf3ff189bfb483f8f1846af0e06eca5ae337196231ce061c678e23b1a30dc0e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1713227438920524155,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4de5ba43fcd25edd8391f4b2e93c4b09,},Annotations:map[string]string{io.kubernetes.
container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38a3f1cb13bdbfa0468c8ad861c695760fe525774e8f1fc5eb153b77f3b4e350,PodSandboxId:9384fbc0adcf7e21b22fd218db6be959af37dca2db42c105f595c0147322d1d3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713227138698384937,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-sgkx5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 00b5fef9-7a2b-4e54-bda6-b721112d5496,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d2a6a9d1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=629afa32-fb0a-4448-8683-b8255e9b4001 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:34:58 multinode-414194 crio[2930]: time="2024-04-16 00:34:58.309659792Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1f8da573-56d9-461f-8196-2a3a9e85e690 name=/runtime.v1.RuntimeService/Version
	Apr 16 00:34:58 multinode-414194 crio[2930]: time="2024-04-16 00:34:58.309736462Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1f8da573-56d9-461f-8196-2a3a9e85e690 name=/runtime.v1.RuntimeService/Version
	Apr 16 00:34:58 multinode-414194 crio[2930]: time="2024-04-16 00:34:58.311059081Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bf8567f0-350b-4f5d-825c-96f62f272c88 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 00:34:58 multinode-414194 crio[2930]: time="2024-04-16 00:34:58.311448989Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713227698311427572,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130111,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bf8567f0-350b-4f5d-825c-96f62f272c88 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 00:34:58 multinode-414194 crio[2930]: time="2024-04-16 00:34:58.312154146Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=55ae34b7-dd30-43a7-af15-ec900e42efd7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:34:58 multinode-414194 crio[2930]: time="2024-04-16 00:34:58.312213648Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=55ae34b7-dd30-43a7-af15-ec900e42efd7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:34:58 multinode-414194 crio[2930]: time="2024-04-16 00:34:58.312537797Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d6ef352c32da886c4e9c9c5747fc9da3c5a81e603fcb3df68f6aa02300642476,PodSandboxId:b2689a6dd8047385bd87c5a6320af5a073e0afdb3b1208044d439fbbe75d2ef2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713227491267626158,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85c46886-a03a-43cd-a9bd-3ce8ea51f3ed,},Annotations:map[string]string{io.kubernetes.container.hash: 4312fd47,io.kubernetes.container.restartCount: 4,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f2f88200d8877162cd5050d66172db422c8244e5ad841c7f87c4e6a9ff1d29b,PodSandboxId:fdb3b4294a4123ab28e7b284031daa4b10b98cad5ccf6fb0c11807cc42f0ffc5,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1713227476454420131,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-sgkx5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 00b5fef9-7a2b-4e54-bda6-b721112d5496,},Annotations:map[string]string{io.kubernetes.container.hash: d2a6a9d1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08ecdae904548f8af39104f23afa280987370e7c6de4405146fe74d4adc8ea2e,PodSandboxId:c57a4cde2125e1f4910e44b8e72f5386d73c63e8bb054f8880b9ef6e4f247aa8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1713227472562730811,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pd9pv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcdbff0d-43d4-45f6-81e8-cbe13209d1a6,},Annotations:map[string]string{io.kubernetes.container.hash: 5fd9d0de,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2737380ae428288236a91d128cf8678a564ea5e5c92710aef92689fdb263dae0,PodSandboxId:41e9aff954221cb3bd05fe62c62e21e55c17f283ff35c51058f03fd3ddf0256e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713227472594525235,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rb5mm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4942567c-fbdf-4a3e-9b78-6ca67f7401c4,},Annotations:map[string]string{io.kubernetes.container.hash: d89b44bf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},
{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d245241ef5b83c185c6d37a3b77ec510f66d9cbfc8b3ee28ab93535d56219874,PodSandboxId:b2689a6dd8047385bd87c5a6320af5a073e0afdb3b1208044d439fbbe75d2ef2,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713227472579875266,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85c46886-a03a-43cd-a9bd-3ce8ea51f3ed,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 4312fd47,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8c4663335b279485a3ebb281532e482a104e080a9284d76a648c00269233cbb,PodSandboxId:f66a04b4ba4d9655e27ad60cba19d92374399e9f7f8b3bc2074387f858473d2e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713227468896229972,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1264c74175197ca6cd421c033473ff23,},Annotations:map[string]strin
g{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0eb348f36dc509e79515a270e877a7510e598d118a29cb47ef41895788b779c8,PodSandboxId:2bf3ff189bfb483f8f1846af0e06eca5ae337196231ce061c678e23b1a30dc0e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713227468893648820,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4de5ba43fcd25edd8391f4b2e93c4b09,},Annotations:map
[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f865fc38c7e05fae5207cd27288c53bd9b1f203382b0f691dc6e05c6b7b3ab17,PodSandboxId:70a7acf6626dd5c8242ba73fd5650c263b80507bc5368214d5f5488aaba486a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713227468878313700,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ae75e3b41ddee4e55353a2f6260637,},Annotations:map[string]string
{io.kubernetes.container.hash: 1bf99c3c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:250f4d2b523d67aa310faaa736ac218abede0483f0090eec335a62d5d91e8010,PodSandboxId:e78ae9dc879b0a4e92faf3727d6a9a9fb6ff95a4ced62c0a82cd9e34aaa232b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713227468866033920,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26d136b84a5ace080d7204bad5b555f4,},Annotations:map[string]string{io.kubernetes.container.hash: 37eb4e31,io.k
ubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0ff4e26cfc61b8d5b0260e9eb83770cbc91236f8244f5ae0b56390d752d1241,PodSandboxId:35e4d91aed1d325efb701a63d12ea24ef5c1b1c236a5f73eec20549bd1e431de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713227462940440316,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pkn5q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7877a83-35ca-4241-a233-283ab4a3e4ae,},Annotations:map[string]string{io.kubernetes.container.hash: e10c48ce,io.kubernetes.container.restartCount:
2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1199477a5e1bde603f9196d87e5b3e814fd696a077091ca14528388a47c54a86,PodSandboxId:35e4d91aed1d325efb701a63d12ea24ef5c1b1c236a5f73eec20549bd1e431de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_EXITED,CreatedAt:1713227439754387424,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-pkn5q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7877a83-35ca-4241-a233-283ab4a3e4ae,},Annotations:map[string]string{io.kubernetes.container.hash: e10c48ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f17e92a363098fdeb203b5b88690b26d20ee0163b8c3b5931987e38331d6042,PodSandboxId:41e9aff954221cb3bd05fe62c62e21e55c17f283ff35c51058f03fd3ddf0256e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713227439405984513,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rb5mm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4942567c-fbdf-4a3e-9b78-6ca67f7401c4,},Annotations:map[string]string{io.kubernetes.container.hash: d89b44bf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":
\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb28a7e68259cdb451f72bd51f7d647ab66e3b85ce31278e929148b4158bd23e,PodSandboxId:c57a4cde2125e1f4910e44b8e72f5386d73c63e8bb054f8880b9ef6e4f247aa8,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1713227439300174812,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-pd9pv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcdbff0d-43d4-45f6-8
1e8-cbe13209d1a6,},Annotations:map[string]string{io.kubernetes.container.hash: 5fd9d0de,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c2cb5115dbfa806690a4c862b1d19a3e52fd71b6eb9e12beb5fb9dc63eaeb79,PodSandboxId:70a7acf6626dd5c8242ba73fd5650c263b80507bc5368214d5f5488aaba486a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1713227439184602672,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 07ae75e3b41ddee4e55353a2f626063
7,},Annotations:map[string]string{io.kubernetes.container.hash: 1bf99c3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:707d02997c5056b67d64184224cc9001aa6230ed5898761df3a5558f6da469b1,PodSandboxId:f66a04b4ba4d9655e27ad60cba19d92374399e9f7f8b3bc2074387f858473d2e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_EXITED,CreatedAt:1713227439179994867,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1264c74175197ca6cd421c033473ff23,},Annotations
:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c97fdfc017a2bc300f5a99f131f6ef95456d6c86023e424eff41cb8b65b77feb,PodSandboxId:e78ae9dc879b0a4e92faf3727d6a9a9fb6ff95a4ced62c0a82cd9e34aaa232b1,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713227439101595370,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26d136b84a5ace080d7204bad5b555f4,},Annotations:map[string]string{io.kubernetes.container.ha
sh: 37eb4e31,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f7dc8a7b1688773e7beca7f7620ad25f1d7b0d0535e159594000283c4a92837,PodSandboxId:2bf3ff189bfb483f8f1846af0e06eca5ae337196231ce061c678e23b1a30dc0e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_EXITED,CreatedAt:1713227438920524155,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-414194,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4de5ba43fcd25edd8391f4b2e93c4b09,},Annotations:map[string]string{io.kubernetes.
container.hash: 2d2557ee,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38a3f1cb13bdbfa0468c8ad861c695760fe525774e8f1fc5eb153b77f3b4e350,PodSandboxId:9384fbc0adcf7e21b22fd218db6be959af37dca2db42c105f595c0147322d1d3,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1713227138698384937,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7fdf7869d9-sgkx5,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 00b5fef9-7a2b-4e54-bda6-b721112d5496,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d2a6a9d1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=55ae34b7-dd30-43a7-af15-ec900e42efd7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d6ef352c32da8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       4                   b2689a6dd8047       storage-provisioner
	0f2f88200d887       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   fdb3b4294a412       busybox-7fdf7869d9-sgkx5
	2737380ae4282       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago       Running             coredns                   2                   41e9aff954221       coredns-76f75df574-rb5mm
	d245241ef5b83       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Exited              storage-provisioner       3                   b2689a6dd8047       storage-provisioner
	08ecdae904548       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      3 minutes ago       Running             kindnet-cni               2                   c57a4cde2125e       kindnet-pd9pv
	c8c4663335b27       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      3 minutes ago       Running             kube-scheduler            2                   f66a04b4ba4d9       kube-scheduler-multinode-414194
	0eb348f36dc50       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      3 minutes ago       Running             kube-controller-manager   2                   2bf3ff189bfb4       kube-controller-manager-multinode-414194
	f865fc38c7e05       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      3 minutes ago       Running             kube-apiserver            2                   70a7acf6626dd       kube-apiserver-multinode-414194
	250f4d2b523d6       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      3 minutes ago       Running             etcd                      2                   e78ae9dc879b0       etcd-multinode-414194
	c0ff4e26cfc61       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      3 minutes ago       Running             kube-proxy                2                   35e4d91aed1d3       kube-proxy-pkn5q
	1199477a5e1bd       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392                                      4 minutes ago       Exited              kube-proxy                1                   35e4d91aed1d3       kube-proxy-pkn5q
	0f17e92a36309       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Exited              coredns                   1                   41e9aff954221       coredns-76f75df574-rb5mm
	cb28a7e68259c       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      4 minutes ago       Exited              kindnet-cni               1                   c57a4cde2125e       kindnet-pd9pv
	6c2cb5115dbfa       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533                                      4 minutes ago       Exited              kube-apiserver            1                   70a7acf6626dd       kube-apiserver-multinode-414194
	707d02997c505       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b                                      4 minutes ago       Exited              kube-scheduler            1                   f66a04b4ba4d9       kube-scheduler-multinode-414194
	c97fdfc017a2b       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      4 minutes ago       Exited              etcd                      1                   e78ae9dc879b0       etcd-multinode-414194
	5f7dc8a7b1688       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3                                      4 minutes ago       Exited              kube-controller-manager   1                   2bf3ff189bfb4       kube-controller-manager-multinode-414194
	38a3f1cb13bdb       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   9384fbc0adcf7       busybox-7fdf7869d9-sgkx5
	
	
	==> coredns [0f17e92a363098fdeb203b5b88690b26d20ee0163b8c3b5931987e38331d6042] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:36259 - 39753 "HINFO IN 1965056522265378900.7264945599064591086. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010955615s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [2737380ae428288236a91d128cf8678a564ea5e5c92710aef92689fdb263dae0] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:40690 - 19952 "HINFO IN 8961242322719970751.7987005008998165502. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009987741s
	
	
	==> describe nodes <==
	Name:               multinode-414194
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-414194
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e98a202b00bb48daab8bbee2f0c7bcf1556f8388
	                    minikube.k8s.io/name=multinode-414194
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_16T00_24_07_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 00:24:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-414194
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 00:34:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 00:31:12 +0000   Tue, 16 Apr 2024 00:24:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 00:31:12 +0000   Tue, 16 Apr 2024 00:24:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 00:31:12 +0000   Tue, 16 Apr 2024 00:24:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 00:31:12 +0000   Tue, 16 Apr 2024 00:24:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.140
	  Hostname:    multinode-414194
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f2305a4e47b6400c858ef4918c3f5b61
	  System UUID:                f2305a4e-47b6-400c-858e-f4918c3f5b61
	  Boot ID:                    4e6f970c-ffda-4afc-9055-e66a25cd3b8c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-sgkx5                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m23s
	  kube-system                 coredns-76f75df574-rb5mm                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-414194                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-pd9pv                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-414194             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-414194    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-pkn5q                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-414194             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 10m                    kube-proxy       
	  Normal   Starting                 3m46s                  kube-proxy       
	  Normal   Starting                 4m15s                  kube-proxy       
	  Normal   Starting                 10m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node multinode-414194 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node multinode-414194 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)      kubelet          Node multinode-414194 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 10m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  10m                    kubelet          Node multinode-414194 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m                    kubelet          Node multinode-414194 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m                    kubelet          Node multinode-414194 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                    node-controller  Node multinode-414194 event: Registered Node multinode-414194 in Controller
	  Normal   NodeReady                10m                    kubelet          Node multinode-414194 status is now: NodeReady
	  Warning  ContainerGCFailed        4m52s (x2 over 5m52s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m3s                   node-controller  Node multinode-414194 event: Registered Node multinode-414194 in Controller
	  Normal   Starting                 3m50s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  3m50s (x8 over 3m50s)  kubelet          Node multinode-414194 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m50s (x8 over 3m50s)  kubelet          Node multinode-414194 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m50s (x7 over 3m50s)  kubelet          Node multinode-414194 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  3m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           3m33s                  node-controller  Node multinode-414194 event: Registered Node multinode-414194 in Controller
	
	
	Name:               multinode-414194-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-414194-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e98a202b00bb48daab8bbee2f0c7bcf1556f8388
	                    minikube.k8s.io/name=multinode-414194
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_16T00_31_55_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 00:31:54 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-414194-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 00:32:35 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 16 Apr 2024 00:32:24 +0000   Tue, 16 Apr 2024 00:33:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 16 Apr 2024 00:32:24 +0000   Tue, 16 Apr 2024 00:33:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 16 Apr 2024 00:32:24 +0000   Tue, 16 Apr 2024 00:33:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 16 Apr 2024 00:32:24 +0000   Tue, 16 Apr 2024 00:33:15 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.81
	  Hostname:    multinode-414194-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 768cfe7bcb95478abb083e0d24846f92
	  System UUID:                768cfe7b-cb95-478a-bb08-3e0d24846f92
	  Boot ID:                    462b099d-7b24-43c9-ac3e-cf002e7a4151
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7fdf7869d9-b9fgh    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m9s
	  kube-system                 kindnet-pcwvx               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m35s
	  kube-system                 kube-proxy-2qhl9            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m30s                  kube-proxy       
	  Normal  Starting                 2m59s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m35s (x2 over 9m35s)  kubelet          Node multinode-414194-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m35s (x2 over 9m35s)  kubelet          Node multinode-414194-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m35s (x2 over 9m35s)  kubelet          Node multinode-414194-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m25s                  kubelet          Node multinode-414194-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m4s (x2 over 3m4s)    kubelet          Node multinode-414194-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m4s (x2 over 3m4s)    kubelet          Node multinode-414194-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m4s (x2 over 3m4s)    kubelet          Node multinode-414194-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                2m55s                  kubelet          Node multinode-414194-m02 status is now: NodeReady
	  Normal  NodeNotReady             103s                   node-controller  Node multinode-414194-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.183691] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.105896] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.272747] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +4.263584] systemd-fstab-generator[762]: Ignoring "noauto" option for root device
	[  +4.760405] systemd-fstab-generator[948]: Ignoring "noauto" option for root device
	[  +0.062292] kauditd_printk_skb: 158 callbacks suppressed
	[Apr16 00:24] systemd-fstab-generator[1289]: Ignoring "noauto" option for root device
	[  +0.075871] kauditd_printk_skb: 69 callbacks suppressed
	[ +13.011628] systemd-fstab-generator[1479]: Ignoring "noauto" option for root device
	[  +0.128979] kauditd_printk_skb: 21 callbacks suppressed
	[ +32.443726] kauditd_printk_skb: 60 callbacks suppressed
	[Apr16 00:25] kauditd_printk_skb: 12 callbacks suppressed
	[Apr16 00:30] systemd-fstab-generator[2780]: Ignoring "noauto" option for root device
	[  +0.145497] systemd-fstab-generator[2792]: Ignoring "noauto" option for root device
	[  +0.201270] systemd-fstab-generator[2823]: Ignoring "noauto" option for root device
	[  +0.183819] systemd-fstab-generator[2887]: Ignoring "noauto" option for root device
	[  +0.288437] systemd-fstab-generator[2915]: Ignoring "noauto" option for root device
	[  +0.763249] systemd-fstab-generator[3024]: Ignoring "noauto" option for root device
	[  +5.050916] kauditd_printk_skb: 207 callbacks suppressed
	[Apr16 00:31] systemd-fstab-generator[4185]: Ignoring "noauto" option for root device
	[  +0.088423] kauditd_printk_skb: 1 callbacks suppressed
	[  +5.305460] kauditd_printk_skb: 62 callbacks suppressed
	[ +12.332318] kauditd_printk_skb: 3 callbacks suppressed
	[  +2.822124] systemd-fstab-generator[4938]: Ignoring "noauto" option for root device
	[  +2.832292] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [250f4d2b523d67aa310faaa736ac218abede0483f0090eec335a62d5d91e8010] <==
	{"level":"info","ts":"2024-04-16T00:31:09.237559Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-16T00:31:09.237574Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-16T00:31:09.238192Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac switched to configuration voters=(15657868212029965228)"}
	{"level":"info","ts":"2024-04-16T00:31:09.238315Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e5cf977c4e262fb4","local-member-id":"d94bec2e0ded43ac","added-peer-id":"d94bec2e0ded43ac","added-peer-peer-urls":["https://192.168.39.140:2380"]}
	{"level":"info","ts":"2024-04-16T00:31:09.238421Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e5cf977c4e262fb4","local-member-id":"d94bec2e0ded43ac","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T00:31:09.238472Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T00:31:09.257123Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-16T00:31:09.25731Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"d94bec2e0ded43ac","initial-advertise-peer-urls":["https://192.168.39.140:2380"],"listen-peer-urls":["https://192.168.39.140:2380"],"advertise-client-urls":["https://192.168.39.140:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.140:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-16T00:31:09.257353Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-16T00:31:09.258887Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.140:2380"}
	{"level":"info","ts":"2024-04-16T00:31:09.258921Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.140:2380"}
	{"level":"info","ts":"2024-04-16T00:31:10.800987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac is starting a new election at term 3"}
	{"level":"info","ts":"2024-04-16T00:31:10.801078Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac became pre-candidate at term 3"}
	{"level":"info","ts":"2024-04-16T00:31:10.80114Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac received MsgPreVoteResp from d94bec2e0ded43ac at term 3"}
	{"level":"info","ts":"2024-04-16T00:31:10.801175Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac became candidate at term 4"}
	{"level":"info","ts":"2024-04-16T00:31:10.801186Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac received MsgVoteResp from d94bec2e0ded43ac at term 4"}
	{"level":"info","ts":"2024-04-16T00:31:10.801199Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac became leader at term 4"}
	{"level":"info","ts":"2024-04-16T00:31:10.801209Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d94bec2e0ded43ac elected leader d94bec2e0ded43ac at term 4"}
	{"level":"info","ts":"2024-04-16T00:31:10.80835Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-16T00:31:10.808946Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"d94bec2e0ded43ac","local-member-attributes":"{Name:multinode-414194 ClientURLs:[https://192.168.39.140:2379]}","request-path":"/0/members/d94bec2e0ded43ac/attributes","cluster-id":"e5cf977c4e262fb4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-16T00:31:10.809216Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-16T00:31:10.809492Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-16T00:31:10.809572Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-16T00:31:10.811912Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-16T00:31:10.811982Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.140:2379"}
	
	
	==> etcd [c97fdfc017a2bc300f5a99f131f6ef95456d6c86023e424eff41cb8b65b77feb] <==
	{"level":"info","ts":"2024-04-16T00:30:40.130655Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T00:30:41.724202Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-16T00:30:41.72424Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-16T00:30:41.724274Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac received MsgPreVoteResp from d94bec2e0ded43ac at term 2"}
	{"level":"info","ts":"2024-04-16T00:30:41.724287Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac became candidate at term 3"}
	{"level":"info","ts":"2024-04-16T00:30:41.724293Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac received MsgVoteResp from d94bec2e0ded43ac at term 3"}
	{"level":"info","ts":"2024-04-16T00:30:41.724315Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d94bec2e0ded43ac became leader at term 3"}
	{"level":"info","ts":"2024-04-16T00:30:41.724326Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d94bec2e0ded43ac elected leader d94bec2e0ded43ac at term 3"}
	{"level":"info","ts":"2024-04-16T00:30:41.727611Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-16T00:30:41.727556Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"d94bec2e0ded43ac","local-member-attributes":"{Name:multinode-414194 ClientURLs:[https://192.168.39.140:2379]}","request-path":"/0/members/d94bec2e0ded43ac/attributes","cluster-id":"e5cf977c4e262fb4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-16T00:30:41.729039Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-16T00:30:41.729263Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-16T00:30:41.729276Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-16T00:30:41.72962Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.140:2379"}
	{"level":"info","ts":"2024-04-16T00:30:41.731444Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-16T00:31:05.975647Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-04-16T00:31:05.975721Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-414194","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.140:2380"],"advertise-client-urls":["https://192.168.39.140:2379"]}
	{"level":"warn","ts":"2024-04-16T00:31:05.975863Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.140:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-16T00:31:05.975904Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.140:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-16T00:31:05.975997Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-16T00:31:05.976005Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-16T00:31:05.977632Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"d94bec2e0ded43ac","current-leader-member-id":"d94bec2e0ded43ac"}
	{"level":"info","ts":"2024-04-16T00:31:05.981165Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.140:2380"}
	{"level":"info","ts":"2024-04-16T00:31:05.98134Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.140:2380"}
	{"level":"info","ts":"2024-04-16T00:31:05.981354Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-414194","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.140:2380"],"advertise-client-urls":["https://192.168.39.140:2379"]}
	
	
	==> kernel <==
	 00:34:58 up 11 min,  0 users,  load average: 1.17, 0.48, 0.22
	Linux multinode-414194 5.10.207 #1 SMP Mon Apr 15 15:01:07 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [08ecdae904548f8af39104f23afa280987370e7c6de4405146fe74d4adc8ea2e] <==
	I0416 00:33:54.468717       1 main.go:250] Node multinode-414194-m02 has CIDR [10.244.1.0/24] 
	I0416 00:34:04.481649       1 main.go:223] Handling node with IPs: map[192.168.39.140:{}]
	I0416 00:34:04.481953       1 main.go:227] handling current node
	I0416 00:34:04.482008       1 main.go:223] Handling node with IPs: map[192.168.39.81:{}]
	I0416 00:34:04.482033       1 main.go:250] Node multinode-414194-m02 has CIDR [10.244.1.0/24] 
	I0416 00:34:14.487585       1 main.go:223] Handling node with IPs: map[192.168.39.140:{}]
	I0416 00:34:14.487673       1 main.go:227] handling current node
	I0416 00:34:14.487696       1 main.go:223] Handling node with IPs: map[192.168.39.81:{}]
	I0416 00:34:14.487713       1 main.go:250] Node multinode-414194-m02 has CIDR [10.244.1.0/24] 
	I0416 00:34:24.500422       1 main.go:223] Handling node with IPs: map[192.168.39.140:{}]
	I0416 00:34:24.500546       1 main.go:227] handling current node
	I0416 00:34:24.500618       1 main.go:223] Handling node with IPs: map[192.168.39.81:{}]
	I0416 00:34:24.500671       1 main.go:250] Node multinode-414194-m02 has CIDR [10.244.1.0/24] 
	I0416 00:34:34.508181       1 main.go:223] Handling node with IPs: map[192.168.39.140:{}]
	I0416 00:34:34.508269       1 main.go:227] handling current node
	I0416 00:34:34.508292       1 main.go:223] Handling node with IPs: map[192.168.39.81:{}]
	I0416 00:34:34.508310       1 main.go:250] Node multinode-414194-m02 has CIDR [10.244.1.0/24] 
	I0416 00:34:44.521244       1 main.go:223] Handling node with IPs: map[192.168.39.140:{}]
	I0416 00:34:44.521509       1 main.go:227] handling current node
	I0416 00:34:44.521574       1 main.go:223] Handling node with IPs: map[192.168.39.81:{}]
	I0416 00:34:44.521601       1 main.go:250] Node multinode-414194-m02 has CIDR [10.244.1.0/24] 
	I0416 00:34:54.527221       1 main.go:223] Handling node with IPs: map[192.168.39.140:{}]
	I0416 00:34:54.527336       1 main.go:227] handling current node
	I0416 00:34:54.527371       1 main.go:223] Handling node with IPs: map[192.168.39.81:{}]
	I0416 00:34:54.527395       1 main.go:250] Node multinode-414194-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [cb28a7e68259cdb451f72bd51f7d647ab66e3b85ce31278e929148b4158bd23e] <==
	I0416 00:30:39.899271       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0416 00:30:39.899327       1 main.go:107] hostIP = 192.168.39.140
	podIP = 192.168.39.140
	I0416 00:30:39.899449       1 main.go:116] setting mtu 1500 for CNI 
	I0416 00:30:39.899465       1 main.go:146] kindnetd IP family: "ipv4"
	I0416 00:30:39.899480       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0416 00:30:43.246276       1 main.go:223] Handling node with IPs: map[192.168.39.140:{}]
	I0416 00:30:43.246328       1 main.go:227] handling current node
	I0416 00:30:43.248229       1 main.go:223] Handling node with IPs: map[192.168.39.81:{}]
	I0416 00:30:43.248337       1 main.go:250] Node multinode-414194-m02 has CIDR [10.244.1.0/24] 
	I0416 00:30:43.292713       1 main.go:223] Handling node with IPs: map[192.168.39.64:{}]
	I0416 00:30:43.292777       1 main.go:250] Node multinode-414194-m03 has CIDR [10.244.3.0/24] 
	I0416 00:30:53.300472       1 main.go:223] Handling node with IPs: map[192.168.39.140:{}]
	I0416 00:30:53.300562       1 main.go:227] handling current node
	I0416 00:30:53.300589       1 main.go:223] Handling node with IPs: map[192.168.39.81:{}]
	I0416 00:30:53.300607       1 main.go:250] Node multinode-414194-m02 has CIDR [10.244.1.0/24] 
	I0416 00:30:53.300734       1 main.go:223] Handling node with IPs: map[192.168.39.64:{}]
	I0416 00:30:53.300759       1 main.go:250] Node multinode-414194-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [6c2cb5115dbfa806690a4c862b1d19a3e52fd71b6eb9e12beb5fb9dc63eaeb79] <==
	I0416 00:30:55.776032       1 controller.go:115] Shutting down OpenAPI V3 controller
	I0416 00:30:55.776062       1 controller.go:161] Shutting down OpenAPI controller
	I0416 00:30:55.776071       1 crd_finalizer.go:278] Shutting down CRDFinalizer
	I0416 00:30:55.776085       1 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
	I0416 00:30:55.776127       1 apiapproval_controller.go:198] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I0416 00:30:55.776140       1 customresource_discovery_controller.go:325] Shutting down DiscoveryController
	I0416 00:30:55.776155       1 nonstructuralschema_controller.go:204] Shutting down NonStructuralSchemaConditionController
	I0416 00:30:55.776164       1 naming_controller.go:302] Shutting down NamingConditionController
	I0416 00:30:55.776183       1 storage_flowcontrol.go:187] APF bootstrap ensurer is exiting
	I0416 00:30:55.776621       1 dynamic_serving_content.go:146] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0416 00:30:55.776718       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0416 00:30:55.776901       1 controller.go:159] Shutting down quota evaluator
	I0416 00:30:55.776993       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0416 00:30:55.777062       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0416 00:30:55.777099       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0416 00:30:55.777163       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0416 00:30:55.777199       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0416 00:30:55.777240       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0416 00:30:55.777654       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0416 00:30:55.777866       1 controller.go:178] quota evaluator worker shutdown
	I0416 00:30:55.777903       1 controller.go:178] quota evaluator worker shutdown
	I0416 00:30:55.777929       1 controller.go:178] quota evaluator worker shutdown
	I0416 00:30:55.777953       1 controller.go:178] quota evaluator worker shutdown
	I0416 00:30:55.779314       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0416 00:30:55.777003       1 controller.go:178] quota evaluator worker shutdown
	
	
	==> kube-apiserver [f865fc38c7e05fae5207cd27288c53bd9b1f203382b0f691dc6e05c6b7b3ab17] <==
	I0416 00:31:12.073111       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0416 00:31:12.073459       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0416 00:31:12.073494       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0416 00:31:12.164318       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0416 00:31:12.171000       1 shared_informer.go:318] Caches are synced for configmaps
	I0416 00:31:12.174218       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0416 00:31:12.178435       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0416 00:31:12.181128       1 aggregator.go:165] initial CRD sync complete...
	I0416 00:31:12.181184       1 autoregister_controller.go:141] Starting autoregister controller
	I0416 00:31:12.181208       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0416 00:31:12.181231       1 cache.go:39] Caches are synced for autoregister controller
	I0416 00:31:12.192989       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0416 00:31:12.193036       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0416 00:31:12.193134       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0416 00:31:12.211046       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0416 00:31:12.221329       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0416 00:31:13.067751       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0416 00:31:13.310201       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.140]
	I0416 00:31:13.311564       1 controller.go:624] quota admission added evaluator for: endpoints
	I0416 00:31:13.317507       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0416 00:31:14.158702       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0416 00:31:14.314513       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0416 00:31:14.338828       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0416 00:31:14.416096       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0416 00:31:14.423614       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [0eb348f36dc509e79515a270e877a7510e598d118a29cb47ef41895788b779c8] <==
	I0416 00:31:56.265536       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="69.746µs"
	I0416 00:31:56.266511       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-ms6xm" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-7fdf7869d9-ms6xm"
	I0416 00:32:03.371184       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-414194-m02"
	I0416 00:32:03.394042       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="45.608µs"
	I0416 00:32:03.410891       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="95.575µs"
	I0416 00:32:05.412104       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-b9fgh" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-7fdf7869d9-b9fgh"
	I0416 00:32:06.753655       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="5.700336ms"
	I0416 00:32:06.754035       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="49.964µs"
	I0416 00:32:21.616561       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-414194-m02"
	I0416 00:32:22.629653       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-414194-m03\" does not exist"
	I0416 00:32:22.631012       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-414194-m02"
	I0416 00:32:22.651258       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-414194-m03" podCIDRs=["10.244.2.0/24"]
	I0416 00:32:31.002639       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-414194-m02"
	I0416 00:32:36.835087       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-414194-m02"
	I0416 00:32:40.427534       1 event.go:376] "Event occurred" object="multinode-414194-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-414194-m03 event: Removing Node multinode-414194-m03 from Controller"
	I0416 00:33:15.448194       1 event.go:376] "Event occurred" object="multinode-414194-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-414194-m02 status is now: NodeNotReady"
	I0416 00:33:15.469057       1 event.go:376] "Event occurred" object="default/busybox-7fdf7869d9-b9fgh" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0416 00:33:15.481244       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="14.290687ms"
	I0416 00:33:15.481332       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-7fdf7869d9" duration="42.028µs"
	I0416 00:33:15.493200       1 event.go:376] "Event occurred" object="kube-system/kindnet-pcwvx" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0416 00:33:15.507009       1 event.go:376] "Event occurred" object="kube-system/kube-proxy-2qhl9" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0416 00:33:25.319697       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kindnet-9vrg8"
	I0416 00:33:25.348307       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kindnet-9vrg8"
	I0416 00:33:25.348352       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kube-proxy-65kpd"
	I0416 00:33:25.369534       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kube-proxy-65kpd"
	
	
	==> kube-controller-manager [5f7dc8a7b1688773e7beca7f7620ad25f1d7b0d0535e159594000283c4a92837] <==
	I0416 00:30:55.421473       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-414194"
	I0416 00:30:55.421609       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-414194-m03"
	I0416 00:30:55.421710       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-414194-m02"
	I0416 00:30:55.421834       1 node_lifecycle_controller.go:1068] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0416 00:30:55.440282       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="37.650585ms"
	I0416 00:30:55.440404       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="42.226µs"
	I0416 00:30:55.559198       1 shared_informer.go:318] Caches are synced for resource quota
	I0416 00:30:55.564402       1 shared_informer.go:318] Caches are synced for resource quota
	W0416 00:30:55.809068       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PriorityLevelConfiguration: Get "https://192.168.39.140:8443/apis/flowcontrol.apiserver.k8s.io/v1/prioritylevelconfigurations?limit=500&resourceVersion=0": dial tcp 192.168.39.140:8443: connect: connection refused
	E0416 00:30:55.809192       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PriorityLevelConfiguration: failed to list *v1.PriorityLevelConfiguration: Get "https://192.168.39.140:8443/apis/flowcontrol.apiserver.k8s.io/v1/prioritylevelconfigurations?limit=500&resourceVersion=0": dial tcp 192.168.39.140:8443: connect: connection refused
	W0416 00:30:55.859448       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ClusterRoleBinding: Get "https://192.168.39.140:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?limit=500&resourceVersion=0": dial tcp 192.168.39.140:8443: connect: connection refused
	E0416 00:30:55.859593       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ClusterRoleBinding: failed to list *v1.ClusterRoleBinding: Get "https://192.168.39.140:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?limit=500&resourceVersion=0": dial tcp 192.168.39.140:8443: connect: connection refused
	W0416 00:30:56.761159       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ClusterRoleBinding: Get "https://192.168.39.140:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?limit=500&resourceVersion=0": dial tcp 192.168.39.140:8443: connect: connection refused
	E0416 00:30:56.761198       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ClusterRoleBinding: failed to list *v1.ClusterRoleBinding: Get "https://192.168.39.140:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?limit=500&resourceVersion=0": dial tcp 192.168.39.140:8443: connect: connection refused
	W0416 00:30:57.143917       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PriorityLevelConfiguration: Get "https://192.168.39.140:8443/apis/flowcontrol.apiserver.k8s.io/v1/prioritylevelconfigurations?limit=500&resourceVersion=0": dial tcp 192.168.39.140:8443: connect: connection refused
	E0416 00:30:57.143970       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PriorityLevelConfiguration: failed to list *v1.PriorityLevelConfiguration: Get "https://192.168.39.140:8443/apis/flowcontrol.apiserver.k8s.io/v1/prioritylevelconfigurations?limit=500&resourceVersion=0": dial tcp 192.168.39.140:8443: connect: connection refused
	W0416 00:30:58.622528       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ClusterRoleBinding: Get "https://192.168.39.140:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?limit=500&resourceVersion=0": dial tcp 192.168.39.140:8443: connect: connection refused
	E0416 00:30:58.622720       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ClusterRoleBinding: failed to list *v1.ClusterRoleBinding: Get "https://192.168.39.140:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?limit=500&resourceVersion=0": dial tcp 192.168.39.140:8443: connect: connection refused
	W0416 00:30:59.418769       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PriorityLevelConfiguration: Get "https://192.168.39.140:8443/apis/flowcontrol.apiserver.k8s.io/v1/prioritylevelconfigurations?limit=500&resourceVersion=0": dial tcp 192.168.39.140:8443: connect: connection refused
	E0416 00:30:59.418872       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PriorityLevelConfiguration: failed to list *v1.PriorityLevelConfiguration: Get "https://192.168.39.140:8443/apis/flowcontrol.apiserver.k8s.io/v1/prioritylevelconfigurations?limit=500&resourceVersion=0": dial tcp 192.168.39.140:8443: connect: connection refused
	W0416 00:31:04.054022       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ClusterRoleBinding: Get "https://192.168.39.140:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?limit=500&resourceVersion=0": dial tcp 192.168.39.140:8443: connect: connection refused
	E0416 00:31:04.054078       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ClusterRoleBinding: failed to list *v1.ClusterRoleBinding: Get "https://192.168.39.140:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?limit=500&resourceVersion=0": dial tcp 192.168.39.140:8443: connect: connection refused
	E0416 00:31:05.527022       1 controller_utils.go:203] unable to taint [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoExecute,TimeAdded:2024-04-16 00:31:05.526432719 +0000 UTC m=+26.144827065,}] unresponsive Node "multinode-414194-m02": Get "https://192.168.39.140:8443/api/v1/nodes/multinode-414194-m02?resourceVersion=0": dial tcp 192.168.39.140:8443: connect: connection refused
	W0416 00:31:05.662066       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PriorityLevelConfiguration: Get "https://192.168.39.140:8443/apis/flowcontrol.apiserver.k8s.io/v1/prioritylevelconfigurations?limit=500&resourceVersion=0": dial tcp 192.168.39.140:8443: connect: connection refused
	E0416 00:31:05.662176       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PriorityLevelConfiguration: failed to list *v1.PriorityLevelConfiguration: Get "https://192.168.39.140:8443/apis/flowcontrol.apiserver.k8s.io/v1/prioritylevelconfigurations?limit=500&resourceVersion=0": dial tcp 192.168.39.140:8443: connect: connection refused
	
	
	==> kube-proxy [1199477a5e1bde603f9196d87e5b3e814fd696a077091ca14528388a47c54a86] <==
	I0416 00:30:41.006709       1 server_others.go:72] "Using iptables proxy"
	I0416 00:30:43.258460       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.140"]
	I0416 00:30:43.333263       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0416 00:30:43.333334       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0416 00:30:43.333364       1 server_others.go:168] "Using iptables Proxier"
	I0416 00:30:43.336334       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0416 00:30:43.336549       1 server.go:865] "Version info" version="v1.29.3"
	I0416 00:30:43.336724       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 00:30:43.338148       1 config.go:188] "Starting service config controller"
	I0416 00:30:43.338215       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0416 00:30:43.338258       1 config.go:97] "Starting endpoint slice config controller"
	I0416 00:30:43.338275       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0416 00:30:43.339692       1 config.go:315] "Starting node config controller"
	I0416 00:30:43.340366       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0416 00:30:43.439370       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0416 00:30:43.439481       1 shared_informer.go:318] Caches are synced for service config
	I0416 00:30:43.441864       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [c0ff4e26cfc61b8d5b0260e9eb83770cbc91236f8244f5ae0b56390d752d1241] <==
	I0416 00:31:03.057232       1 server_others.go:72] "Using iptables proxy"
	E0416 00:31:03.059289       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/multinode-414194\": dial tcp 192.168.39.140:8443: connect: connection refused"
	E0416 00:31:04.238139       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/multinode-414194\": dial tcp 192.168.39.140:8443: connect: connection refused"
	E0416 00:31:06.479622       1 server.go:1039] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/multinode-414194\": dial tcp 192.168.39.140:8443: connect: connection refused"
	I0416 00:31:12.244637       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.140"]
	I0416 00:31:12.349463       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0416 00:31:12.349707       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0416 00:31:12.349997       1 server_others.go:168] "Using iptables Proxier"
	I0416 00:31:12.360111       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0416 00:31:12.360405       1 server.go:865] "Version info" version="v1.29.3"
	I0416 00:31:12.360441       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 00:31:12.364880       1 config.go:188] "Starting service config controller"
	I0416 00:31:12.365000       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0416 00:31:12.365137       1 config.go:97] "Starting endpoint slice config controller"
	I0416 00:31:12.365236       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0416 00:31:12.367648       1 config.go:315] "Starting node config controller"
	I0416 00:31:12.367678       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0416 00:31:12.466304       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0416 00:31:12.466425       1 shared_informer.go:318] Caches are synced for service config
	I0416 00:31:12.468559       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [707d02997c5056b67d64184224cc9001aa6230ed5898761df3a5558f6da469b1] <==
	W0416 00:30:43.203329       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0416 00:30:43.203360       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0416 00:30:43.203414       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0416 00:30:43.203422       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0416 00:30:43.203465       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0416 00:30:43.203495       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0416 00:30:43.203546       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0416 00:30:43.203576       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0416 00:30:43.203639       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0416 00:30:43.203667       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0416 00:30:43.203706       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0416 00:30:43.203736       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0416 00:30:43.204558       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0416 00:30:43.204658       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0416 00:30:43.204695       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E0416 00:30:43.204706       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W0416 00:30:43.205588       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	E0416 00:30:43.205692       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	W0416 00:30:43.210642       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E0416 00:30:43.210771       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	I0416 00:30:44.181482       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0416 00:31:05.837534       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0416 00:31:05.837660       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0416 00:31:05.837887       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0416 00:31:05.838020       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c8c4663335b279485a3ebb281532e482a104e080a9284d76a648c00269233cbb] <==
	I0416 00:31:09.970353       1 serving.go:380] Generated self-signed cert in-memory
	W0416 00:31:12.112481       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0416 00:31:12.112535       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0416 00:31:12.112553       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0416 00:31:12.112560       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0416 00:31:12.235548       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.3"
	I0416 00:31:12.235635       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 00:31:12.269130       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0416 00:31:12.269242       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0416 00:31:12.287420       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0416 00:31:12.288903       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0416 00:31:12.390319       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 16 00:31:12 multinode-414194 kubelet[4192]: I0416 00:31:12.525687    4192 scope.go:117] "RemoveContainer" containerID="cb28a7e68259cdb451f72bd51f7d647ab66e3b85ce31278e929148b4158bd23e"
	Apr 16 00:31:12 multinode-414194 kubelet[4192]: I0416 00:31:12.528337    4192 scope.go:117] "RemoveContainer" containerID="8e5681569ff0fbb7477ce22d73f2e2d7dd508535650d6e8e73008d8fbe334e0f"
	Apr 16 00:31:12 multinode-414194 kubelet[4192]: I0416 00:31:12.538506    4192 scope.go:117] "RemoveContainer" containerID="0f17e92a363098fdeb203b5b88690b26d20ee0163b8c3b5931987e38331d6042"
	Apr 16 00:31:17 multinode-414194 kubelet[4192]: I0416 00:31:17.144392    4192 scope.go:117] "RemoveContainer" containerID="8e5681569ff0fbb7477ce22d73f2e2d7dd508535650d6e8e73008d8fbe334e0f"
	Apr 16 00:31:17 multinode-414194 kubelet[4192]: I0416 00:31:17.144703    4192 scope.go:117] "RemoveContainer" containerID="d245241ef5b83c185c6d37a3b77ec510f66d9cbfc8b3ee28ab93535d56219874"
	Apr 16 00:31:17 multinode-414194 kubelet[4192]: E0416 00:31:17.144973    4192 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(85c46886-a03a-43cd-a9bd-3ce8ea51f3ed)\"" pod="kube-system/storage-provisioner" podUID="85c46886-a03a-43cd-a9bd-3ce8ea51f3ed"
	Apr 16 00:31:31 multinode-414194 kubelet[4192]: I0416 00:31:31.249461    4192 scope.go:117] "RemoveContainer" containerID="d245241ef5b83c185c6d37a3b77ec510f66d9cbfc8b3ee28ab93535d56219874"
	Apr 16 00:32:08 multinode-414194 kubelet[4192]: E0416 00:32:08.277837    4192 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 00:32:08 multinode-414194 kubelet[4192]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 00:32:08 multinode-414194 kubelet[4192]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 00:32:08 multinode-414194 kubelet[4192]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 00:32:08 multinode-414194 kubelet[4192]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 00:32:08 multinode-414194 kubelet[4192]: E0416 00:32:08.374155    4192 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod00b5fef9-7a2b-4e54-bda6-b721112d5496/crio-9384fbc0adcf7e21b22fd218db6be959af37dca2db42c105f595c0147322d1d3: Error finding container 9384fbc0adcf7e21b22fd218db6be959af37dca2db42c105f595c0147322d1d3: Status 404 returned error can't find the container with id 9384fbc0adcf7e21b22fd218db6be959af37dca2db42c105f595c0147322d1d3
	Apr 16 00:33:08 multinode-414194 kubelet[4192]: E0416 00:33:08.275939    4192 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 00:33:08 multinode-414194 kubelet[4192]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 00:33:08 multinode-414194 kubelet[4192]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 00:33:08 multinode-414194 kubelet[4192]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 00:33:08 multinode-414194 kubelet[4192]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 00:33:08 multinode-414194 kubelet[4192]: E0416 00:33:08.373463    4192 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod00b5fef9-7a2b-4e54-bda6-b721112d5496/crio-9384fbc0adcf7e21b22fd218db6be959af37dca2db42c105f595c0147322d1d3: Error finding container 9384fbc0adcf7e21b22fd218db6be959af37dca2db42c105f595c0147322d1d3: Status 404 returned error can't find the container with id 9384fbc0adcf7e21b22fd218db6be959af37dca2db42c105f595c0147322d1d3
	Apr 16 00:34:08 multinode-414194 kubelet[4192]: E0416 00:34:08.280353    4192 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 00:34:08 multinode-414194 kubelet[4192]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 00:34:08 multinode-414194 kubelet[4192]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 00:34:08 multinode-414194 kubelet[4192]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 00:34:08 multinode-414194 kubelet[4192]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 00:34:08 multinode-414194 kubelet[4192]: E0416 00:34:08.376209    4192 manager.go:1116] Failed to create existing container: /kubepods/besteffort/pod00b5fef9-7a2b-4e54-bda6-b721112d5496/crio-9384fbc0adcf7e21b22fd218db6be959af37dca2db42c105f595c0147322d1d3: Error finding container 9384fbc0adcf7e21b22fd218db6be959af37dca2db42c105f595c0147322d1d3: Status 404 returned error can't find the container with id 9384fbc0adcf7e21b22fd218db6be959af37dca2db42c105f595c0147322d1d3
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0416 00:34:57.870274   46032 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18647-7542/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-414194 -n multinode-414194
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-414194 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.62s)
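Note on the stderr error above: "bufio.Scanner: token too long" is Go's bufio.Scanner hitting its default 64 KiB per-token limit, i.e. a single line in lastStart.txt exceeded the scanner's buffer, so the post-mortem log collection could not read the file. A minimal Go sketch of reading such a file with an enlarged buffer (file path and buffer sizes are illustrative assumptions, not minikube's actual code):

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		f, err := os.Open("lastStart.txt") // hypothetical path to the oversized log file
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// bufio.Scanner defaults to bufio.MaxScanTokenSize (64 KiB) per line;
		// raising the limit avoids "bufio.Scanner: token too long" on very long lines.
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}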

                                                
                                    
x
+
TestPreload (193.56s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-618697 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0416 00:38:58.679814   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/functional-596616/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-618697 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m51.872696332s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-618697 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-618697 image pull gcr.io/k8s-minikube/busybox: (2.618170764s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-618697
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-618697: (7.298074008s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-618697 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-618697 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m8.944493845s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-618697 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:626: *** TestPreload FAILED at 2024-04-16 00:42:00.196688942 +0000 UTC m=+3862.713450273
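For reference, the assertion that fails above amounts to running "image list" against the restarted profile and scanning the output for the busybox image that was pulled before the stop. Below is a minimal, hypothetical Go sketch of that check (it is not the actual code in preload_test.go; the binary path and profile name are taken from the report above):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Run "image list" on the restarted profile, as the test does after the
		// second start (binary path and profile name taken from the report above).
		out, err := exec.Command("out/minikube-linux-amd64",
			"-p", "test-preload-618697", "image", "list").CombinedOutput()
		if err != nil {
			fmt.Println("image list failed:", err)
			return
		}
		// The failure reported above is exactly this condition: the image pulled
		// before "minikube stop" is absent after the preload-backed restart.
		if !strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
			fmt.Println("gcr.io/k8s-minikube/busybox missing from image list")
			return
		}
		fmt.Println("busybox image survived the stop/start cycle")
	}
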
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-618697 -n test-preload-618697
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-618697 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-618697 logs -n 25: (1.055274645s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| ssh     | multinode-414194 ssh -n                                                                 | multinode-414194     | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:26 UTC | 16 Apr 24 00:26 UTC |
	|         | multinode-414194-m03 sudo cat                                                           |                      |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |                |                     |                     |
	| ssh     | multinode-414194 ssh -n multinode-414194 sudo cat                                       | multinode-414194     | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:26 UTC | 16 Apr 24 00:26 UTC |
	|         | /home/docker/cp-test_multinode-414194-m03_multinode-414194.txt                          |                      |         |                |                     |                     |
	| cp      | multinode-414194 cp multinode-414194-m03:/home/docker/cp-test.txt                       | multinode-414194     | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:26 UTC | 16 Apr 24 00:26 UTC |
	|         | multinode-414194-m02:/home/docker/cp-test_multinode-414194-m03_multinode-414194-m02.txt |                      |         |                |                     |                     |
	| ssh     | multinode-414194 ssh -n                                                                 | multinode-414194     | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:26 UTC | 16 Apr 24 00:26 UTC |
	|         | multinode-414194-m03 sudo cat                                                           |                      |         |                |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |                |                     |                     |
	| ssh     | multinode-414194 ssh -n multinode-414194-m02 sudo cat                                   | multinode-414194     | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:26 UTC | 16 Apr 24 00:26 UTC |
	|         | /home/docker/cp-test_multinode-414194-m03_multinode-414194-m02.txt                      |                      |         |                |                     |                     |
	| node    | multinode-414194 node stop m03                                                          | multinode-414194     | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:26 UTC | 16 Apr 24 00:26 UTC |
	| node    | multinode-414194 node start                                                             | multinode-414194     | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:26 UTC | 16 Apr 24 00:27 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |                |                     |                     |
	| node    | list -p multinode-414194                                                                | multinode-414194     | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:27 UTC |                     |
	| stop    | -p multinode-414194                                                                     | multinode-414194     | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:27 UTC |                     |
	| start   | -p multinode-414194                                                                     | multinode-414194     | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:29 UTC | 16 Apr 24 00:32 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |                |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |                |                     |                     |
	| node    | list -p multinode-414194                                                                | multinode-414194     | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:32 UTC |                     |
	| node    | multinode-414194 node delete                                                            | multinode-414194     | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:32 UTC | 16 Apr 24 00:32 UTC |
	|         | m03                                                                                     |                      |         |                |                     |                     |
	| stop    | multinode-414194 stop                                                                   | multinode-414194     | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:32 UTC |                     |
	| start   | -p multinode-414194                                                                     | multinode-414194     | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:34 UTC | 16 Apr 24 00:37 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |                |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |                |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |                |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |                |                     |                     |
	| node    | list -p multinode-414194                                                                | multinode-414194     | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:37 UTC |                     |
	| start   | -p multinode-414194-m02                                                                 | multinode-414194-m02 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:37 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |                |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |                |                     |                     |
	| start   | -p multinode-414194-m03                                                                 | multinode-414194-m03 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:37 UTC | 16 Apr 24 00:38 UTC |
	|         | --driver=kvm2                                                                           |                      |         |                |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |                |                     |                     |
	| node    | add -p multinode-414194                                                                 | multinode-414194     | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:38 UTC |                     |
	| delete  | -p multinode-414194-m03                                                                 | multinode-414194-m03 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:38 UTC | 16 Apr 24 00:38 UTC |
	| delete  | -p multinode-414194                                                                     | multinode-414194     | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:38 UTC | 16 Apr 24 00:38 UTC |
	| start   | -p test-preload-618697                                                                  | test-preload-618697  | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:38 UTC | 16 Apr 24 00:40 UTC |
	|         | --memory=2200                                                                           |                      |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |                |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |                |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |                |                     |                     |
	| image   | test-preload-618697 image pull                                                          | test-preload-618697  | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:40 UTC | 16 Apr 24 00:40 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |                |                     |                     |
	| stop    | -p test-preload-618697                                                                  | test-preload-618697  | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:40 UTC | 16 Apr 24 00:40 UTC |
	| start   | -p test-preload-618697                                                                  | test-preload-618697  | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:40 UTC | 16 Apr 24 00:41 UTC |
	|         | --memory=2200                                                                           |                      |         |                |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |                |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |                |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |                |                     |                     |
	| image   | test-preload-618697 image list                                                          | test-preload-618697  | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:42 UTC | 16 Apr 24 00:42 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 00:40:51
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 00:40:51.079360   48481 out.go:291] Setting OutFile to fd 1 ...
	I0416 00:40:51.079510   48481 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:40:51.079520   48481 out.go:304] Setting ErrFile to fd 2...
	I0416 00:40:51.079524   48481 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:40:51.079743   48481 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-7542/.minikube/bin
	I0416 00:40:51.080246   48481 out.go:298] Setting JSON to false
	I0416 00:40:51.081193   48481 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4995,"bootTime":1713223056,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0416 00:40:51.081249   48481 start.go:139] virtualization: kvm guest
	I0416 00:40:51.083590   48481 out.go:177] * [test-preload-618697] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0416 00:40:51.085378   48481 out.go:177]   - MINIKUBE_LOCATION=18647
	I0416 00:40:51.086672   48481 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 00:40:51.085400   48481 notify.go:220] Checking for updates...
	I0416 00:40:51.089338   48481 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0416 00:40:51.090734   48481 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18647-7542/.minikube
	I0416 00:40:51.091897   48481 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0416 00:40:51.093090   48481 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 00:40:51.094658   48481 config.go:182] Loaded profile config "test-preload-618697": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0416 00:40:51.095048   48481 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:40:51.095091   48481 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:40:51.109462   48481 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38781
	I0416 00:40:51.109802   48481 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:40:51.110278   48481 main.go:141] libmachine: Using API Version  1
	I0416 00:40:51.110303   48481 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:40:51.110621   48481 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:40:51.110777   48481 main.go:141] libmachine: (test-preload-618697) Calling .DriverName
	I0416 00:40:51.112605   48481 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0416 00:40:51.113820   48481 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 00:40:51.114094   48481 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:40:51.114124   48481 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:40:51.128236   48481 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35099
	I0416 00:40:51.128572   48481 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:40:51.128959   48481 main.go:141] libmachine: Using API Version  1
	I0416 00:40:51.128978   48481 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:40:51.129290   48481 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:40:51.129476   48481 main.go:141] libmachine: (test-preload-618697) Calling .DriverName
	I0416 00:40:51.162274   48481 out.go:177] * Using the kvm2 driver based on existing profile
	I0416 00:40:51.163590   48481 start.go:297] selected driver: kvm2
	I0416 00:40:51.163600   48481 start.go:901] validating driver "kvm2" against &{Name:test-preload-618697 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-618697 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 00:40:51.163712   48481 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 00:40:51.164375   48481 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 00:40:51.164451   48481 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18647-7542/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0416 00:40:51.178101   48481 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0416 00:40:51.178388   48481 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 00:40:51.178448   48481 cni.go:84] Creating CNI manager for ""
	I0416 00:40:51.178461   48481 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 00:40:51.178510   48481 start.go:340] cluster config:
	{Name:test-preload-618697 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-618697 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 00:40:51.178601   48481 iso.go:125] acquiring lock: {Name:mk848ef90fbc2a1876645fc8fc16af382c3bcaa9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 00:40:51.180298   48481 out.go:177] * Starting "test-preload-618697" primary control-plane node in "test-preload-618697" cluster
	I0416 00:40:51.181522   48481 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0416 00:40:51.279617   48481 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0416 00:40:51.279641   48481 cache.go:56] Caching tarball of preloaded images
	I0416 00:40:51.279788   48481 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0416 00:40:51.281607   48481 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0416 00:40:51.282932   48481 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0416 00:40:51.384575   48481 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0416 00:41:01.951647   48481 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0416 00:41:01.951737   48481 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0416 00:41:02.791254   48481 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0416 00:41:02.791398   48481 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/test-preload-618697/config.json ...
	I0416 00:41:02.791644   48481 start.go:360] acquireMachinesLock for test-preload-618697: {Name:mk92bff49461487f8cebf2747ccf61ccb9c772a2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 00:41:02.791715   48481 start.go:364] duration metric: took 47.177µs to acquireMachinesLock for "test-preload-618697"
	I0416 00:41:02.791736   48481 start.go:96] Skipping create...Using existing machine configuration
	I0416 00:41:02.791751   48481 fix.go:54] fixHost starting: 
	I0416 00:41:02.792057   48481 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:41:02.792092   48481 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:41:02.806211   48481 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33823
	I0416 00:41:02.806607   48481 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:41:02.807041   48481 main.go:141] libmachine: Using API Version  1
	I0416 00:41:02.807075   48481 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:41:02.807353   48481 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:41:02.807493   48481 main.go:141] libmachine: (test-preload-618697) Calling .DriverName
	I0416 00:41:02.807628   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetState
	I0416 00:41:02.809313   48481 fix.go:112] recreateIfNeeded on test-preload-618697: state=Stopped err=<nil>
	I0416 00:41:02.809354   48481 main.go:141] libmachine: (test-preload-618697) Calling .DriverName
	W0416 00:41:02.809528   48481 fix.go:138] unexpected machine state, will restart: <nil>
	I0416 00:41:02.814864   48481 out.go:177] * Restarting existing kvm2 VM for "test-preload-618697" ...
	I0416 00:41:02.816400   48481 main.go:141] libmachine: (test-preload-618697) Calling .Start
	I0416 00:41:02.816621   48481 main.go:141] libmachine: (test-preload-618697) Ensuring networks are active...
	I0416 00:41:02.817548   48481 main.go:141] libmachine: (test-preload-618697) Ensuring network default is active
	I0416 00:41:02.817935   48481 main.go:141] libmachine: (test-preload-618697) Ensuring network mk-test-preload-618697 is active
	I0416 00:41:02.818295   48481 main.go:141] libmachine: (test-preload-618697) Getting domain xml...
	I0416 00:41:02.819051   48481 main.go:141] libmachine: (test-preload-618697) Creating domain...
	I0416 00:41:03.982997   48481 main.go:141] libmachine: (test-preload-618697) Waiting to get IP...
	I0416 00:41:03.984032   48481 main.go:141] libmachine: (test-preload-618697) DBG | domain test-preload-618697 has defined MAC address 52:54:00:05:60:89 in network mk-test-preload-618697
	I0416 00:41:03.984358   48481 main.go:141] libmachine: (test-preload-618697) DBG | unable to find current IP address of domain test-preload-618697 in network mk-test-preload-618697
	I0416 00:41:03.984437   48481 main.go:141] libmachine: (test-preload-618697) DBG | I0416 00:41:03.984350   48566 retry.go:31] will retry after 247.680055ms: waiting for machine to come up
	I0416 00:41:04.233966   48481 main.go:141] libmachine: (test-preload-618697) DBG | domain test-preload-618697 has defined MAC address 52:54:00:05:60:89 in network mk-test-preload-618697
	I0416 00:41:04.234391   48481 main.go:141] libmachine: (test-preload-618697) DBG | unable to find current IP address of domain test-preload-618697 in network mk-test-preload-618697
	I0416 00:41:04.234414   48481 main.go:141] libmachine: (test-preload-618697) DBG | I0416 00:41:04.234345   48566 retry.go:31] will retry after 316.424889ms: waiting for machine to come up
	I0416 00:41:04.553064   48481 main.go:141] libmachine: (test-preload-618697) DBG | domain test-preload-618697 has defined MAC address 52:54:00:05:60:89 in network mk-test-preload-618697
	I0416 00:41:04.553501   48481 main.go:141] libmachine: (test-preload-618697) DBG | unable to find current IP address of domain test-preload-618697 in network mk-test-preload-618697
	I0416 00:41:04.553531   48481 main.go:141] libmachine: (test-preload-618697) DBG | I0416 00:41:04.553471   48566 retry.go:31] will retry after 401.786169ms: waiting for machine to come up
	I0416 00:41:04.956973   48481 main.go:141] libmachine: (test-preload-618697) DBG | domain test-preload-618697 has defined MAC address 52:54:00:05:60:89 in network mk-test-preload-618697
	I0416 00:41:04.957359   48481 main.go:141] libmachine: (test-preload-618697) DBG | unable to find current IP address of domain test-preload-618697 in network mk-test-preload-618697
	I0416 00:41:04.957390   48481 main.go:141] libmachine: (test-preload-618697) DBG | I0416 00:41:04.957313   48566 retry.go:31] will retry after 484.642708ms: waiting for machine to come up
	I0416 00:41:05.444045   48481 main.go:141] libmachine: (test-preload-618697) DBG | domain test-preload-618697 has defined MAC address 52:54:00:05:60:89 in network mk-test-preload-618697
	I0416 00:41:05.444421   48481 main.go:141] libmachine: (test-preload-618697) DBG | unable to find current IP address of domain test-preload-618697 in network mk-test-preload-618697
	I0416 00:41:05.444449   48481 main.go:141] libmachine: (test-preload-618697) DBG | I0416 00:41:05.444387   48566 retry.go:31] will retry after 746.528696ms: waiting for machine to come up
	I0416 00:41:06.192385   48481 main.go:141] libmachine: (test-preload-618697) DBG | domain test-preload-618697 has defined MAC address 52:54:00:05:60:89 in network mk-test-preload-618697
	I0416 00:41:06.192843   48481 main.go:141] libmachine: (test-preload-618697) DBG | unable to find current IP address of domain test-preload-618697 in network mk-test-preload-618697
	I0416 00:41:06.192871   48481 main.go:141] libmachine: (test-preload-618697) DBG | I0416 00:41:06.192793   48566 retry.go:31] will retry after 779.7193ms: waiting for machine to come up
	I0416 00:41:06.973664   48481 main.go:141] libmachine: (test-preload-618697) DBG | domain test-preload-618697 has defined MAC address 52:54:00:05:60:89 in network mk-test-preload-618697
	I0416 00:41:06.974022   48481 main.go:141] libmachine: (test-preload-618697) DBG | unable to find current IP address of domain test-preload-618697 in network mk-test-preload-618697
	I0416 00:41:06.974055   48481 main.go:141] libmachine: (test-preload-618697) DBG | I0416 00:41:06.973967   48566 retry.go:31] will retry after 953.208701ms: waiting for machine to come up
	I0416 00:41:07.928326   48481 main.go:141] libmachine: (test-preload-618697) DBG | domain test-preload-618697 has defined MAC address 52:54:00:05:60:89 in network mk-test-preload-618697
	I0416 00:41:07.928771   48481 main.go:141] libmachine: (test-preload-618697) DBG | unable to find current IP address of domain test-preload-618697 in network mk-test-preload-618697
	I0416 00:41:07.928801   48481 main.go:141] libmachine: (test-preload-618697) DBG | I0416 00:41:07.928683   48566 retry.go:31] will retry after 1.363166794s: waiting for machine to come up
	I0416 00:41:09.294335   48481 main.go:141] libmachine: (test-preload-618697) DBG | domain test-preload-618697 has defined MAC address 52:54:00:05:60:89 in network mk-test-preload-618697
	I0416 00:41:09.294710   48481 main.go:141] libmachine: (test-preload-618697) DBG | unable to find current IP address of domain test-preload-618697 in network mk-test-preload-618697
	I0416 00:41:09.294746   48481 main.go:141] libmachine: (test-preload-618697) DBG | I0416 00:41:09.294675   48566 retry.go:31] will retry after 1.479232824s: waiting for machine to come up
	I0416 00:41:10.776397   48481 main.go:141] libmachine: (test-preload-618697) DBG | domain test-preload-618697 has defined MAC address 52:54:00:05:60:89 in network mk-test-preload-618697
	I0416 00:41:10.776838   48481 main.go:141] libmachine: (test-preload-618697) DBG | unable to find current IP address of domain test-preload-618697 in network mk-test-preload-618697
	I0416 00:41:10.776864   48481 main.go:141] libmachine: (test-preload-618697) DBG | I0416 00:41:10.776790   48566 retry.go:31] will retry after 1.397596124s: waiting for machine to come up
	I0416 00:41:12.176506   48481 main.go:141] libmachine: (test-preload-618697) DBG | domain test-preload-618697 has defined MAC address 52:54:00:05:60:89 in network mk-test-preload-618697
	I0416 00:41:12.176918   48481 main.go:141] libmachine: (test-preload-618697) DBG | unable to find current IP address of domain test-preload-618697 in network mk-test-preload-618697
	I0416 00:41:12.176941   48481 main.go:141] libmachine: (test-preload-618697) DBG | I0416 00:41:12.176889   48566 retry.go:31] will retry after 2.818207559s: waiting for machine to come up
	I0416 00:41:14.996342   48481 main.go:141] libmachine: (test-preload-618697) DBG | domain test-preload-618697 has defined MAC address 52:54:00:05:60:89 in network mk-test-preload-618697
	I0416 00:41:14.996804   48481 main.go:141] libmachine: (test-preload-618697) DBG | unable to find current IP address of domain test-preload-618697 in network mk-test-preload-618697
	I0416 00:41:14.996839   48481 main.go:141] libmachine: (test-preload-618697) DBG | I0416 00:41:14.996753   48566 retry.go:31] will retry after 3.534560165s: waiting for machine to come up
	I0416 00:41:18.532739   48481 main.go:141] libmachine: (test-preload-618697) DBG | domain test-preload-618697 has defined MAC address 52:54:00:05:60:89 in network mk-test-preload-618697
	I0416 00:41:18.533211   48481 main.go:141] libmachine: (test-preload-618697) DBG | unable to find current IP address of domain test-preload-618697 in network mk-test-preload-618697
	I0416 00:41:18.533238   48481 main.go:141] libmachine: (test-preload-618697) DBG | I0416 00:41:18.533145   48566 retry.go:31] will retry after 3.08866111s: waiting for machine to come up
	I0416 00:41:21.625310   48481 main.go:141] libmachine: (test-preload-618697) DBG | domain test-preload-618697 has defined MAC address 52:54:00:05:60:89 in network mk-test-preload-618697
	I0416 00:41:21.625796   48481 main.go:141] libmachine: (test-preload-618697) DBG | domain test-preload-618697 has current primary IP address 192.168.39.234 and MAC address 52:54:00:05:60:89 in network mk-test-preload-618697
	I0416 00:41:21.625817   48481 main.go:141] libmachine: (test-preload-618697) Found IP for machine: 192.168.39.234
	I0416 00:41:21.625826   48481 main.go:141] libmachine: (test-preload-618697) Reserving static IP address...
	I0416 00:41:21.626156   48481 main.go:141] libmachine: (test-preload-618697) DBG | found host DHCP lease matching {name: "test-preload-618697", mac: "52:54:00:05:60:89", ip: "192.168.39.234"} in network mk-test-preload-618697: {Iface:virbr1 ExpiryTime:2024-04-16 01:41:14 +0000 UTC Type:0 Mac:52:54:00:05:60:89 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:test-preload-618697 Clientid:01:52:54:00:05:60:89}
	I0416 00:41:21.626177   48481 main.go:141] libmachine: (test-preload-618697) Reserved static IP address: 192.168.39.234
	I0416 00:41:21.626197   48481 main.go:141] libmachine: (test-preload-618697) DBG | skip adding static IP to network mk-test-preload-618697 - found existing host DHCP lease matching {name: "test-preload-618697", mac: "52:54:00:05:60:89", ip: "192.168.39.234"}
	I0416 00:41:21.626215   48481 main.go:141] libmachine: (test-preload-618697) DBG | Getting to WaitForSSH function...
	I0416 00:41:21.626232   48481 main.go:141] libmachine: (test-preload-618697) Waiting for SSH to be available...
	I0416 00:41:21.628253   48481 main.go:141] libmachine: (test-preload-618697) DBG | domain test-preload-618697 has defined MAC address 52:54:00:05:60:89 in network mk-test-preload-618697
	I0416 00:41:21.628556   48481 main.go:141] libmachine: (test-preload-618697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:60:89", ip: ""} in network mk-test-preload-618697: {Iface:virbr1 ExpiryTime:2024-04-16 01:41:14 +0000 UTC Type:0 Mac:52:54:00:05:60:89 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:test-preload-618697 Clientid:01:52:54:00:05:60:89}
	I0416 00:41:21.628580   48481 main.go:141] libmachine: (test-preload-618697) DBG | domain test-preload-618697 has defined IP address 192.168.39.234 and MAC address 52:54:00:05:60:89 in network mk-test-preload-618697
	I0416 00:41:21.628705   48481 main.go:141] libmachine: (test-preload-618697) DBG | Using SSH client type: external
	I0416 00:41:21.628742   48481 main.go:141] libmachine: (test-preload-618697) DBG | Using SSH private key: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/test-preload-618697/id_rsa (-rw-------)
	I0416 00:41:21.628774   48481 main.go:141] libmachine: (test-preload-618697) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.234 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18647-7542/.minikube/machines/test-preload-618697/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0416 00:41:21.628789   48481 main.go:141] libmachine: (test-preload-618697) DBG | About to run SSH command:
	I0416 00:41:21.628801   48481 main.go:141] libmachine: (test-preload-618697) DBG | exit 0
	I0416 00:41:21.748788   48481 main.go:141] libmachine: (test-preload-618697) DBG | SSH cmd err, output: <nil>: 
	I0416 00:41:21.749138   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetConfigRaw
	I0416 00:41:21.749731   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetIP
	I0416 00:41:21.752145   48481 main.go:141] libmachine: (test-preload-618697) DBG | domain test-preload-618697 has defined MAC address 52:54:00:05:60:89 in network mk-test-preload-618697
	I0416 00:41:21.752456   48481 main.go:141] libmachine: (test-preload-618697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:60:89", ip: ""} in network mk-test-preload-618697: {Iface:virbr1 ExpiryTime:2024-04-16 01:41:14 +0000 UTC Type:0 Mac:52:54:00:05:60:89 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:test-preload-618697 Clientid:01:52:54:00:05:60:89}
	I0416 00:41:21.752493   48481 main.go:141] libmachine: (test-preload-618697) DBG | domain test-preload-618697 has defined IP address 192.168.39.234 and MAC address 52:54:00:05:60:89 in network mk-test-preload-618697
	I0416 00:41:21.752649   48481 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/test-preload-618697/config.json ...
	I0416 00:41:21.752840   48481 machine.go:94] provisionDockerMachine start ...
	I0416 00:41:21.752858   48481 main.go:141] libmachine: (test-preload-618697) Calling .DriverName
	I0416 00:41:21.753046   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetSSHHostname
	I0416 00:41:21.755281   48481 main.go:141] libmachine: (test-preload-618697) DBG | domain test-preload-618697 has defined MAC address 52:54:00:05:60:89 in network mk-test-preload-618697
	I0416 00:41:21.755600   48481 main.go:141] libmachine: (test-preload-618697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:60:89", ip: ""} in network mk-test-preload-618697: {Iface:virbr1 ExpiryTime:2024-04-16 01:41:14 +0000 UTC Type:0 Mac:52:54:00:05:60:89 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:test-preload-618697 Clientid:01:52:54:00:05:60:89}
	I0416 00:41:21.755638   48481 main.go:141] libmachine: (test-preload-618697) DBG | domain test-preload-618697 has defined IP address 192.168.39.234 and MAC address 52:54:00:05:60:89 in network mk-test-preload-618697
	I0416 00:41:21.755755   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetSSHPort
	I0416 00:41:21.755915   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetSSHKeyPath
	I0416 00:41:21.756079   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetSSHKeyPath
	I0416 00:41:21.756205   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetSSHUsername
	I0416 00:41:21.756379   48481 main.go:141] libmachine: Using SSH client type: native
	I0416 00:41:21.756570   48481 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0416 00:41:21.756583   48481 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 00:41:21.857710   48481 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 00:41:21.857740   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetMachineName
	I0416 00:41:21.857994   48481 buildroot.go:166] provisioning hostname "test-preload-618697"
	I0416 00:41:21.858014   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetMachineName
	I0416 00:41:21.858245   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetSSHHostname
	I0416 00:41:21.860718   48481 main.go:141] libmachine: (test-preload-618697) DBG | domain test-preload-618697 has defined MAC address 52:54:00:05:60:89 in network mk-test-preload-618697
	I0416 00:41:21.861065   48481 main.go:141] libmachine: (test-preload-618697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:60:89", ip: ""} in network mk-test-preload-618697: {Iface:virbr1 ExpiryTime:2024-04-16 01:41:14 +0000 UTC Type:0 Mac:52:54:00:05:60:89 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:test-preload-618697 Clientid:01:52:54:00:05:60:89}
	I0416 00:41:21.861099   48481 main.go:141] libmachine: (test-preload-618697) DBG | domain test-preload-618697 has defined IP address 192.168.39.234 and MAC address 52:54:00:05:60:89 in network mk-test-preload-618697
	I0416 00:41:21.861234   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetSSHPort
	I0416 00:41:21.861444   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetSSHKeyPath
	I0416 00:41:21.861589   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetSSHKeyPath
	I0416 00:41:21.861729   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetSSHUsername
	I0416 00:41:21.861879   48481 main.go:141] libmachine: Using SSH client type: native
	I0416 00:41:21.862052   48481 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0416 00:41:21.862064   48481 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-618697 && echo "test-preload-618697" | sudo tee /etc/hostname
	I0416 00:41:21.981742   48481 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-618697
	
	I0416 00:41:21.981771   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetSSHHostname
	I0416 00:41:21.984368   48481 main.go:141] libmachine: (test-preload-618697) DBG | domain test-preload-618697 has defined MAC address 52:54:00:05:60:89 in network mk-test-preload-618697
	I0416 00:41:21.984806   48481 main.go:141] libmachine: (test-preload-618697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:60:89", ip: ""} in network mk-test-preload-618697: {Iface:virbr1 ExpiryTime:2024-04-16 01:41:14 +0000 UTC Type:0 Mac:52:54:00:05:60:89 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:test-preload-618697 Clientid:01:52:54:00:05:60:89}
	I0416 00:41:21.984830   48481 main.go:141] libmachine: (test-preload-618697) DBG | domain test-preload-618697 has defined IP address 192.168.39.234 and MAC address 52:54:00:05:60:89 in network mk-test-preload-618697
	I0416 00:41:21.985062   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetSSHPort
	I0416 00:41:21.985278   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetSSHKeyPath
	I0416 00:41:21.985458   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetSSHKeyPath
	I0416 00:41:21.985598   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetSSHUsername
	I0416 00:41:21.985750   48481 main.go:141] libmachine: Using SSH client type: native
	I0416 00:41:21.985926   48481 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0416 00:41:21.985950   48481 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-618697' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-618697/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-618697' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 00:41:22.090176   48481 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 00:41:22.090215   48481 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18647-7542/.minikube CaCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18647-7542/.minikube}
	I0416 00:41:22.090233   48481 buildroot.go:174] setting up certificates
	I0416 00:41:22.090241   48481 provision.go:84] configureAuth start
	I0416 00:41:22.090250   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetMachineName
	I0416 00:41:22.090525   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetIP
	I0416 00:41:22.093033   48481 main.go:141] libmachine: (test-preload-618697) DBG | domain test-preload-618697 has defined MAC address 52:54:00:05:60:89 in network mk-test-preload-618697
	I0416 00:41:22.093381   48481 main.go:141] libmachine: (test-preload-618697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:60:89", ip: ""} in network mk-test-preload-618697: {Iface:virbr1 ExpiryTime:2024-04-16 01:41:14 +0000 UTC Type:0 Mac:52:54:00:05:60:89 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:test-preload-618697 Clientid:01:52:54:00:05:60:89}
	I0416 00:41:22.093423   48481 main.go:141] libmachine: (test-preload-618697) DBG | domain test-preload-618697 has defined IP address 192.168.39.234 and MAC address 52:54:00:05:60:89 in network mk-test-preload-618697
	I0416 00:41:22.093572   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetSSHHostname
	I0416 00:41:22.095378   48481 main.go:141] libmachine: (test-preload-618697) DBG | domain test-preload-618697 has defined MAC address 52:54:00:05:60:89 in network mk-test-preload-618697
	I0416 00:41:22.095704   48481 main.go:141] libmachine: (test-preload-618697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:60:89", ip: ""} in network mk-test-preload-618697: {Iface:virbr1 ExpiryTime:2024-04-16 01:41:14 +0000 UTC Type:0 Mac:52:54:00:05:60:89 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:test-preload-618697 Clientid:01:52:54:00:05:60:89}
	I0416 00:41:22.095736   48481 main.go:141] libmachine: (test-preload-618697) DBG | domain test-preload-618697 has defined IP address 192.168.39.234 and MAC address 52:54:00:05:60:89 in network mk-test-preload-618697
	I0416 00:41:22.095802   48481 provision.go:143] copyHostCerts
	I0416 00:41:22.095853   48481 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem, removing ...
	I0416 00:41:22.095871   48481 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem
	I0416 00:41:22.095930   48481 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem (1082 bytes)
	I0416 00:41:22.096022   48481 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem, removing ...
	I0416 00:41:22.096033   48481 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem
	I0416 00:41:22.096065   48481 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem (1123 bytes)
	I0416 00:41:22.096121   48481 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem, removing ...
	I0416 00:41:22.096129   48481 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem
	I0416 00:41:22.096148   48481 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem (1675 bytes)
	I0416 00:41:22.096201   48481 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem org=jenkins.test-preload-618697 san=[127.0.0.1 192.168.39.234 localhost minikube test-preload-618697]
	I0416 00:41:22.202759   48481 provision.go:177] copyRemoteCerts
	I0416 00:41:22.202811   48481 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 00:41:22.202841   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetSSHHostname
	I0416 00:41:22.205716   48481 main.go:141] libmachine: (test-preload-618697) DBG | domain test-preload-618697 has defined MAC address 52:54:00:05:60:89 in network mk-test-preload-618697
	I0416 00:41:22.206075   48481 main.go:141] libmachine: (test-preload-618697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:60:89", ip: ""} in network mk-test-preload-618697: {Iface:virbr1 ExpiryTime:2024-04-16 01:41:14 +0000 UTC Type:0 Mac:52:54:00:05:60:89 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:test-preload-618697 Clientid:01:52:54:00:05:60:89}
	I0416 00:41:22.206093   48481 main.go:141] libmachine: (test-preload-618697) DBG | domain test-preload-618697 has defined IP address 192.168.39.234 and MAC address 52:54:00:05:60:89 in network mk-test-preload-618697
	I0416 00:41:22.206296   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetSSHPort
	I0416 00:41:22.206518   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetSSHKeyPath
	I0416 00:41:22.206669   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetSSHUsername
	I0416 00:41:22.206805   48481 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/test-preload-618697/id_rsa Username:docker}
	I0416 00:41:22.290114   48481 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0416 00:41:22.317724   48481 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0416 00:41:22.343875   48481 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0416 00:41:22.369508   48481 provision.go:87] duration metric: took 279.254085ms to configureAuth
	I0416 00:41:22.369533   48481 buildroot.go:189] setting minikube options for container-runtime
	I0416 00:41:22.369738   48481 config.go:182] Loaded profile config "test-preload-618697": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0416 00:41:22.369821   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetSSHHostname
	I0416 00:41:22.372443   48481 main.go:141] libmachine: (test-preload-618697) DBG | domain test-preload-618697 has defined MAC address 52:54:00:05:60:89 in network mk-test-preload-618697
	I0416 00:41:22.372871   48481 main.go:141] libmachine: (test-preload-618697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:60:89", ip: ""} in network mk-test-preload-618697: {Iface:virbr1 ExpiryTime:2024-04-16 01:41:14 +0000 UTC Type:0 Mac:52:54:00:05:60:89 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:test-preload-618697 Clientid:01:52:54:00:05:60:89}
	I0416 00:41:22.372897   48481 main.go:141] libmachine: (test-preload-618697) DBG | domain test-preload-618697 has defined IP address 192.168.39.234 and MAC address 52:54:00:05:60:89 in network mk-test-preload-618697
	I0416 00:41:22.373038   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetSSHPort
	I0416 00:41:22.373217   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetSSHKeyPath
	I0416 00:41:22.373329   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetSSHKeyPath
	I0416 00:41:22.373473   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetSSHUsername
	I0416 00:41:22.373687   48481 main.go:141] libmachine: Using SSH client type: native
	I0416 00:41:22.373891   48481 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0416 00:41:22.373908   48481 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0416 00:41:22.639232   48481 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0416 00:41:22.639308   48481 machine.go:97] duration metric: took 886.452024ms to provisionDockerMachine
	I0416 00:41:22.639324   48481 start.go:293] postStartSetup for "test-preload-618697" (driver="kvm2")
	I0416 00:41:22.639340   48481 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 00:41:22.639366   48481 main.go:141] libmachine: (test-preload-618697) Calling .DriverName
	I0416 00:41:22.639721   48481 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 00:41:22.639753   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetSSHHostname
	I0416 00:41:22.642388   48481 main.go:141] libmachine: (test-preload-618697) DBG | domain test-preload-618697 has defined MAC address 52:54:00:05:60:89 in network mk-test-preload-618697
	I0416 00:41:22.642760   48481 main.go:141] libmachine: (test-preload-618697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:60:89", ip: ""} in network mk-test-preload-618697: {Iface:virbr1 ExpiryTime:2024-04-16 01:41:14 +0000 UTC Type:0 Mac:52:54:00:05:60:89 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:test-preload-618697 Clientid:01:52:54:00:05:60:89}
	I0416 00:41:22.642785   48481 main.go:141] libmachine: (test-preload-618697) DBG | domain test-preload-618697 has defined IP address 192.168.39.234 and MAC address 52:54:00:05:60:89 in network mk-test-preload-618697
	I0416 00:41:22.642979   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetSSHPort
	I0416 00:41:22.643159   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetSSHKeyPath
	I0416 00:41:22.643295   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetSSHUsername
	I0416 00:41:22.643423   48481 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/test-preload-618697/id_rsa Username:docker}
	I0416 00:41:22.726549   48481 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 00:41:22.732854   48481 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 00:41:22.732910   48481 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/addons for local assets ...
	I0416 00:41:22.733202   48481 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/files for local assets ...
	I0416 00:41:22.733593   48481 filesync.go:149] local asset: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem -> 148972.pem in /etc/ssl/certs
	I0416 00:41:22.733863   48481 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 00:41:22.746280   48481 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /etc/ssl/certs/148972.pem (1708 bytes)
	I0416 00:41:22.773987   48481 start.go:296] duration metric: took 134.649133ms for postStartSetup
	I0416 00:41:22.774030   48481 fix.go:56] duration metric: took 19.982284508s for fixHost
	I0416 00:41:22.774051   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetSSHHostname
	I0416 00:41:22.776966   48481 main.go:141] libmachine: (test-preload-618697) DBG | domain test-preload-618697 has defined MAC address 52:54:00:05:60:89 in network mk-test-preload-618697
	I0416 00:41:22.777315   48481 main.go:141] libmachine: (test-preload-618697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:60:89", ip: ""} in network mk-test-preload-618697: {Iface:virbr1 ExpiryTime:2024-04-16 01:41:14 +0000 UTC Type:0 Mac:52:54:00:05:60:89 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:test-preload-618697 Clientid:01:52:54:00:05:60:89}
	I0416 00:41:22.777359   48481 main.go:141] libmachine: (test-preload-618697) DBG | domain test-preload-618697 has defined IP address 192.168.39.234 and MAC address 52:54:00:05:60:89 in network mk-test-preload-618697
	I0416 00:41:22.777540   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetSSHPort
	I0416 00:41:22.777800   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetSSHKeyPath
	I0416 00:41:22.777987   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetSSHKeyPath
	I0416 00:41:22.778212   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetSSHUsername
	I0416 00:41:22.778447   48481 main.go:141] libmachine: Using SSH client type: native
	I0416 00:41:22.778605   48481 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.234 22 <nil> <nil>}
	I0416 00:41:22.778616   48481 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 00:41:22.878452   48481 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713228082.848611097
	
	I0416 00:41:22.878490   48481 fix.go:216] guest clock: 1713228082.848611097
	I0416 00:41:22.878499   48481 fix.go:229] Guest: 2024-04-16 00:41:22.848611097 +0000 UTC Remote: 2024-04-16 00:41:22.774035246 +0000 UTC m=+31.738916322 (delta=74.575851ms)
	I0416 00:41:22.878544   48481 fix.go:200] guest clock delta is within tolerance: 74.575851ms
	I0416 00:41:22.878550   48481 start.go:83] releasing machines lock for "test-preload-618697", held for 20.08682155s
	I0416 00:41:22.878571   48481 main.go:141] libmachine: (test-preload-618697) Calling .DriverName
	I0416 00:41:22.878815   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetIP
	I0416 00:41:22.881346   48481 main.go:141] libmachine: (test-preload-618697) DBG | domain test-preload-618697 has defined MAC address 52:54:00:05:60:89 in network mk-test-preload-618697
	I0416 00:41:22.881747   48481 main.go:141] libmachine: (test-preload-618697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:60:89", ip: ""} in network mk-test-preload-618697: {Iface:virbr1 ExpiryTime:2024-04-16 01:41:14 +0000 UTC Type:0 Mac:52:54:00:05:60:89 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:test-preload-618697 Clientid:01:52:54:00:05:60:89}
	I0416 00:41:22.881796   48481 main.go:141] libmachine: (test-preload-618697) DBG | domain test-preload-618697 has defined IP address 192.168.39.234 and MAC address 52:54:00:05:60:89 in network mk-test-preload-618697
	I0416 00:41:22.881938   48481 main.go:141] libmachine: (test-preload-618697) Calling .DriverName
	I0416 00:41:22.882412   48481 main.go:141] libmachine: (test-preload-618697) Calling .DriverName
	I0416 00:41:22.882566   48481 main.go:141] libmachine: (test-preload-618697) Calling .DriverName
	I0416 00:41:22.882671   48481 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 00:41:22.882716   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetSSHHostname
	I0416 00:41:22.882757   48481 ssh_runner.go:195] Run: cat /version.json
	I0416 00:41:22.882779   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetSSHHostname
	I0416 00:41:22.885434   48481 main.go:141] libmachine: (test-preload-618697) DBG | domain test-preload-618697 has defined MAC address 52:54:00:05:60:89 in network mk-test-preload-618697
	I0416 00:41:22.885491   48481 main.go:141] libmachine: (test-preload-618697) DBG | domain test-preload-618697 has defined MAC address 52:54:00:05:60:89 in network mk-test-preload-618697
	I0416 00:41:22.885744   48481 main.go:141] libmachine: (test-preload-618697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:60:89", ip: ""} in network mk-test-preload-618697: {Iface:virbr1 ExpiryTime:2024-04-16 01:41:14 +0000 UTC Type:0 Mac:52:54:00:05:60:89 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:test-preload-618697 Clientid:01:52:54:00:05:60:89}
	I0416 00:41:22.885777   48481 main.go:141] libmachine: (test-preload-618697) DBG | domain test-preload-618697 has defined IP address 192.168.39.234 and MAC address 52:54:00:05:60:89 in network mk-test-preload-618697
	I0416 00:41:22.886030   48481 main.go:141] libmachine: (test-preload-618697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:60:89", ip: ""} in network mk-test-preload-618697: {Iface:virbr1 ExpiryTime:2024-04-16 01:41:14 +0000 UTC Type:0 Mac:52:54:00:05:60:89 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:test-preload-618697 Clientid:01:52:54:00:05:60:89}
	I0416 00:41:22.886036   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetSSHPort
	I0416 00:41:22.886059   48481 main.go:141] libmachine: (test-preload-618697) DBG | domain test-preload-618697 has defined IP address 192.168.39.234 and MAC address 52:54:00:05:60:89 in network mk-test-preload-618697
	I0416 00:41:22.886194   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetSSHPort
	I0416 00:41:22.886277   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetSSHKeyPath
	I0416 00:41:22.886336   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetSSHKeyPath
	I0416 00:41:22.886404   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetSSHUsername
	I0416 00:41:22.886457   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetSSHUsername
	I0416 00:41:22.886546   48481 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/test-preload-618697/id_rsa Username:docker}
	I0416 00:41:22.886715   48481 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/test-preload-618697/id_rsa Username:docker}
	I0416 00:41:22.995912   48481 ssh_runner.go:195] Run: systemctl --version
	I0416 00:41:23.002195   48481 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0416 00:41:23.144861   48481 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 00:41:23.151661   48481 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 00:41:23.151737   48481 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 00:41:23.168040   48481 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 00:41:23.168070   48481 start.go:494] detecting cgroup driver to use...
	I0416 00:41:23.168140   48481 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 00:41:23.188551   48481 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 00:41:23.203597   48481 docker.go:217] disabling cri-docker service (if available) ...
	I0416 00:41:23.203651   48481 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 00:41:23.217860   48481 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 00:41:23.231917   48481 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 00:41:23.344908   48481 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 00:41:23.496797   48481 docker.go:233] disabling docker service ...
	I0416 00:41:23.496876   48481 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 00:41:23.511730   48481 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 00:41:23.524526   48481 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 00:41:23.636052   48481 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 00:41:23.746160   48481 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0416 00:41:23.760615   48481 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 00:41:23.787315   48481 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0416 00:41:23.787387   48481 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:41:23.797513   48481 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 00:41:23.797571   48481 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:41:23.807684   48481 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:41:23.817706   48481 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:41:23.827790   48481 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 00:41:23.838296   48481 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:41:23.848404   48481 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:41:23.865433   48481 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:41:23.875316   48481 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 00:41:23.884199   48481 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0416 00:41:23.884240   48481 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0416 00:41:23.896539   48481 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 00:41:23.906240   48481 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 00:41:24.022595   48481 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0416 00:41:24.157874   48481 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0416 00:41:24.157952   48481 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0416 00:41:24.162646   48481 start.go:562] Will wait 60s for crictl version
	I0416 00:41:24.162706   48481 ssh_runner.go:195] Run: which crictl
	I0416 00:41:24.166296   48481 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 00:41:24.206338   48481 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0416 00:41:24.206413   48481 ssh_runner.go:195] Run: crio --version
	I0416 00:41:24.233902   48481 ssh_runner.go:195] Run: crio --version
	I0416 00:41:24.263873   48481 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0416 00:41:24.265298   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetIP
	I0416 00:41:24.267887   48481 main.go:141] libmachine: (test-preload-618697) DBG | domain test-preload-618697 has defined MAC address 52:54:00:05:60:89 in network mk-test-preload-618697
	I0416 00:41:24.268213   48481 main.go:141] libmachine: (test-preload-618697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:60:89", ip: ""} in network mk-test-preload-618697: {Iface:virbr1 ExpiryTime:2024-04-16 01:41:14 +0000 UTC Type:0 Mac:52:54:00:05:60:89 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:test-preload-618697 Clientid:01:52:54:00:05:60:89}
	I0416 00:41:24.268244   48481 main.go:141] libmachine: (test-preload-618697) DBG | domain test-preload-618697 has defined IP address 192.168.39.234 and MAC address 52:54:00:05:60:89 in network mk-test-preload-618697
	I0416 00:41:24.268481   48481 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0416 00:41:24.272472   48481 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 00:41:24.284729   48481 kubeadm.go:877] updating cluster {Name:test-preload-618697 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-618697 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 00:41:24.284847   48481 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0416 00:41:24.284884   48481 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 00:41:24.319253   48481 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0416 00:41:24.319330   48481 ssh_runner.go:195] Run: which lz4
	I0416 00:41:24.323244   48481 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0416 00:41:24.327480   48481 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0416 00:41:24.327501   48481 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0416 00:41:26.019527   48481 crio.go:462] duration metric: took 1.696302353s to copy over tarball
	I0416 00:41:26.019633   48481 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0416 00:41:28.503168   48481 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.483491442s)
	I0416 00:41:28.503191   48481 crio.go:469] duration metric: took 2.483636448s to extract the tarball
	I0416 00:41:28.503197   48481 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0416 00:41:28.544511   48481 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 00:41:28.585871   48481 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0416 00:41:28.585892   48481 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0416 00:41:28.585964   48481 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 00:41:28.585982   48481 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0416 00:41:28.585982   48481 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0416 00:41:28.586007   48481 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0416 00:41:28.586027   48481 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0416 00:41:28.586037   48481 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0416 00:41:28.586083   48481 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0416 00:41:28.586122   48481 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0416 00:41:28.587625   48481 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0416 00:41:28.587647   48481 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0416 00:41:28.587626   48481 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0416 00:41:28.587625   48481 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0416 00:41:28.587628   48481 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 00:41:28.587626   48481 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0416 00:41:28.587630   48481 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0416 00:41:28.587920   48481 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0416 00:41:28.771809   48481 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0416 00:41:28.794741   48481 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0416 00:41:28.797866   48481 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0416 00:41:28.801331   48481 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0416 00:41:28.802586   48481 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0416 00:41:28.803747   48481 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0416 00:41:28.825482   48481 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0416 00:41:28.843108   48481 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0416 00:41:28.843152   48481 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0416 00:41:28.843204   48481 ssh_runner.go:195] Run: which crictl
	I0416 00:41:28.909053   48481 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0416 00:41:28.909101   48481 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0416 00:41:28.909150   48481 ssh_runner.go:195] Run: which crictl
	I0416 00:41:28.948776   48481 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0416 00:41:28.948827   48481 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0416 00:41:28.948879   48481 ssh_runner.go:195] Run: which crictl
	I0416 00:41:28.953078   48481 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0416 00:41:28.953117   48481 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0416 00:41:28.953117   48481 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0416 00:41:28.953136   48481 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0416 00:41:28.953145   48481 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0416 00:41:28.953177   48481 ssh_runner.go:195] Run: which crictl
	I0416 00:41:28.953177   48481 ssh_runner.go:195] Run: which crictl
	I0416 00:41:28.953183   48481 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0416 00:41:28.953329   48481 ssh_runner.go:195] Run: which crictl
	I0416 00:41:28.973272   48481 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0416 00:41:28.973306   48481 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0416 00:41:28.973343   48481 ssh_runner.go:195] Run: which crictl
	I0416 00:41:28.973380   48481 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0416 00:41:28.973446   48481 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0416 00:41:28.973467   48481 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0416 00:41:28.973526   48481 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0416 00:41:28.973599   48481 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0416 00:41:28.973625   48481 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0416 00:41:29.112996   48481 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0416 00:41:29.113101   48481 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7
	I0416 00:41:29.113109   48481 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0416 00:41:29.113134   48481 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0416 00:41:29.113221   48481 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0416 00:41:29.118674   48481 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0416 00:41:29.118748   48481 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0416 00:41:29.118785   48481 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.3-0
	I0416 00:41:29.118836   48481 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0416 00:41:29.118787   48481 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0416 00:41:29.118841   48481 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0416 00:41:29.118906   48481 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0416 00:41:29.118914   48481 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6
	I0416 00:41:29.160353   48481 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0416 00:41:29.160406   48481 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0416 00:41:29.160422   48481 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0416 00:41:29.160451   48481 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0416 00:41:29.160459   48481 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0416 00:41:29.160465   48481 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0416 00:41:29.160545   48481 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0416 00:41:29.160566   48481 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0416 00:41:29.160609   48481 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0416 00:41:29.160647   48481 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0416 00:41:29.393848   48481 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 00:41:31.815038   48481 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.24.4: (2.654559851s)
	I0416 00:41:31.815087   48481 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0416 00:41:31.815090   48481 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.7: (2.654588863s)
	I0416 00:41:31.815114   48481 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0416 00:41:31.815141   48481 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.421263233s)
	I0416 00:41:31.815144   48481 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0416 00:41:31.815219   48481 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0416 00:41:32.566528   48481 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0416 00:41:32.566572   48481 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0416 00:41:32.566619   48481 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0416 00:41:33.014498   48481 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0416 00:41:33.014542   48481 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0416 00:41:33.014600   48481 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0416 00:41:33.866735   48481 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0416 00:41:33.866791   48481 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0416 00:41:33.866850   48481 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0416 00:41:35.921192   48481 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.054317946s)
	I0416 00:41:35.921221   48481 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0416 00:41:35.921243   48481 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0416 00:41:35.921288   48481 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0416 00:41:36.264216   48481 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0416 00:41:36.264269   48481 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0416 00:41:36.264326   48481 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0416 00:41:37.013395   48481 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0416 00:41:37.013432   48481 cache_images.go:123] Successfully loaded all cached images
	I0416 00:41:37.013437   48481 cache_images.go:92] duration metric: took 8.427535757s to LoadCachedImages
	I0416 00:41:37.013447   48481 kubeadm.go:928] updating node { 192.168.39.234 8443 v1.24.4 crio true true} ...
	I0416 00:41:37.013561   48481 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-618697 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.234
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-618697 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 00:41:37.013645   48481 ssh_runner.go:195] Run: crio config
	I0416 00:41:37.062657   48481 cni.go:84] Creating CNI manager for ""
	I0416 00:41:37.062687   48481 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 00:41:37.062704   48481 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 00:41:37.062727   48481 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.234 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-618697 NodeName:test-preload-618697 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.234"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.234 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 00:41:37.062899   48481 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.234
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-618697"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.234
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.234"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0416 00:41:37.062979   48481 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0416 00:41:37.072804   48481 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 00:41:37.072869   48481 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0416 00:41:37.082062   48481 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0416 00:41:37.098883   48481 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 00:41:37.115247   48481 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0416 00:41:37.132292   48481 ssh_runner.go:195] Run: grep 192.168.39.234	control-plane.minikube.internal$ /etc/hosts
	I0416 00:41:37.136197   48481 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.234	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 00:41:37.147864   48481 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 00:41:37.278225   48481 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 00:41:37.295251   48481 certs.go:68] Setting up /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/test-preload-618697 for IP: 192.168.39.234
	I0416 00:41:37.295278   48481 certs.go:194] generating shared ca certs ...
	I0416 00:41:37.295309   48481 certs.go:226] acquiring lock for ca certs: {Name:mkcfa1570e683d94647c63485e1bbb8cf0788316 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 00:41:37.295496   48481 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key
	I0416 00:41:37.295549   48481 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key
	I0416 00:41:37.295562   48481 certs.go:256] generating profile certs ...
	I0416 00:41:37.295682   48481 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/test-preload-618697/client.key
	I0416 00:41:37.295761   48481 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/test-preload-618697/apiserver.key.8a3b59a8
	I0416 00:41:37.295817   48481 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/test-preload-618697/proxy-client.key
	I0416 00:41:37.295976   48481 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem (1338 bytes)
	W0416 00:41:37.296018   48481 certs.go:480] ignoring /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897_empty.pem, impossibly tiny 0 bytes
	I0416 00:41:37.296027   48481 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem (1679 bytes)
	I0416 00:41:37.296063   48481 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem (1082 bytes)
	I0416 00:41:37.296091   48481 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem (1123 bytes)
	I0416 00:41:37.296118   48481 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem (1675 bytes)
	I0416 00:41:37.296212   48481 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem (1708 bytes)
	I0416 00:41:37.296992   48481 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 00:41:37.333767   48481 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 00:41:37.369347   48481 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 00:41:37.403615   48481 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0416 00:41:37.435992   48481 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/test-preload-618697/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0416 00:41:37.474801   48481 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/test-preload-618697/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0416 00:41:37.509609   48481 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/test-preload-618697/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 00:41:37.534314   48481 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/test-preload-618697/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0416 00:41:37.558471   48481 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem --> /usr/share/ca-certificates/14897.pem (1338 bytes)
	I0416 00:41:37.582014   48481 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /usr/share/ca-certificates/148972.pem (1708 bytes)
	I0416 00:41:37.605342   48481 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 00:41:37.628669   48481 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 00:41:37.645236   48481 ssh_runner.go:195] Run: openssl version
	I0416 00:41:37.650892   48481 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 00:41:37.661368   48481 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 00:41:37.665825   48481 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0416 00:41:37.665885   48481 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 00:41:37.671327   48481 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 00:41:37.681677   48481 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14897.pem && ln -fs /usr/share/ca-certificates/14897.pem /etc/ssl/certs/14897.pem"
	I0416 00:41:37.692252   48481 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14897.pem
	I0416 00:41:37.696992   48481 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 23:49 /usr/share/ca-certificates/14897.pem
	I0416 00:41:37.697041   48481 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14897.pem
	I0416 00:41:37.702822   48481 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14897.pem /etc/ssl/certs/51391683.0"
	I0416 00:41:37.713308   48481 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148972.pem && ln -fs /usr/share/ca-certificates/148972.pem /etc/ssl/certs/148972.pem"
	I0416 00:41:37.723722   48481 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148972.pem
	I0416 00:41:37.728393   48481 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 23:49 /usr/share/ca-certificates/148972.pem
	I0416 00:41:37.728443   48481 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148972.pem
	I0416 00:41:37.734052   48481 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148972.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 00:41:37.744561   48481 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 00:41:37.749331   48481 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0416 00:41:37.755546   48481 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0416 00:41:37.761388   48481 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0416 00:41:37.767265   48481 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0416 00:41:37.772895   48481 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0416 00:41:37.778649   48481 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0416 00:41:37.784292   48481 kubeadm.go:391] StartCluster: {Name:test-preload-618697 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-618697 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.234 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 00:41:37.784365   48481 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0416 00:41:37.784431   48481 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 00:41:37.822326   48481 cri.go:89] found id: ""
	I0416 00:41:37.822408   48481 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0416 00:41:37.833413   48481 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0416 00:41:37.833434   48481 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0416 00:41:37.833439   48481 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0416 00:41:37.833483   48481 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0416 00:41:37.843792   48481 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0416 00:41:37.844240   48481 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-618697" does not appear in /home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0416 00:41:37.844349   48481 kubeconfig.go:62] /home/jenkins/minikube-integration/18647-7542/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-618697" cluster setting kubeconfig missing "test-preload-618697" context setting]
	I0416 00:41:37.844590   48481 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/kubeconfig: {Name:mkbb3b028de7d57df8335e83f6dfa1b0eacb2fb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 00:41:37.845218   48481 kapi.go:59] client config for test-preload-618697: &rest.Config{Host:"https://192.168.39.234:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18647-7542/.minikube/profiles/test-preload-618697/client.crt", KeyFile:"/home/jenkins/minikube-integration/18647-7542/.minikube/profiles/test-preload-618697/client.key", CAFile:"/home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5e000), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0416 00:41:37.845788   48481 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0416 00:41:37.855364   48481 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.234
	I0416 00:41:37.855388   48481 kubeadm.go:1154] stopping kube-system containers ...
	I0416 00:41:37.855397   48481 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0416 00:41:37.855440   48481 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 00:41:37.892187   48481 cri.go:89] found id: ""
	I0416 00:41:37.892261   48481 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0416 00:41:37.908251   48481 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 00:41:37.917873   48481 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 00:41:37.917895   48481 kubeadm.go:156] found existing configuration files:
	
	I0416 00:41:37.917950   48481 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 00:41:37.927022   48481 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 00:41:37.927076   48481 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 00:41:37.936766   48481 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 00:41:37.945777   48481 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 00:41:37.945829   48481 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 00:41:37.955121   48481 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 00:41:37.963953   48481 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 00:41:37.963995   48481 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 00:41:37.973127   48481 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 00:41:37.981839   48481 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 00:41:37.981872   48481 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 00:41:37.990953   48481 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 00:41:38.000128   48481 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 00:41:38.084608   48481 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 00:41:39.094445   48481 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.009799617s)
	I0416 00:41:39.094480   48481 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0416 00:41:39.351553   48481 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 00:41:39.423966   48481 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0416 00:41:39.487209   48481 api_server.go:52] waiting for apiserver process to appear ...
	I0416 00:41:39.487348   48481 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 00:41:39.987390   48481 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 00:41:40.487879   48481 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 00:41:40.510613   48481 api_server.go:72] duration metric: took 1.023403342s to wait for apiserver process to appear ...
	I0416 00:41:40.510645   48481 api_server.go:88] waiting for apiserver healthz status ...
	I0416 00:41:40.510668   48481 api_server.go:253] Checking apiserver healthz at https://192.168.39.234:8443/healthz ...
	I0416 00:41:40.511149   48481 api_server.go:269] stopped: https://192.168.39.234:8443/healthz: Get "https://192.168.39.234:8443/healthz": dial tcp 192.168.39.234:8443: connect: connection refused
	I0416 00:41:41.011744   48481 api_server.go:253] Checking apiserver healthz at https://192.168.39.234:8443/healthz ...
	I0416 00:41:44.930161   48481 api_server.go:279] https://192.168.39.234:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0416 00:41:44.930183   48481 api_server.go:103] status: https://192.168.39.234:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0416 00:41:44.930197   48481 api_server.go:253] Checking apiserver healthz at https://192.168.39.234:8443/healthz ...
	I0416 00:41:44.940122   48481 api_server.go:279] https://192.168.39.234:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0416 00:41:44.940146   48481 api_server.go:103] status: https://192.168.39.234:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0416 00:41:45.011314   48481 api_server.go:253] Checking apiserver healthz at https://192.168.39.234:8443/healthz ...
	I0416 00:41:45.028575   48481 api_server.go:279] https://192.168.39.234:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0416 00:41:45.028611   48481 api_server.go:103] status: https://192.168.39.234:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0416 00:41:45.511142   48481 api_server.go:253] Checking apiserver healthz at https://192.168.39.234:8443/healthz ...
	I0416 00:41:45.519814   48481 api_server.go:279] https://192.168.39.234:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0416 00:41:45.519841   48481 api_server.go:103] status: https://192.168.39.234:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0416 00:41:46.011496   48481 api_server.go:253] Checking apiserver healthz at https://192.168.39.234:8443/healthz ...
	I0416 00:41:46.020136   48481 api_server.go:279] https://192.168.39.234:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0416 00:41:46.020177   48481 api_server.go:103] status: https://192.168.39.234:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0416 00:41:46.511257   48481 api_server.go:253] Checking apiserver healthz at https://192.168.39.234:8443/healthz ...
	I0416 00:41:46.516797   48481 api_server.go:279] https://192.168.39.234:8443/healthz returned 200:
	ok
	I0416 00:41:46.524716   48481 api_server.go:141] control plane version: v1.24.4
	I0416 00:41:46.524741   48481 api_server.go:131] duration metric: took 6.014089204s to wait for apiserver health ...
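The 403 and 500 responses above are the normal progression while the restarted apiserver finishes its post-start hooks: the system:anonymous 403s clear once the rbac/bootstrap-roles hook has created the default roles, and the 500s clear once every hook reports ok. A minimal stand-alone poller in the same spirit, using the endpoint from the log and skipping certificate verification (which is why such a probe is treated as anonymous):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Endpoint taken from the log above; TLS verification is skipped, so the
        // request carries no client certificate and is seen as system:anonymous.
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   2 * time.Second,
        }
        for {
            resp, err := client.Get("https://192.168.39.234:8443/healthz")
            if err != nil {
                fmt.Println("apiserver not reachable yet:", err)
            } else {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
                if resp.StatusCode == http.StatusOK {
                    return // plain 200 "ok" is what ends the wait in the log
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
    }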
	I0416 00:41:46.524750   48481 cni.go:84] Creating CNI manager for ""
	I0416 00:41:46.524756   48481 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 00:41:46.526606   48481 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0416 00:41:46.528276   48481 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0416 00:41:46.542954   48481 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
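The 496-byte conflist copied above is not reproduced in the log. As a sketch only, a bridge CNI configuration of the general shape used here looks roughly like the following; the field values are illustrative assumptions, not the exact file minikube generated:

    package main

    import (
        "log"
        "os"
    )

    // Illustrative bridge CNI config; values are assumptions, not the exact
    // 496-byte file written above.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true}
        }
      ]
    }`

    func main() {
        // Same destination path as the scp in the log above.
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            log.Fatal(err)
        }
    }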
	I0416 00:41:46.571829   48481 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 00:41:46.584754   48481 system_pods.go:59] 8 kube-system pods found
	I0416 00:41:46.584783   48481 system_pods.go:61] "coredns-6d4b75cb6d-2lxbc" [487b3379-ed23-4c7d-878e-7b8dd8a622ce] Running
	I0416 00:41:46.584788   48481 system_pods.go:61] "coredns-6d4b75cb6d-6cs62" [c3291d5f-169f-4b59-9fb0-a6bd12f87237] Running
	I0416 00:41:46.584791   48481 system_pods.go:61] "etcd-test-preload-618697" [b4b4eb86-4c9b-47e5-a5c0-2099c9a4e612] Running
	I0416 00:41:46.584797   48481 system_pods.go:61] "kube-apiserver-test-preload-618697" [d1722d3a-6a0c-4ead-9349-b2b4139f899c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0416 00:41:46.584803   48481 system_pods.go:61] "kube-controller-manager-test-preload-618697" [a4c79aa8-819a-452e-bb26-4787714b45b9] Running
	I0416 00:41:46.584810   48481 system_pods.go:61] "kube-proxy-8qpfk" [3f3ab60d-ec15-4898-8025-6e1000951d59] Running
	I0416 00:41:46.584817   48481 system_pods.go:61] "kube-scheduler-test-preload-618697" [95082cf0-1116-4f91-b21b-3d9bb58e1a14] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0416 00:41:46.584825   48481 system_pods.go:61] "storage-provisioner" [113db0bf-89e8-4fe0-93d6-f5e024960c49] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0416 00:41:46.584841   48481 system_pods.go:74] duration metric: took 12.990809ms to wait for pod list to return data ...
	I0416 00:41:46.584851   48481 node_conditions.go:102] verifying NodePressure condition ...
	I0416 00:41:46.591679   48481 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 00:41:46.591707   48481 node_conditions.go:123] node cpu capacity is 2
	I0416 00:41:46.591721   48481 node_conditions.go:105] duration metric: took 6.86294ms to run NodePressure ...
	I0416 00:41:46.591741   48481 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 00:41:46.855918   48481 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0416 00:41:46.860543   48481 kubeadm.go:733] kubelet initialised
	I0416 00:41:46.860566   48481 kubeadm.go:734] duration metric: took 4.623715ms waiting for restarted kubelet to initialise ...
	I0416 00:41:46.860576   48481 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 00:41:46.866441   48481 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-2lxbc" in "kube-system" namespace to be "Ready" ...
	I0416 00:41:46.872936   48481 pod_ready.go:97] node "test-preload-618697" hosting pod "coredns-6d4b75cb6d-2lxbc" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-618697" has status "Ready":"False"
	I0416 00:41:46.872968   48481 pod_ready.go:81] duration metric: took 6.49101ms for pod "coredns-6d4b75cb6d-2lxbc" in "kube-system" namespace to be "Ready" ...
	E0416 00:41:46.872981   48481 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-618697" hosting pod "coredns-6d4b75cb6d-2lxbc" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-618697" has status "Ready":"False"
	I0416 00:41:46.873005   48481 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-618697" in "kube-system" namespace to be "Ready" ...
	I0416 00:41:46.881020   48481 pod_ready.go:97] node "test-preload-618697" hosting pod "etcd-test-preload-618697" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-618697" has status "Ready":"False"
	I0416 00:41:46.881049   48481 pod_ready.go:81] duration metric: took 8.028443ms for pod "etcd-test-preload-618697" in "kube-system" namespace to be "Ready" ...
	E0416 00:41:46.881061   48481 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-618697" hosting pod "etcd-test-preload-618697" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-618697" has status "Ready":"False"
	I0416 00:41:46.881069   48481 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-618697" in "kube-system" namespace to be "Ready" ...
	I0416 00:41:46.887996   48481 pod_ready.go:97] node "test-preload-618697" hosting pod "kube-apiserver-test-preload-618697" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-618697" has status "Ready":"False"
	I0416 00:41:46.888020   48481 pod_ready.go:81] duration metric: took 6.93586ms for pod "kube-apiserver-test-preload-618697" in "kube-system" namespace to be "Ready" ...
	E0416 00:41:46.888030   48481 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-618697" hosting pod "kube-apiserver-test-preload-618697" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-618697" has status "Ready":"False"
	I0416 00:41:46.888038   48481 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-618697" in "kube-system" namespace to be "Ready" ...
	I0416 00:41:46.977107   48481 pod_ready.go:97] node "test-preload-618697" hosting pod "kube-controller-manager-test-preload-618697" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-618697" has status "Ready":"False"
	I0416 00:41:46.977137   48481 pod_ready.go:81] duration metric: took 89.086249ms for pod "kube-controller-manager-test-preload-618697" in "kube-system" namespace to be "Ready" ...
	E0416 00:41:46.977145   48481 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-618697" hosting pod "kube-controller-manager-test-preload-618697" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-618697" has status "Ready":"False"
	I0416 00:41:46.977151   48481 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8qpfk" in "kube-system" namespace to be "Ready" ...
	I0416 00:41:47.375808   48481 pod_ready.go:97] node "test-preload-618697" hosting pod "kube-proxy-8qpfk" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-618697" has status "Ready":"False"
	I0416 00:41:47.375835   48481 pod_ready.go:81] duration metric: took 398.662297ms for pod "kube-proxy-8qpfk" in "kube-system" namespace to be "Ready" ...
	E0416 00:41:47.375844   48481 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-618697" hosting pod "kube-proxy-8qpfk" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-618697" has status "Ready":"False"
	I0416 00:41:47.375850   48481 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-618697" in "kube-system" namespace to be "Ready" ...
	I0416 00:41:47.774436   48481 pod_ready.go:97] node "test-preload-618697" hosting pod "kube-scheduler-test-preload-618697" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-618697" has status "Ready":"False"
	I0416 00:41:47.774463   48481 pod_ready.go:81] duration metric: took 398.605918ms for pod "kube-scheduler-test-preload-618697" in "kube-system" namespace to be "Ready" ...
	E0416 00:41:47.774481   48481 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-618697" hosting pod "kube-scheduler-test-preload-618697" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-618697" has status "Ready":"False"
	I0416 00:41:47.774490   48481 pod_ready.go:38] duration metric: took 913.903849ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 00:41:47.774511   48481 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0416 00:41:47.787335   48481 ops.go:34] apiserver oom_adj: -16
	I0416 00:41:47.787352   48481 kubeadm.go:591] duration metric: took 9.953907205s to restartPrimaryControlPlane
	I0416 00:41:47.787361   48481 kubeadm.go:393] duration metric: took 10.003073926s to StartCluster
	I0416 00:41:47.787380   48481 settings.go:142] acquiring lock: {Name:mk6e42a297b4f7bfb79727f203ae36d752cbb6a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 00:41:47.787451   48481 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0416 00:41:47.788150   48481 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/kubeconfig: {Name:mkbb3b028de7d57df8335e83f6dfa1b0eacb2fb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 00:41:47.788413   48481 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.234 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0416 00:41:47.790907   48481 out.go:177] * Verifying Kubernetes components...
	I0416 00:41:47.788485   48481 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0416 00:41:47.788661   48481 config.go:182] Loaded profile config "test-preload-618697": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0416 00:41:47.790955   48481 addons.go:69] Setting storage-provisioner=true in profile "test-preload-618697"
	I0416 00:41:47.790981   48481 addons.go:234] Setting addon storage-provisioner=true in "test-preload-618697"
	I0416 00:41:47.790989   48481 addons.go:69] Setting default-storageclass=true in profile "test-preload-618697"
	W0416 00:41:47.790996   48481 addons.go:243] addon storage-provisioner should already be in state true
	I0416 00:41:47.791007   48481 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-618697"
	I0416 00:41:47.792234   48481 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 00:41:47.791025   48481 host.go:66] Checking if "test-preload-618697" exists ...
	I0416 00:41:47.792578   48481 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:41:47.792615   48481 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:41:47.792619   48481 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:41:47.792657   48481 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:41:47.807416   48481 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45853
	I0416 00:41:47.807609   48481 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34291
	I0416 00:41:47.807874   48481 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:41:47.808076   48481 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:41:47.808392   48481 main.go:141] libmachine: Using API Version  1
	I0416 00:41:47.808414   48481 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:41:47.808512   48481 main.go:141] libmachine: Using API Version  1
	I0416 00:41:47.808548   48481 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:41:47.808750   48481 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:41:47.808873   48481 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:41:47.808910   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetState
	I0416 00:41:47.809384   48481 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:41:47.809422   48481 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:41:47.811134   48481 kapi.go:59] client config for test-preload-618697: &rest.Config{Host:"https://192.168.39.234:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18647-7542/.minikube/profiles/test-preload-618697/client.crt", KeyFile:"/home/jenkins/minikube-integration/18647-7542/.minikube/profiles/test-preload-618697/client.key", CAFile:"/home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c5e000), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0416 00:41:47.811388   48481 addons.go:234] Setting addon default-storageclass=true in "test-preload-618697"
	W0416 00:41:47.811404   48481 addons.go:243] addon default-storageclass should already be in state true
	I0416 00:41:47.811427   48481 host.go:66] Checking if "test-preload-618697" exists ...
	I0416 00:41:47.811737   48481 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:41:47.811776   48481 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:41:47.824641   48481 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36863
	I0416 00:41:47.825153   48481 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:41:47.825717   48481 main.go:141] libmachine: Using API Version  1
	I0416 00:41:47.825739   48481 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:41:47.826088   48481 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:41:47.826247   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetState
	I0416 00:41:47.826312   48481 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39365
	I0416 00:41:47.826679   48481 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:41:47.827194   48481 main.go:141] libmachine: Using API Version  1
	I0416 00:41:47.827223   48481 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:41:47.827575   48481 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:41:47.828063   48481 main.go:141] libmachine: (test-preload-618697) Calling .DriverName
	I0416 00:41:47.828168   48481 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:41:47.828214   48481 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:41:47.830196   48481 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 00:41:47.831716   48481 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 00:41:47.831731   48481 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0416 00:41:47.831746   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetSSHHostname
	I0416 00:41:47.835396   48481 main.go:141] libmachine: (test-preload-618697) DBG | domain test-preload-618697 has defined MAC address 52:54:00:05:60:89 in network mk-test-preload-618697
	I0416 00:41:47.835799   48481 main.go:141] libmachine: (test-preload-618697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:60:89", ip: ""} in network mk-test-preload-618697: {Iface:virbr1 ExpiryTime:2024-04-16 01:41:14 +0000 UTC Type:0 Mac:52:54:00:05:60:89 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:test-preload-618697 Clientid:01:52:54:00:05:60:89}
	I0416 00:41:47.835842   48481 main.go:141] libmachine: (test-preload-618697) DBG | domain test-preload-618697 has defined IP address 192.168.39.234 and MAC address 52:54:00:05:60:89 in network mk-test-preload-618697
	I0416 00:41:47.836028   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetSSHPort
	I0416 00:41:47.836244   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetSSHKeyPath
	I0416 00:41:47.836419   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetSSHUsername
	I0416 00:41:47.836586   48481 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/test-preload-618697/id_rsa Username:docker}
	I0416 00:41:47.843922   48481 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39895
	I0416 00:41:47.844303   48481 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:41:47.844867   48481 main.go:141] libmachine: Using API Version  1
	I0416 00:41:47.844881   48481 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:41:47.845197   48481 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:41:47.845409   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetState
	I0416 00:41:47.846995   48481 main.go:141] libmachine: (test-preload-618697) Calling .DriverName
	I0416 00:41:47.847273   48481 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0416 00:41:47.847290   48481 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0416 00:41:47.847312   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetSSHHostname
	I0416 00:41:47.850132   48481 main.go:141] libmachine: (test-preload-618697) DBG | domain test-preload-618697 has defined MAC address 52:54:00:05:60:89 in network mk-test-preload-618697
	I0416 00:41:47.850592   48481 main.go:141] libmachine: (test-preload-618697) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:60:89", ip: ""} in network mk-test-preload-618697: {Iface:virbr1 ExpiryTime:2024-04-16 01:41:14 +0000 UTC Type:0 Mac:52:54:00:05:60:89 Iaid: IPaddr:192.168.39.234 Prefix:24 Hostname:test-preload-618697 Clientid:01:52:54:00:05:60:89}
	I0416 00:41:47.850628   48481 main.go:141] libmachine: (test-preload-618697) DBG | domain test-preload-618697 has defined IP address 192.168.39.234 and MAC address 52:54:00:05:60:89 in network mk-test-preload-618697
	I0416 00:41:47.850745   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetSSHPort
	I0416 00:41:47.850922   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetSSHKeyPath
	I0416 00:41:47.851098   48481 main.go:141] libmachine: (test-preload-618697) Calling .GetSSHUsername
	I0416 00:41:47.851230   48481 sshutil.go:53] new ssh client: &{IP:192.168.39.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/test-preload-618697/id_rsa Username:docker}
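Each "new ssh client" above is a plain SSH connection into the guest using the per-machine id_rsa key, and subsequent Run: lines (such as the systemctl start just below) execute over such sessions. A stripped-down equivalent with golang.org/x/crypto/ssh, reusing the address, user, and key path from the log; host key checking is skipped here only because this is a throwaway test VM:

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/18647-7542/.minikube/machines/test-preload-618697/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a disposable test VM, not for production
        }
        client, err := ssh.Dial("tcp", "192.168.39.234:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer session.Close()

        // Same command the log runs next on the guest.
        out, err := session.CombinedOutput("sudo systemctl start kubelet")
        if err != nil {
            log.Fatalf("remote command failed: %v\n%s", err, out)
        }
        fmt.Print(string(out))
    }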
	I0416 00:41:47.971467   48481 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 00:41:47.990543   48481 node_ready.go:35] waiting up to 6m0s for node "test-preload-618697" to be "Ready" ...
	I0416 00:41:48.104466   48481 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 00:41:48.134616   48481 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0416 00:41:49.071244   48481 main.go:141] libmachine: Making call to close driver server
	I0416 00:41:49.071267   48481 main.go:141] libmachine: Making call to close driver server
	I0416 00:41:49.071280   48481 main.go:141] libmachine: (test-preload-618697) Calling .Close
	I0416 00:41:49.071271   48481 main.go:141] libmachine: (test-preload-618697) Calling .Close
	I0416 00:41:49.071595   48481 main.go:141] libmachine: Successfully made call to close driver server
	I0416 00:41:49.071614   48481 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 00:41:49.071624   48481 main.go:141] libmachine: Making call to close driver server
	I0416 00:41:49.071624   48481 main.go:141] libmachine: Successfully made call to close driver server
	I0416 00:41:49.071631   48481 main.go:141] libmachine: (test-preload-618697) Calling .Close
	I0416 00:41:49.071637   48481 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 00:41:49.071647   48481 main.go:141] libmachine: Making call to close driver server
	I0416 00:41:49.071654   48481 main.go:141] libmachine: (test-preload-618697) Calling .Close
	I0416 00:41:49.071848   48481 main.go:141] libmachine: Successfully made call to close driver server
	I0416 00:41:49.071859   48481 main.go:141] libmachine: Successfully made call to close driver server
	I0416 00:41:49.071861   48481 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 00:41:49.071868   48481 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 00:41:49.077425   48481 main.go:141] libmachine: Making call to close driver server
	I0416 00:41:49.077442   48481 main.go:141] libmachine: (test-preload-618697) Calling .Close
	I0416 00:41:49.077686   48481 main.go:141] libmachine: Successfully made call to close driver server
	I0416 00:41:49.077707   48481 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 00:41:49.077707   48481 main.go:141] libmachine: (test-preload-618697) DBG | Closing plugin on server side
	I0416 00:41:49.079656   48481 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0416 00:41:49.080811   48481 addons.go:505] duration metric: took 1.292332319s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0416 00:41:49.994746   48481 node_ready.go:53] node "test-preload-618697" has status "Ready":"False"
	I0416 00:41:52.494196   48481 node_ready.go:53] node "test-preload-618697" has status "Ready":"False"
	I0416 00:41:54.494339   48481 node_ready.go:53] node "test-preload-618697" has status "Ready":"False"
	I0416 00:41:55.494255   48481 node_ready.go:49] node "test-preload-618697" has status "Ready":"True"
	I0416 00:41:55.494277   48481 node_ready.go:38] duration metric: took 7.503708028s for node "test-preload-618697" to be "Ready" ...
	I0416 00:41:55.494301   48481 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 00:41:55.500004   48481 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-2lxbc" in "kube-system" namespace to be "Ready" ...
	I0416 00:41:55.504450   48481 pod_ready.go:92] pod "coredns-6d4b75cb6d-2lxbc" in "kube-system" namespace has status "Ready":"True"
	I0416 00:41:55.504466   48481 pod_ready.go:81] duration metric: took 4.439655ms for pod "coredns-6d4b75cb6d-2lxbc" in "kube-system" namespace to be "Ready" ...
	I0416 00:41:55.504473   48481 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-618697" in "kube-system" namespace to be "Ready" ...
	I0416 00:41:55.508274   48481 pod_ready.go:92] pod "etcd-test-preload-618697" in "kube-system" namespace has status "Ready":"True"
	I0416 00:41:55.508296   48481 pod_ready.go:81] duration metric: took 3.816245ms for pod "etcd-test-preload-618697" in "kube-system" namespace to be "Ready" ...
	I0416 00:41:55.508308   48481 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-618697" in "kube-system" namespace to be "Ready" ...
	I0416 00:41:57.515445   48481 pod_ready.go:102] pod "kube-apiserver-test-preload-618697" in "kube-system" namespace has status "Ready":"False"
	I0416 00:41:58.516137   48481 pod_ready.go:92] pod "kube-apiserver-test-preload-618697" in "kube-system" namespace has status "Ready":"True"
	I0416 00:41:58.516160   48481 pod_ready.go:81] duration metric: took 3.007844002s for pod "kube-apiserver-test-preload-618697" in "kube-system" namespace to be "Ready" ...
	I0416 00:41:58.516168   48481 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-618697" in "kube-system" namespace to be "Ready" ...
	I0416 00:41:58.522286   48481 pod_ready.go:92] pod "kube-controller-manager-test-preload-618697" in "kube-system" namespace has status "Ready":"True"
	I0416 00:41:58.522306   48481 pod_ready.go:81] duration metric: took 6.131558ms for pod "kube-controller-manager-test-preload-618697" in "kube-system" namespace to be "Ready" ...
	I0416 00:41:58.522314   48481 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8qpfk" in "kube-system" namespace to be "Ready" ...
	I0416 00:41:58.528970   48481 pod_ready.go:92] pod "kube-proxy-8qpfk" in "kube-system" namespace has status "Ready":"True"
	I0416 00:41:58.528987   48481 pod_ready.go:81] duration metric: took 6.667759ms for pod "kube-proxy-8qpfk" in "kube-system" namespace to be "Ready" ...
	I0416 00:41:58.528994   48481 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-618697" in "kube-system" namespace to be "Ready" ...
	I0416 00:41:59.094518   48481 pod_ready.go:92] pod "kube-scheduler-test-preload-618697" in "kube-system" namespace has status "Ready":"True"
	I0416 00:41:59.094541   48481 pod_ready.go:81] duration metric: took 565.540746ms for pod "kube-scheduler-test-preload-618697" in "kube-system" namespace to be "Ready" ...
	I0416 00:41:59.094551   48481 pod_ready.go:38] duration metric: took 3.600235199s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
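The pod_ready helper polls each system-critical pod until its Ready condition is True (on the earlier pass it skipped every pod because the node itself was still NotReady). A condensed client-go version of that per-pod check, assuming the kubeconfig path shown earlier in the log and using one of the pod names above as an example:

    package main

    import (
        "context"
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path taken from the log above.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18647-7542/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }

        pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(), "kube-scheduler-test-preload-618697", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }

        ready := false
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                ready = true
            }
        }
        fmt.Printf("pod %s Ready=%v\n", pod.Name, ready)
    }

In the real helper this check runs in a polling loop with the 4m0s (or 6m0s) deadline shown in the log, and the "Ready":"True" lines above correspond to the condition flipping for each pod in turn.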
	I0416 00:41:59.094564   48481 api_server.go:52] waiting for apiserver process to appear ...
	I0416 00:41:59.094622   48481 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 00:41:59.109481   48481 api_server.go:72] duration metric: took 11.321034659s to wait for apiserver process to appear ...
	I0416 00:41:59.109503   48481 api_server.go:88] waiting for apiserver healthz status ...
	I0416 00:41:59.109525   48481 api_server.go:253] Checking apiserver healthz at https://192.168.39.234:8443/healthz ...
	I0416 00:41:59.114317   48481 api_server.go:279] https://192.168.39.234:8443/healthz returned 200:
	ok
	I0416 00:41:59.115203   48481 api_server.go:141] control plane version: v1.24.4
	I0416 00:41:59.115224   48481 api_server.go:131] duration metric: took 5.713746ms to wait for apiserver health ...
	I0416 00:41:59.115233   48481 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 00:41:59.297127   48481 system_pods.go:59] 7 kube-system pods found
	I0416 00:41:59.297164   48481 system_pods.go:61] "coredns-6d4b75cb6d-2lxbc" [487b3379-ed23-4c7d-878e-7b8dd8a622ce] Running
	I0416 00:41:59.297169   48481 system_pods.go:61] "etcd-test-preload-618697" [b4b4eb86-4c9b-47e5-a5c0-2099c9a4e612] Running
	I0416 00:41:59.297177   48481 system_pods.go:61] "kube-apiserver-test-preload-618697" [d1722d3a-6a0c-4ead-9349-b2b4139f899c] Running
	I0416 00:41:59.297181   48481 system_pods.go:61] "kube-controller-manager-test-preload-618697" [a4c79aa8-819a-452e-bb26-4787714b45b9] Running
	I0416 00:41:59.297183   48481 system_pods.go:61] "kube-proxy-8qpfk" [3f3ab60d-ec15-4898-8025-6e1000951d59] Running
	I0416 00:41:59.297186   48481 system_pods.go:61] "kube-scheduler-test-preload-618697" [95082cf0-1116-4f91-b21b-3d9bb58e1a14] Running
	I0416 00:41:59.297190   48481 system_pods.go:61] "storage-provisioner" [113db0bf-89e8-4fe0-93d6-f5e024960c49] Running
	I0416 00:41:59.297196   48481 system_pods.go:74] duration metric: took 181.957622ms to wait for pod list to return data ...
	I0416 00:41:59.297203   48481 default_sa.go:34] waiting for default service account to be created ...
	I0416 00:41:59.494711   48481 default_sa.go:45] found service account: "default"
	I0416 00:41:59.494735   48481 default_sa.go:55] duration metric: took 197.525102ms for default service account to be created ...
	I0416 00:41:59.494745   48481 system_pods.go:116] waiting for k8s-apps to be running ...
	I0416 00:41:59.697645   48481 system_pods.go:86] 7 kube-system pods found
	I0416 00:41:59.697670   48481 system_pods.go:89] "coredns-6d4b75cb6d-2lxbc" [487b3379-ed23-4c7d-878e-7b8dd8a622ce] Running
	I0416 00:41:59.697677   48481 system_pods.go:89] "etcd-test-preload-618697" [b4b4eb86-4c9b-47e5-a5c0-2099c9a4e612] Running
	I0416 00:41:59.697681   48481 system_pods.go:89] "kube-apiserver-test-preload-618697" [d1722d3a-6a0c-4ead-9349-b2b4139f899c] Running
	I0416 00:41:59.697685   48481 system_pods.go:89] "kube-controller-manager-test-preload-618697" [a4c79aa8-819a-452e-bb26-4787714b45b9] Running
	I0416 00:41:59.697689   48481 system_pods.go:89] "kube-proxy-8qpfk" [3f3ab60d-ec15-4898-8025-6e1000951d59] Running
	I0416 00:41:59.697693   48481 system_pods.go:89] "kube-scheduler-test-preload-618697" [95082cf0-1116-4f91-b21b-3d9bb58e1a14] Running
	I0416 00:41:59.697697   48481 system_pods.go:89] "storage-provisioner" [113db0bf-89e8-4fe0-93d6-f5e024960c49] Running
	I0416 00:41:59.697702   48481 system_pods.go:126] duration metric: took 202.953533ms to wait for k8s-apps to be running ...
	I0416 00:41:59.697709   48481 system_svc.go:44] waiting for kubelet service to be running ....
	I0416 00:41:59.697758   48481 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 00:41:59.712679   48481 system_svc.go:56] duration metric: took 14.960281ms WaitForService to wait for kubelet
	I0416 00:41:59.712707   48481 kubeadm.go:576] duration metric: took 11.924265016s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 00:41:59.712729   48481 node_conditions.go:102] verifying NodePressure condition ...
	I0416 00:41:59.895225   48481 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 00:41:59.895246   48481 node_conditions.go:123] node cpu capacity is 2
	I0416 00:41:59.895260   48481 node_conditions.go:105] duration metric: took 182.521012ms to run NodePressure ...
	I0416 00:41:59.895270   48481 start.go:240] waiting for startup goroutines ...
	I0416 00:41:59.895277   48481 start.go:245] waiting for cluster config update ...
	I0416 00:41:59.895285   48481 start.go:254] writing updated cluster config ...
	I0416 00:41:59.895533   48481 ssh_runner.go:195] Run: rm -f paused
	I0416 00:41:59.943090   48481 start.go:600] kubectl: 1.29.3, cluster: 1.24.4 (minor skew: 5)
	I0416 00:41:59.944977   48481 out.go:177] 
	W0416 00:41:59.946393   48481 out.go:239] ! /usr/local/bin/kubectl is version 1.29.3, which may have incompatibilities with Kubernetes 1.24.4.
	I0416 00:41:59.947568   48481 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0416 00:41:59.948917   48481 out.go:177] * Done! kubectl is now configured to use "test-preload-618697" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 16 00:42:00 test-preload-618697 crio[689]: time="2024-04-16 00:42:00.829968579Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713228120829947486,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=537e0e78-7c7c-4727-8d60-b07fad6b70e4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 00:42:00 test-preload-618697 crio[689]: time="2024-04-16 00:42:00.830707867Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8d219206-7f31-4592-9ad0-bbfd773a9551 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:42:00 test-preload-618697 crio[689]: time="2024-04-16 00:42:00.830756187Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8d219206-7f31-4592-9ad0-bbfd773a9551 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:42:00 test-preload-618697 crio[689]: time="2024-04-16 00:42:00.830929525Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:293038a9f1b722c05e0fc0b75f3d4f763bcb9996a03db13c5ae7b0537d383f84,PodSandboxId:bfdb3f1ce6aa301e2ff7138ad53149dfa8867fb0d3297c8d2a451d1e00627142,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1713228113717483016,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-2lxbc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 487b3379-ed23-4c7d-878e-7b8dd8a622ce,},Annotations:map[string]string{io.kubernetes.container.hash: e857b886,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60c75706ec72bd75b03c873aa08360b72c82e82eb7c843a25f15fa1eafe74119,PodSandboxId:a66dcd9cc0fafb2def0b126f40fae4fd05a2fc32c77ecb68c4fc08aa5571077e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1713228106478701363,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8qpfk,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 3f3ab60d-ec15-4898-8025-6e1000951d59,},Annotations:map[string]string{io.kubernetes.container.hash: 9f45e4c2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16f11fbe01bcfee286fc933a3eb118052aba4ca190c77e2d646b68063521f7bd,PodSandboxId:f17497020d919f3e4ef5c62e0b6e526a17290b2def1ec7f870a5a50cbaea5e13,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713228106190088499,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11
3db0bf-89e8-4fe0-93d6-f5e024960c49,},Annotations:map[string]string{io.kubernetes.container.hash: ce3984b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5696f4fd05af7c1a1d7697e99df2de404d84081b46c2f69381b0064d357cef1b,PodSandboxId:5ef6dee583dd5fbbc7c0043b5174e54bfb425d70f291017991b7e96832d88bff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1713228100295088930,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-618697,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9107fff55
a39e4ea72bdb46377230f46,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25278389368e457bb2538598f1c67e4a849a4ed083e93d4db0fbc63ffd910d83,PodSandboxId:0b7ad62786286ce85b28af712351afcf4db15ec55c33704abe0e31ad3507a621,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1713228100217082731,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-618697,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 624676528d950755299f40776bfbe3af,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e1794a03855b004fc3e11ae0b1a948a251cc0a6b9098cd377614fb728db5030,PodSandboxId:d4aaf41b4692af0af6fa1cea3d67d451ea7c7bb838b2299ab57c6634ff652908,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1713228100209750521,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-618697,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b43488d0cc52eaaf351c326aa80040d8,}
,Annotations:map[string]string{io.kubernetes.container.hash: 328d1f44,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae922470d5f3eab28262309ed28f6d59a0a3fdc2200999ac0a1561cb18661076,PodSandboxId:a93c6c2d4724bda14e8ad798987d94773e89daa42326f9c38f8523102559955f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1713228100192236725,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-618697,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dc8293a675127e564ed1d36fa33941c,},Annotation
s:map[string]string{io.kubernetes.container.hash: 525081ab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8d219206-7f31-4592-9ad0-bbfd773a9551 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:42:00 test-preload-618697 crio[689]: time="2024-04-16 00:42:00.867031665Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=af221053-f8c4-4cdf-87a4-96a79f1722b7 name=/runtime.v1.RuntimeService/Version
	Apr 16 00:42:00 test-preload-618697 crio[689]: time="2024-04-16 00:42:00.867098304Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=af221053-f8c4-4cdf-87a4-96a79f1722b7 name=/runtime.v1.RuntimeService/Version
	Apr 16 00:42:00 test-preload-618697 crio[689]: time="2024-04-16 00:42:00.868135741Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a9aaa5f5-4ee9-4489-bc2a-692366f74070 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 00:42:00 test-preload-618697 crio[689]: time="2024-04-16 00:42:00.868794295Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713228120868635165,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a9aaa5f5-4ee9-4489-bc2a-692366f74070 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 00:42:00 test-preload-618697 crio[689]: time="2024-04-16 00:42:00.869554425Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=646d2f1f-bd08-4e54-86f5-17fd02053571 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:42:00 test-preload-618697 crio[689]: time="2024-04-16 00:42:00.869603032Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=646d2f1f-bd08-4e54-86f5-17fd02053571 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:42:00 test-preload-618697 crio[689]: time="2024-04-16 00:42:00.869930001Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:293038a9f1b722c05e0fc0b75f3d4f763bcb9996a03db13c5ae7b0537d383f84,PodSandboxId:bfdb3f1ce6aa301e2ff7138ad53149dfa8867fb0d3297c8d2a451d1e00627142,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1713228113717483016,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-2lxbc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 487b3379-ed23-4c7d-878e-7b8dd8a622ce,},Annotations:map[string]string{io.kubernetes.container.hash: e857b886,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60c75706ec72bd75b03c873aa08360b72c82e82eb7c843a25f15fa1eafe74119,PodSandboxId:a66dcd9cc0fafb2def0b126f40fae4fd05a2fc32c77ecb68c4fc08aa5571077e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1713228106478701363,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8qpfk,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 3f3ab60d-ec15-4898-8025-6e1000951d59,},Annotations:map[string]string{io.kubernetes.container.hash: 9f45e4c2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16f11fbe01bcfee286fc933a3eb118052aba4ca190c77e2d646b68063521f7bd,PodSandboxId:f17497020d919f3e4ef5c62e0b6e526a17290b2def1ec7f870a5a50cbaea5e13,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713228106190088499,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11
3db0bf-89e8-4fe0-93d6-f5e024960c49,},Annotations:map[string]string{io.kubernetes.container.hash: ce3984b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5696f4fd05af7c1a1d7697e99df2de404d84081b46c2f69381b0064d357cef1b,PodSandboxId:5ef6dee583dd5fbbc7c0043b5174e54bfb425d70f291017991b7e96832d88bff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1713228100295088930,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-618697,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9107fff55
a39e4ea72bdb46377230f46,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25278389368e457bb2538598f1c67e4a849a4ed083e93d4db0fbc63ffd910d83,PodSandboxId:0b7ad62786286ce85b28af712351afcf4db15ec55c33704abe0e31ad3507a621,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1713228100217082731,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-618697,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 624676528d950755299f40776bfbe3af,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e1794a03855b004fc3e11ae0b1a948a251cc0a6b9098cd377614fb728db5030,PodSandboxId:d4aaf41b4692af0af6fa1cea3d67d451ea7c7bb838b2299ab57c6634ff652908,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1713228100209750521,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-618697,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b43488d0cc52eaaf351c326aa80040d8,}
,Annotations:map[string]string{io.kubernetes.container.hash: 328d1f44,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae922470d5f3eab28262309ed28f6d59a0a3fdc2200999ac0a1561cb18661076,PodSandboxId:a93c6c2d4724bda14e8ad798987d94773e89daa42326f9c38f8523102559955f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1713228100192236725,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-618697,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dc8293a675127e564ed1d36fa33941c,},Annotation
s:map[string]string{io.kubernetes.container.hash: 525081ab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=646d2f1f-bd08-4e54-86f5-17fd02053571 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:42:00 test-preload-618697 crio[689]: time="2024-04-16 00:42:00.906098245Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cc7341f5-3878-417d-8420-cf306c196bd4 name=/runtime.v1.RuntimeService/Version
	Apr 16 00:42:00 test-preload-618697 crio[689]: time="2024-04-16 00:42:00.906172809Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cc7341f5-3878-417d-8420-cf306c196bd4 name=/runtime.v1.RuntimeService/Version
	Apr 16 00:42:00 test-preload-618697 crio[689]: time="2024-04-16 00:42:00.907865375Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2ba03971-2bdd-4646-9030-b1d06d610b2c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 00:42:00 test-preload-618697 crio[689]: time="2024-04-16 00:42:00.908388763Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713228120908362907,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2ba03971-2bdd-4646-9030-b1d06d610b2c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 00:42:00 test-preload-618697 crio[689]: time="2024-04-16 00:42:00.908867716Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f526811b-e9a5-4d77-bcc5-3a13aad4729e name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:42:00 test-preload-618697 crio[689]: time="2024-04-16 00:42:00.908913493Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f526811b-e9a5-4d77-bcc5-3a13aad4729e name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:42:00 test-preload-618697 crio[689]: time="2024-04-16 00:42:00.909083705Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:293038a9f1b722c05e0fc0b75f3d4f763bcb9996a03db13c5ae7b0537d383f84,PodSandboxId:bfdb3f1ce6aa301e2ff7138ad53149dfa8867fb0d3297c8d2a451d1e00627142,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1713228113717483016,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-2lxbc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 487b3379-ed23-4c7d-878e-7b8dd8a622ce,},Annotations:map[string]string{io.kubernetes.container.hash: e857b886,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60c75706ec72bd75b03c873aa08360b72c82e82eb7c843a25f15fa1eafe74119,PodSandboxId:a66dcd9cc0fafb2def0b126f40fae4fd05a2fc32c77ecb68c4fc08aa5571077e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1713228106478701363,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8qpfk,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 3f3ab60d-ec15-4898-8025-6e1000951d59,},Annotations:map[string]string{io.kubernetes.container.hash: 9f45e4c2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16f11fbe01bcfee286fc933a3eb118052aba4ca190c77e2d646b68063521f7bd,PodSandboxId:f17497020d919f3e4ef5c62e0b6e526a17290b2def1ec7f870a5a50cbaea5e13,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713228106190088499,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11
3db0bf-89e8-4fe0-93d6-f5e024960c49,},Annotations:map[string]string{io.kubernetes.container.hash: ce3984b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5696f4fd05af7c1a1d7697e99df2de404d84081b46c2f69381b0064d357cef1b,PodSandboxId:5ef6dee583dd5fbbc7c0043b5174e54bfb425d70f291017991b7e96832d88bff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1713228100295088930,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-618697,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9107fff55
a39e4ea72bdb46377230f46,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25278389368e457bb2538598f1c67e4a849a4ed083e93d4db0fbc63ffd910d83,PodSandboxId:0b7ad62786286ce85b28af712351afcf4db15ec55c33704abe0e31ad3507a621,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1713228100217082731,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-618697,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 624676528d950755299f40776bfbe3af,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e1794a03855b004fc3e11ae0b1a948a251cc0a6b9098cd377614fb728db5030,PodSandboxId:d4aaf41b4692af0af6fa1cea3d67d451ea7c7bb838b2299ab57c6634ff652908,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1713228100209750521,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-618697,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b43488d0cc52eaaf351c326aa80040d8,}
,Annotations:map[string]string{io.kubernetes.container.hash: 328d1f44,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae922470d5f3eab28262309ed28f6d59a0a3fdc2200999ac0a1561cb18661076,PodSandboxId:a93c6c2d4724bda14e8ad798987d94773e89daa42326f9c38f8523102559955f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1713228100192236725,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-618697,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dc8293a675127e564ed1d36fa33941c,},Annotation
s:map[string]string{io.kubernetes.container.hash: 525081ab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f526811b-e9a5-4d77-bcc5-3a13aad4729e name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:42:00 test-preload-618697 crio[689]: time="2024-04-16 00:42:00.945329553Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=42b2e76a-d5f0-45c9-9cd2-b30508e95c64 name=/runtime.v1.RuntimeService/Version
	Apr 16 00:42:00 test-preload-618697 crio[689]: time="2024-04-16 00:42:00.945400220Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=42b2e76a-d5f0-45c9-9cd2-b30508e95c64 name=/runtime.v1.RuntimeService/Version
	Apr 16 00:42:00 test-preload-618697 crio[689]: time="2024-04-16 00:42:00.946388327Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5c71b072-7441-4c0d-a6b6-4173dcfd2ce9 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 00:42:00 test-preload-618697 crio[689]: time="2024-04-16 00:42:00.946852317Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713228120946828933,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5c71b072-7441-4c0d-a6b6-4173dcfd2ce9 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 00:42:00 test-preload-618697 crio[689]: time="2024-04-16 00:42:00.947379389Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=57b8a178-4e8f-4422-9528-3c3294c944d4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:42:00 test-preload-618697 crio[689]: time="2024-04-16 00:42:00.947434045Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=57b8a178-4e8f-4422-9528-3c3294c944d4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:42:00 test-preload-618697 crio[689]: time="2024-04-16 00:42:00.947589836Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:293038a9f1b722c05e0fc0b75f3d4f763bcb9996a03db13c5ae7b0537d383f84,PodSandboxId:bfdb3f1ce6aa301e2ff7138ad53149dfa8867fb0d3297c8d2a451d1e00627142,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1713228113717483016,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-2lxbc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 487b3379-ed23-4c7d-878e-7b8dd8a622ce,},Annotations:map[string]string{io.kubernetes.container.hash: e857b886,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60c75706ec72bd75b03c873aa08360b72c82e82eb7c843a25f15fa1eafe74119,PodSandboxId:a66dcd9cc0fafb2def0b126f40fae4fd05a2fc32c77ecb68c4fc08aa5571077e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1713228106478701363,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8qpfk,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 3f3ab60d-ec15-4898-8025-6e1000951d59,},Annotations:map[string]string{io.kubernetes.container.hash: 9f45e4c2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:16f11fbe01bcfee286fc933a3eb118052aba4ca190c77e2d646b68063521f7bd,PodSandboxId:f17497020d919f3e4ef5c62e0b6e526a17290b2def1ec7f870a5a50cbaea5e13,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713228106190088499,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11
3db0bf-89e8-4fe0-93d6-f5e024960c49,},Annotations:map[string]string{io.kubernetes.container.hash: ce3984b4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5696f4fd05af7c1a1d7697e99df2de404d84081b46c2f69381b0064d357cef1b,PodSandboxId:5ef6dee583dd5fbbc7c0043b5174e54bfb425d70f291017991b7e96832d88bff,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1713228100295088930,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-618697,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9107fff55
a39e4ea72bdb46377230f46,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25278389368e457bb2538598f1c67e4a849a4ed083e93d4db0fbc63ffd910d83,PodSandboxId:0b7ad62786286ce85b28af712351afcf4db15ec55c33704abe0e31ad3507a621,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1713228100217082731,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-618697,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 624676528d950755299f40776bfbe3af,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e1794a03855b004fc3e11ae0b1a948a251cc0a6b9098cd377614fb728db5030,PodSandboxId:d4aaf41b4692af0af6fa1cea3d67d451ea7c7bb838b2299ab57c6634ff652908,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1713228100209750521,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-618697,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b43488d0cc52eaaf351c326aa80040d8,}
,Annotations:map[string]string{io.kubernetes.container.hash: 328d1f44,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae922470d5f3eab28262309ed28f6d59a0a3fdc2200999ac0a1561cb18661076,PodSandboxId:a93c6c2d4724bda14e8ad798987d94773e89daa42326f9c38f8523102559955f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1713228100192236725,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-618697,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0dc8293a675127e564ed1d36fa33941c,},Annotation
s:map[string]string{io.kubernetes.container.hash: 525081ab,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=57b8a178-4e8f-4422-9528-3c3294c944d4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	293038a9f1b72       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   7 seconds ago       Running             coredns                   1                   bfdb3f1ce6aa3       coredns-6d4b75cb6d-2lxbc
	60c75706ec72b       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   14 seconds ago      Running             kube-proxy                1                   a66dcd9cc0faf       kube-proxy-8qpfk
	16f11fbe01bcf       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Running             storage-provisioner       1                   f17497020d919       storage-provisioner
	5696f4fd05af7       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   20 seconds ago      Running             kube-scheduler            1                   5ef6dee583dd5       kube-scheduler-test-preload-618697
	25278389368e4       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   20 seconds ago      Running             kube-controller-manager   1                   0b7ad62786286       kube-controller-manager-test-preload-618697
	6e1794a03855b       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   20 seconds ago      Running             etcd                      1                   d4aaf41b4692a       etcd-test-preload-618697
	ae922470d5f3e       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   20 seconds ago      Running             kube-apiserver            1                   a93c6c2d4724b       kube-apiserver-test-preload-618697
	
	
	==> coredns [293038a9f1b722c05e0fc0b75f3d4f763bcb9996a03db13c5ae7b0537d383f84] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:53781 - 3080 "HINFO IN 6513174949617877842.3698501409680951426. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009925003s
	
	
	==> describe nodes <==
	Name:               test-preload-618697
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-618697
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e98a202b00bb48daab8bbee2f0c7bcf1556f8388
	                    minikube.k8s.io/name=test-preload-618697
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_16T00_40_23_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 00:40:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-618697
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 00:41:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 00:41:55 +0000   Tue, 16 Apr 2024 00:40:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 00:41:55 +0000   Tue, 16 Apr 2024 00:40:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 00:41:55 +0000   Tue, 16 Apr 2024 00:40:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 00:41:55 +0000   Tue, 16 Apr 2024 00:41:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.234
	  Hostname:    test-preload-618697
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e8e8a63ff4304f0f94e94e35f4249cf1
	  System UUID:                e8e8a63f-f430-4f0f-94e9-4e35f4249cf1
	  Boot ID:                    112d6f80-e1f9-4c8a-855b-f6d31824f12a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-2lxbc                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     85s
	  kube-system                 etcd-test-preload-618697                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         98s
	  kube-system                 kube-apiserver-test-preload-618697             250m (12%)    0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-controller-manager-test-preload-618697    200m (10%)    0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-proxy-8qpfk                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-scheduler-test-preload-618697             100m (5%)     0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 14s                  kube-proxy       
	  Normal  Starting                 84s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  106s (x4 over 106s)  kubelet          Node test-preload-618697 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    106s (x4 over 106s)  kubelet          Node test-preload-618697 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     106s (x4 over 106s)  kubelet          Node test-preload-618697 status is now: NodeHasSufficientPID
	  Normal  Starting                 98s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  98s                  kubelet          Node test-preload-618697 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    98s                  kubelet          Node test-preload-618697 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     98s                  kubelet          Node test-preload-618697 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  98s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                88s                  kubelet          Node test-preload-618697 status is now: NodeReady
	  Normal  RegisteredNode           85s                  node-controller  Node test-preload-618697 event: Registered Node test-preload-618697 in Controller
	  Normal  Starting                 22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)    kubelet          Node test-preload-618697 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)    kubelet          Node test-preload-618697 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)    kubelet          Node test-preload-618697 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4s                   node-controller  Node test-preload-618697 event: Registered Node test-preload-618697 in Controller
	
	
	==> dmesg <==
	[Apr16 00:41] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052142] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040392] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.575377] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.658676] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.470691] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.115453] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.061319] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059670] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.172403] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.108098] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.278504] systemd-fstab-generator[675]: Ignoring "noauto" option for root device
	[ +13.245504] systemd-fstab-generator[944]: Ignoring "noauto" option for root device
	[  +0.066121] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.003187] systemd-fstab-generator[1074]: Ignoring "noauto" option for root device
	[  +5.943954] kauditd_printk_skb: 105 callbacks suppressed
	[  +2.628751] systemd-fstab-generator[1706]: Ignoring "noauto" option for root device
	[  +5.670509] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [6e1794a03855b004fc3e11ae0b1a948a251cc0a6b9098cd377614fb728db5030] <==
	{"level":"info","ts":"2024-04-16T00:41:40.624Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"de9917ec5c740094","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-04-16T00:41:40.625Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-04-16T00:41:40.625Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"de9917ec5c740094 switched to configuration voters=(16039877851787559060)"}
	{"level":"info","ts":"2024-04-16T00:41:40.625Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6193f7f4ee516b71","local-member-id":"de9917ec5c740094","added-peer-id":"de9917ec5c740094","added-peer-peer-urls":["https://192.168.39.234:2380"]}
	{"level":"info","ts":"2024-04-16T00:41:40.625Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6193f7f4ee516b71","local-member-id":"de9917ec5c740094","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T00:41:40.625Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T00:41:40.630Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-16T00:41:40.637Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"de9917ec5c740094","initial-advertise-peer-urls":["https://192.168.39.234:2380"],"listen-peer-urls":["https://192.168.39.234:2380"],"advertise-client-urls":["https://192.168.39.234:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.234:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-16T00:41:40.637Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-16T00:41:40.634Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.234:2380"}
	{"level":"info","ts":"2024-04-16T00:41:40.637Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.234:2380"}
	{"level":"info","ts":"2024-04-16T00:41:42.464Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"de9917ec5c740094 is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-16T00:41:42.464Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"de9917ec5c740094 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-16T00:41:42.464Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"de9917ec5c740094 received MsgPreVoteResp from de9917ec5c740094 at term 2"}
	{"level":"info","ts":"2024-04-16T00:41:42.464Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"de9917ec5c740094 became candidate at term 3"}
	{"level":"info","ts":"2024-04-16T00:41:42.464Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"de9917ec5c740094 received MsgVoteResp from de9917ec5c740094 at term 3"}
	{"level":"info","ts":"2024-04-16T00:41:42.464Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"de9917ec5c740094 became leader at term 3"}
	{"level":"info","ts":"2024-04-16T00:41:42.464Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: de9917ec5c740094 elected leader de9917ec5c740094 at term 3"}
	{"level":"info","ts":"2024-04-16T00:41:42.466Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"de9917ec5c740094","local-member-attributes":"{Name:test-preload-618697 ClientURLs:[https://192.168.39.234:2379]}","request-path":"/0/members/de9917ec5c740094/attributes","cluster-id":"6193f7f4ee516b71","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-16T00:41:42.466Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-16T00:41:42.467Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-16T00:41:42.468Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-16T00:41:42.468Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.234:2379"}
	{"level":"info","ts":"2024-04-16T00:41:42.468Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-16T00:41:42.469Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 00:42:01 up 0 min,  0 users,  load average: 0.41, 0.12, 0.04
	Linux test-preload-618697 5.10.207 #1 SMP Mon Apr 15 15:01:07 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [ae922470d5f3eab28262309ed28f6d59a0a3fdc2200999ac0a1561cb18661076] <==
	I0416 00:41:44.929398       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0416 00:41:44.929420       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0416 00:41:44.929600       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0416 00:41:44.929635       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0416 00:41:44.929645       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0416 00:41:44.929689       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0416 00:41:44.931160       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	E0416 00:41:44.952335       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0416 00:41:44.982829       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0416 00:41:44.996883       1 cache.go:39] Caches are synced for autoregister controller
	I0416 00:41:45.003572       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0416 00:41:45.003987       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0416 00:41:45.004523       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0416 00:41:45.046347       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0416 00:41:45.056467       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0416 00:41:45.497164       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0416 00:41:45.870011       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0416 00:41:46.741087       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0416 00:41:46.754905       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0416 00:41:46.807086       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0416 00:41:46.828189       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0416 00:41:46.838237       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0416 00:41:46.880101       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0416 00:41:57.624604       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0416 00:41:57.676907       1 controller.go:611] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [25278389368e457bb2538598f1c67e4a849a4ed083e93d4db0fbc63ffd910d83] <==
	I0416 00:41:57.473574       1 shared_informer.go:262] Caches are synced for taint
	I0416 00:41:57.473827       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0416 00:41:57.473922       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-618697. Assuming now as a timestamp.
	I0416 00:41:57.473945       1 shared_informer.go:262] Caches are synced for namespace
	I0416 00:41:57.474006       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0416 00:41:57.474682       1 event.go:294] "Event occurred" object="test-preload-618697" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-618697 event: Registered Node test-preload-618697 in Controller"
	I0416 00:41:57.474800       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0416 00:41:57.478669       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0416 00:41:57.484389       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I0416 00:41:57.485187       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0416 00:41:57.485335       1 shared_informer.go:262] Caches are synced for PVC protection
	I0416 00:41:57.485204       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0416 00:41:57.485383       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0416 00:41:57.488401       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0416 00:41:57.497465       1 shared_informer.go:262] Caches are synced for endpoint
	I0416 00:41:57.525450       1 shared_informer.go:262] Caches are synced for stateful set
	I0416 00:41:57.572295       1 shared_informer.go:262] Caches are synced for ephemeral
	I0416 00:41:57.575727       1 shared_informer.go:262] Caches are synced for expand
	I0416 00:41:57.583367       1 shared_informer.go:262] Caches are synced for persistent volume
	I0416 00:41:57.584082       1 shared_informer.go:262] Caches are synced for attach detach
	I0416 00:41:57.700059       1 shared_informer.go:262] Caches are synced for resource quota
	I0416 00:41:57.708432       1 shared_informer.go:262] Caches are synced for resource quota
	I0416 00:41:58.080724       1 shared_informer.go:262] Caches are synced for garbage collector
	I0416 00:41:58.081510       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0416 00:41:58.118629       1 shared_informer.go:262] Caches are synced for garbage collector
	
	
	==> kube-proxy [60c75706ec72bd75b03c873aa08360b72c82e82eb7c843a25f15fa1eafe74119] <==
	I0416 00:41:46.781980       1 node.go:163] Successfully retrieved node IP: 192.168.39.234
	I0416 00:41:46.782466       1 server_others.go:138] "Detected node IP" address="192.168.39.234"
	I0416 00:41:46.782601       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0416 00:41:46.866610       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0416 00:41:46.866653       1 server_others.go:206] "Using iptables Proxier"
	I0416 00:41:46.867792       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0416 00:41:46.868730       1 server.go:661] "Version info" version="v1.24.4"
	I0416 00:41:46.868816       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 00:41:46.872414       1 config.go:317] "Starting service config controller"
	I0416 00:41:46.872446       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0416 00:41:46.872468       1 config.go:226] "Starting endpoint slice config controller"
	I0416 00:41:46.872472       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0416 00:41:46.876089       1 config.go:444] "Starting node config controller"
	I0416 00:41:46.876117       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0416 00:41:46.972707       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0416 00:41:46.972768       1 shared_informer.go:262] Caches are synced for service config
	I0416 00:41:46.976475       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [5696f4fd05af7c1a1d7697e99df2de404d84081b46c2f69381b0064d357cef1b] <==
	I0416 00:41:41.582694       1 serving.go:348] Generated self-signed cert in-memory
	W0416 00:41:44.958450       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0416 00:41:44.958881       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0416 00:41:44.959007       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0416 00:41:44.959036       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0416 00:41:44.984444       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0416 00:41:44.984511       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 00:41:44.987510       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0416 00:41:44.987674       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0416 00:41:44.987726       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0416 00:41:44.988217       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0416 00:41:45.088929       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 16 00:41:45 test-preload-618697 kubelet[1081]: I0416 00:41:45.481323    1081 apiserver.go:52] "Watching apiserver"
	Apr 16 00:41:45 test-preload-618697 kubelet[1081]: I0416 00:41:45.485523    1081 topology_manager.go:200] "Topology Admit Handler"
	Apr 16 00:41:45 test-preload-618697 kubelet[1081]: I0416 00:41:45.485612    1081 topology_manager.go:200] "Topology Admit Handler"
	Apr 16 00:41:45 test-preload-618697 kubelet[1081]: I0416 00:41:45.485658    1081 topology_manager.go:200] "Topology Admit Handler"
	Apr 16 00:41:45 test-preload-618697 kubelet[1081]: I0416 00:41:45.485688    1081 topology_manager.go:200] "Topology Admit Handler"
	Apr 16 00:41:45 test-preload-618697 kubelet[1081]: E0416 00:41:45.491469    1081 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-2lxbc" podUID=487b3379-ed23-4c7d-878e-7b8dd8a622ce
	Apr 16 00:41:45 test-preload-618697 kubelet[1081]: I0416 00:41:45.564017    1081 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3f3ab60d-ec15-4898-8025-6e1000951d59-kube-proxy\") pod \"kube-proxy-8qpfk\" (UID: \"3f3ab60d-ec15-4898-8025-6e1000951d59\") " pod="kube-system/kube-proxy-8qpfk"
	Apr 16 00:41:45 test-preload-618697 kubelet[1081]: I0416 00:41:45.564489    1081 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7529p\" (UniqueName: \"kubernetes.io/projected/3f3ab60d-ec15-4898-8025-6e1000951d59-kube-api-access-7529p\") pod \"kube-proxy-8qpfk\" (UID: \"3f3ab60d-ec15-4898-8025-6e1000951d59\") " pod="kube-system/kube-proxy-8qpfk"
	Apr 16 00:41:45 test-preload-618697 kubelet[1081]: I0416 00:41:45.564568    1081 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3f3ab60d-ec15-4898-8025-6e1000951d59-xtables-lock\") pod \"kube-proxy-8qpfk\" (UID: \"3f3ab60d-ec15-4898-8025-6e1000951d59\") " pod="kube-system/kube-proxy-8qpfk"
	Apr 16 00:41:45 test-preload-618697 kubelet[1081]: I0416 00:41:45.564617    1081 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/487b3379-ed23-4c7d-878e-7b8dd8a622ce-config-volume\") pod \"coredns-6d4b75cb6d-2lxbc\" (UID: \"487b3379-ed23-4c7d-878e-7b8dd8a622ce\") " pod="kube-system/coredns-6d4b75cb6d-2lxbc"
	Apr 16 00:41:45 test-preload-618697 kubelet[1081]: I0416 00:41:45.564670    1081 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/113db0bf-89e8-4fe0-93d6-f5e024960c49-tmp\") pod \"storage-provisioner\" (UID: \"113db0bf-89e8-4fe0-93d6-f5e024960c49\") " pod="kube-system/storage-provisioner"
	Apr 16 00:41:45 test-preload-618697 kubelet[1081]: I0416 00:41:45.564721    1081 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8dx5\" (UniqueName: \"kubernetes.io/projected/113db0bf-89e8-4fe0-93d6-f5e024960c49-kube-api-access-j8dx5\") pod \"storage-provisioner\" (UID: \"113db0bf-89e8-4fe0-93d6-f5e024960c49\") " pod="kube-system/storage-provisioner"
	Apr 16 00:41:45 test-preload-618697 kubelet[1081]: I0416 00:41:45.564858    1081 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3f3ab60d-ec15-4898-8025-6e1000951d59-lib-modules\") pod \"kube-proxy-8qpfk\" (UID: \"3f3ab60d-ec15-4898-8025-6e1000951d59\") " pod="kube-system/kube-proxy-8qpfk"
	Apr 16 00:41:45 test-preload-618697 kubelet[1081]: I0416 00:41:45.564931    1081 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xln2t\" (UniqueName: \"kubernetes.io/projected/487b3379-ed23-4c7d-878e-7b8dd8a622ce-kube-api-access-xln2t\") pod \"coredns-6d4b75cb6d-2lxbc\" (UID: \"487b3379-ed23-4c7d-878e-7b8dd8a622ce\") " pod="kube-system/coredns-6d4b75cb6d-2lxbc"
	Apr 16 00:41:45 test-preload-618697 kubelet[1081]: I0416 00:41:45.564970    1081 reconciler.go:159] "Reconciler: start to sync state"
	Apr 16 00:41:45 test-preload-618697 kubelet[1081]: E0416 00:41:45.669076    1081 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 16 00:41:45 test-preload-618697 kubelet[1081]: E0416 00:41:45.669303    1081 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/487b3379-ed23-4c7d-878e-7b8dd8a622ce-config-volume podName:487b3379-ed23-4c7d-878e-7b8dd8a622ce nodeName:}" failed. No retries permitted until 2024-04-16 00:41:46.169192751 +0000 UTC m=+6.825628146 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/487b3379-ed23-4c7d-878e-7b8dd8a622ce-config-volume") pod "coredns-6d4b75cb6d-2lxbc" (UID: "487b3379-ed23-4c7d-878e-7b8dd8a622ce") : object "kube-system"/"coredns" not registered
	Apr 16 00:41:46 test-preload-618697 kubelet[1081]: E0416 00:41:46.173009    1081 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 16 00:41:46 test-preload-618697 kubelet[1081]: E0416 00:41:46.173088    1081 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/487b3379-ed23-4c7d-878e-7b8dd8a622ce-config-volume podName:487b3379-ed23-4c7d-878e-7b8dd8a622ce nodeName:}" failed. No retries permitted until 2024-04-16 00:41:47.173073829 +0000 UTC m=+7.829509217 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/487b3379-ed23-4c7d-878e-7b8dd8a622ce-config-volume") pod "coredns-6d4b75cb6d-2lxbc" (UID: "487b3379-ed23-4c7d-878e-7b8dd8a622ce") : object "kube-system"/"coredns" not registered
	Apr 16 00:41:47 test-preload-618697 kubelet[1081]: E0416 00:41:47.179579    1081 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 16 00:41:47 test-preload-618697 kubelet[1081]: E0416 00:41:47.179673    1081 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/487b3379-ed23-4c7d-878e-7b8dd8a622ce-config-volume podName:487b3379-ed23-4c7d-878e-7b8dd8a622ce nodeName:}" failed. No retries permitted until 2024-04-16 00:41:49.179657668 +0000 UTC m=+9.836093053 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/487b3379-ed23-4c7d-878e-7b8dd8a622ce-config-volume") pod "coredns-6d4b75cb6d-2lxbc" (UID: "487b3379-ed23-4c7d-878e-7b8dd8a622ce") : object "kube-system"/"coredns" not registered
	Apr 16 00:41:47 test-preload-618697 kubelet[1081]: E0416 00:41:47.584549    1081 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-2lxbc" podUID=487b3379-ed23-4c7d-878e-7b8dd8a622ce
	Apr 16 00:41:47 test-preload-618697 kubelet[1081]: I0416 00:41:47.589124    1081 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=c3291d5f-169f-4b59-9fb0-a6bd12f87237 path="/var/lib/kubelet/pods/c3291d5f-169f-4b59-9fb0-a6bd12f87237/volumes"
	Apr 16 00:41:49 test-preload-618697 kubelet[1081]: E0416 00:41:49.197598    1081 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Apr 16 00:41:49 test-preload-618697 kubelet[1081]: E0416 00:41:49.197693    1081 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/487b3379-ed23-4c7d-878e-7b8dd8a622ce-config-volume podName:487b3379-ed23-4c7d-878e-7b8dd8a622ce nodeName:}" failed. No retries permitted until 2024-04-16 00:41:53.197678141 +0000 UTC m=+13.854113515 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/487b3379-ed23-4c7d-878e-7b8dd8a622ce-config-volume") pod "coredns-6d4b75cb6d-2lxbc" (UID: "487b3379-ed23-4c7d-878e-7b8dd8a622ce") : object "kube-system"/"coredns" not registered
	
	
	==> storage-provisioner [16f11fbe01bcfee286fc933a3eb118052aba4ca190c77e2d646b68063521f7bd] <==
	I0416 00:41:46.280192       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-618697 -n test-preload-618697
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-618697 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-618697" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-618697
--- FAIL: TestPreload (193.56s)
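The kubelet entries above show MountVolume.SetUp for coredns' config-volume failing with 'object "kube-system"/"coredns" not registered', with the retry delay doubling from 500ms to 1s, 2s and 4s; that is the kubelet's volume manager backing off while its ConfigMap informer cache re-syncs after the restart. A minimal way to check that side of the failure by hand, while a cluster such as test-preload-618697 is still up (i.e. before the profile cleanup above), is a sketch along these lines:

	# Confirm the ConfigMap the kubelet could not resolve actually exists in the API:
	kubectl --context test-preload-618697 -n kube-system get configmap coredns -o yaml
	# Watch the coredns pod recover once the informer cache has synced
	# (k8s-app=kube-dns is the usual kubeadm label for coredns pods):
	kubectl --context test-preload-618697 -n kube-system get pods -l k8s-app=kube-dns -w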

                                                
                                    
TestKubernetesUpgrade (390.2s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-497059 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-497059 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (5m1.662698602s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-497059] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18647
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18647-7542/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18647-7542/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-497059" primary control-plane node in "kubernetes-upgrade-497059" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0416 00:44:01.674654   50071 out.go:291] Setting OutFile to fd 1 ...
	I0416 00:44:01.674760   50071 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:44:01.674771   50071 out.go:304] Setting ErrFile to fd 2...
	I0416 00:44:01.674778   50071 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:44:01.674977   50071 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-7542/.minikube/bin
	I0416 00:44:01.675516   50071 out.go:298] Setting JSON to false
	I0416 00:44:01.676385   50071 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5186,"bootTime":1713223056,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0416 00:44:01.676454   50071 start.go:139] virtualization: kvm guest
	I0416 00:44:01.678121   50071 out.go:177] * [kubernetes-upgrade-497059] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0416 00:44:01.680932   50071 out.go:177]   - MINIKUBE_LOCATION=18647
	I0416 00:44:01.679781   50071 notify.go:220] Checking for updates...
	I0416 00:44:01.683610   50071 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 00:44:01.686011   50071 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0416 00:44:01.687348   50071 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18647-7542/.minikube
	I0416 00:44:01.688598   50071 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0416 00:44:01.689935   50071 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 00:44:01.691274   50071 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 00:44:01.732274   50071 out.go:177] * Using the kvm2 driver based on user configuration
	I0416 00:44:01.733812   50071 start.go:297] selected driver: kvm2
	I0416 00:44:01.733830   50071 start.go:901] validating driver "kvm2" against <nil>
	I0416 00:44:01.733842   50071 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 00:44:01.734818   50071 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 00:44:01.749952   50071 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18647-7542/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0416 00:44:01.765464   50071 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0416 00:44:01.765519   50071 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0416 00:44:01.765710   50071 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0416 00:44:01.765773   50071 cni.go:84] Creating CNI manager for ""
	I0416 00:44:01.765786   50071 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 00:44:01.765793   50071 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0416 00:44:01.765841   50071 start.go:340] cluster config:
	{Name:kubernetes-upgrade-497059 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-497059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 00:44:01.765949   50071 iso.go:125] acquiring lock: {Name:mk848ef90fbc2a1876645fc8fc16af382c3bcaa9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 00:44:01.767614   50071 out.go:177] * Starting "kubernetes-upgrade-497059" primary control-plane node in "kubernetes-upgrade-497059" cluster
	I0416 00:44:01.768860   50071 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0416 00:44:01.768897   50071 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0416 00:44:01.768910   50071 cache.go:56] Caching tarball of preloaded images
	I0416 00:44:01.769016   50071 preload.go:173] Found /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0416 00:44:01.769028   50071 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0416 00:44:01.769346   50071 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/kubernetes-upgrade-497059/config.json ...
	I0416 00:44:01.769374   50071 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/kubernetes-upgrade-497059/config.json: {Name:mk2a22ef9af8f99cfc8264b927172e052803979c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 00:44:01.769529   50071 start.go:360] acquireMachinesLock for kubernetes-upgrade-497059: {Name:mk92bff49461487f8cebf2747ccf61ccb9c772a2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 00:44:28.938216   50071 start.go:364] duration metric: took 27.168664297s to acquireMachinesLock for "kubernetes-upgrade-497059"
	I0416 00:44:28.938336   50071 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-497059 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-497059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0416 00:44:28.938452   50071 start.go:125] createHost starting for "" (driver="kvm2")
	I0416 00:44:28.940592   50071 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0416 00:44:28.940784   50071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:44:28.940831   50071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:44:28.957946   50071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41935
	I0416 00:44:28.958319   50071 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:44:28.958925   50071 main.go:141] libmachine: Using API Version  1
	I0416 00:44:28.958961   50071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:44:28.959348   50071 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:44:28.959533   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetMachineName
	I0416 00:44:28.959696   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .DriverName
	I0416 00:44:28.959865   50071 start.go:159] libmachine.API.Create for "kubernetes-upgrade-497059" (driver="kvm2")
	I0416 00:44:28.959889   50071 client.go:168] LocalClient.Create starting
	I0416 00:44:28.959920   50071 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem
	I0416 00:44:28.959954   50071 main.go:141] libmachine: Decoding PEM data...
	I0416 00:44:28.959971   50071 main.go:141] libmachine: Parsing certificate...
	I0416 00:44:28.960032   50071 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem
	I0416 00:44:28.960057   50071 main.go:141] libmachine: Decoding PEM data...
	I0416 00:44:28.960073   50071 main.go:141] libmachine: Parsing certificate...
	I0416 00:44:28.960097   50071 main.go:141] libmachine: Running pre-create checks...
	I0416 00:44:28.960136   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .PreCreateCheck
	I0416 00:44:28.960568   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetConfigRaw
	I0416 00:44:28.961023   50071 main.go:141] libmachine: Creating machine...
	I0416 00:44:28.961047   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .Create
	I0416 00:44:28.961183   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Creating KVM machine...
	I0416 00:44:28.962523   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | found existing default KVM network
	I0416 00:44:28.963279   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | I0416 00:44:28.963149   50389 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:91:ca:b4} reservation:<nil>}
	I0416 00:44:28.963917   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | I0416 00:44:28.963837   50389 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000252330}
	I0416 00:44:28.963942   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | created network xml: 
	I0416 00:44:28.963955   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | <network>
	I0416 00:44:28.963965   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG |   <name>mk-kubernetes-upgrade-497059</name>
	I0416 00:44:28.963975   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG |   <dns enable='no'/>
	I0416 00:44:28.963986   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG |   
	I0416 00:44:28.963998   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0416 00:44:28.964011   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG |     <dhcp>
	I0416 00:44:28.964027   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0416 00:44:28.964038   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG |     </dhcp>
	I0416 00:44:28.964049   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG |   </ip>
	I0416 00:44:28.964061   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG |   
	I0416 00:44:28.964074   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | </network>
	I0416 00:44:28.964103   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | 
	I0416 00:44:28.969900   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | trying to create private KVM network mk-kubernetes-upgrade-497059 192.168.50.0/24...
	I0416 00:44:29.035318   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | private KVM network mk-kubernetes-upgrade-497059 192.168.50.0/24 created
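The <network> XML above is what libmachine feeds to libvirt for the profile's private network. As a sketch (assuming virsh is available on the host running the kvm2 driver), the network just created can be inspected directly:

	# List libvirt networks; mk-kubernetes-upgrade-497059 should appear as active
	virsh net-list --all
	# Dump the XML libvirt actually stored for the network created above
	virsh net-dumpxml mk-kubernetes-upgrade-497059
	# Show DHCP leases handed out on 192.168.50.0/24 (useful during the "Waiting to get IP" retries later in the log)
	virsh net-dhcp-leases mk-kubernetes-upgrade-497059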
	I0416 00:44:29.035355   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | I0416 00:44:29.035290   50389 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18647-7542/.minikube
	I0416 00:44:29.035382   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Setting up store path in /home/jenkins/minikube-integration/18647-7542/.minikube/machines/kubernetes-upgrade-497059 ...
	I0416 00:44:29.035405   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Building disk image from file:///home/jenkins/minikube-integration/18647-7542/.minikube/cache/iso/amd64/minikube-v1.33.0-1713175573-18634-amd64.iso
	I0416 00:44:29.035469   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Downloading /home/jenkins/minikube-integration/18647-7542/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18647-7542/.minikube/cache/iso/amd64/minikube-v1.33.0-1713175573-18634-amd64.iso...
	I0416 00:44:29.258598   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | I0416 00:44:29.258398   50389 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/kubernetes-upgrade-497059/id_rsa...
	I0416 00:44:29.314040   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | I0416 00:44:29.313937   50389 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/kubernetes-upgrade-497059/kubernetes-upgrade-497059.rawdisk...
	I0416 00:44:29.314074   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | Writing magic tar header
	I0416 00:44:29.314098   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | Writing SSH key tar header
	I0416 00:44:29.314118   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | I0416 00:44:29.314065   50389 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18647-7542/.minikube/machines/kubernetes-upgrade-497059 ...
	I0416 00:44:29.314270   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/kubernetes-upgrade-497059
	I0416 00:44:29.314318   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18647-7542/.minikube/machines
	I0416 00:44:29.314333   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Setting executable bit set on /home/jenkins/minikube-integration/18647-7542/.minikube/machines/kubernetes-upgrade-497059 (perms=drwx------)
	I0416 00:44:29.314347   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Setting executable bit set on /home/jenkins/minikube-integration/18647-7542/.minikube/machines (perms=drwxr-xr-x)
	I0416 00:44:29.314361   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Setting executable bit set on /home/jenkins/minikube-integration/18647-7542/.minikube (perms=drwxr-xr-x)
	I0416 00:44:29.314378   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Setting executable bit set on /home/jenkins/minikube-integration/18647-7542 (perms=drwxrwxr-x)
	I0416 00:44:29.314402   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0416 00:44:29.314412   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18647-7542/.minikube
	I0416 00:44:29.314422   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0416 00:44:29.314437   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Creating domain...
	I0416 00:44:29.314454   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18647-7542
	I0416 00:44:29.314466   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0416 00:44:29.314478   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | Checking permissions on dir: /home/jenkins
	I0416 00:44:29.314487   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | Checking permissions on dir: /home
	I0416 00:44:29.314497   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | Skipping /home - not owner
	I0416 00:44:29.315697   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) define libvirt domain using xml: 
	I0416 00:44:29.315723   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) <domain type='kvm'>
	I0416 00:44:29.315739   50071 main.go:141] libmachine: (kubernetes-upgrade-497059)   <name>kubernetes-upgrade-497059</name>
	I0416 00:44:29.315748   50071 main.go:141] libmachine: (kubernetes-upgrade-497059)   <memory unit='MiB'>2200</memory>
	I0416 00:44:29.315761   50071 main.go:141] libmachine: (kubernetes-upgrade-497059)   <vcpu>2</vcpu>
	I0416 00:44:29.315772   50071 main.go:141] libmachine: (kubernetes-upgrade-497059)   <features>
	I0416 00:44:29.315780   50071 main.go:141] libmachine: (kubernetes-upgrade-497059)     <acpi/>
	I0416 00:44:29.315795   50071 main.go:141] libmachine: (kubernetes-upgrade-497059)     <apic/>
	I0416 00:44:29.315811   50071 main.go:141] libmachine: (kubernetes-upgrade-497059)     <pae/>
	I0416 00:44:29.315822   50071 main.go:141] libmachine: (kubernetes-upgrade-497059)     
	I0416 00:44:29.315832   50071 main.go:141] libmachine: (kubernetes-upgrade-497059)   </features>
	I0416 00:44:29.315843   50071 main.go:141] libmachine: (kubernetes-upgrade-497059)   <cpu mode='host-passthrough'>
	I0416 00:44:29.315855   50071 main.go:141] libmachine: (kubernetes-upgrade-497059)   
	I0416 00:44:29.315865   50071 main.go:141] libmachine: (kubernetes-upgrade-497059)   </cpu>
	I0416 00:44:29.315873   50071 main.go:141] libmachine: (kubernetes-upgrade-497059)   <os>
	I0416 00:44:29.315889   50071 main.go:141] libmachine: (kubernetes-upgrade-497059)     <type>hvm</type>
	I0416 00:44:29.315902   50071 main.go:141] libmachine: (kubernetes-upgrade-497059)     <boot dev='cdrom'/>
	I0416 00:44:29.315913   50071 main.go:141] libmachine: (kubernetes-upgrade-497059)     <boot dev='hd'/>
	I0416 00:44:29.315925   50071 main.go:141] libmachine: (kubernetes-upgrade-497059)     <bootmenu enable='no'/>
	I0416 00:44:29.315934   50071 main.go:141] libmachine: (kubernetes-upgrade-497059)   </os>
	I0416 00:44:29.315943   50071 main.go:141] libmachine: (kubernetes-upgrade-497059)   <devices>
	I0416 00:44:29.315953   50071 main.go:141] libmachine: (kubernetes-upgrade-497059)     <disk type='file' device='cdrom'>
	I0416 00:44:29.315974   50071 main.go:141] libmachine: (kubernetes-upgrade-497059)       <source file='/home/jenkins/minikube-integration/18647-7542/.minikube/machines/kubernetes-upgrade-497059/boot2docker.iso'/>
	I0416 00:44:29.315989   50071 main.go:141] libmachine: (kubernetes-upgrade-497059)       <target dev='hdc' bus='scsi'/>
	I0416 00:44:29.316001   50071 main.go:141] libmachine: (kubernetes-upgrade-497059)       <readonly/>
	I0416 00:44:29.316011   50071 main.go:141] libmachine: (kubernetes-upgrade-497059)     </disk>
	I0416 00:44:29.316024   50071 main.go:141] libmachine: (kubernetes-upgrade-497059)     <disk type='file' device='disk'>
	I0416 00:44:29.316037   50071 main.go:141] libmachine: (kubernetes-upgrade-497059)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0416 00:44:29.316052   50071 main.go:141] libmachine: (kubernetes-upgrade-497059)       <source file='/home/jenkins/minikube-integration/18647-7542/.minikube/machines/kubernetes-upgrade-497059/kubernetes-upgrade-497059.rawdisk'/>
	I0416 00:44:29.316064   50071 main.go:141] libmachine: (kubernetes-upgrade-497059)       <target dev='hda' bus='virtio'/>
	I0416 00:44:29.316089   50071 main.go:141] libmachine: (kubernetes-upgrade-497059)     </disk>
	I0416 00:44:29.316117   50071 main.go:141] libmachine: (kubernetes-upgrade-497059)     <interface type='network'>
	I0416 00:44:29.316151   50071 main.go:141] libmachine: (kubernetes-upgrade-497059)       <source network='mk-kubernetes-upgrade-497059'/>
	I0416 00:44:29.316177   50071 main.go:141] libmachine: (kubernetes-upgrade-497059)       <model type='virtio'/>
	I0416 00:44:29.316192   50071 main.go:141] libmachine: (kubernetes-upgrade-497059)     </interface>
	I0416 00:44:29.316204   50071 main.go:141] libmachine: (kubernetes-upgrade-497059)     <interface type='network'>
	I0416 00:44:29.316216   50071 main.go:141] libmachine: (kubernetes-upgrade-497059)       <source network='default'/>
	I0416 00:44:29.316227   50071 main.go:141] libmachine: (kubernetes-upgrade-497059)       <model type='virtio'/>
	I0416 00:44:29.316240   50071 main.go:141] libmachine: (kubernetes-upgrade-497059)     </interface>
	I0416 00:44:29.316255   50071 main.go:141] libmachine: (kubernetes-upgrade-497059)     <serial type='pty'>
	I0416 00:44:29.316268   50071 main.go:141] libmachine: (kubernetes-upgrade-497059)       <target port='0'/>
	I0416 00:44:29.316278   50071 main.go:141] libmachine: (kubernetes-upgrade-497059)     </serial>
	I0416 00:44:29.316287   50071 main.go:141] libmachine: (kubernetes-upgrade-497059)     <console type='pty'>
	I0416 00:44:29.316299   50071 main.go:141] libmachine: (kubernetes-upgrade-497059)       <target type='serial' port='0'/>
	I0416 00:44:29.316311   50071 main.go:141] libmachine: (kubernetes-upgrade-497059)     </console>
	I0416 00:44:29.316339   50071 main.go:141] libmachine: (kubernetes-upgrade-497059)     <rng model='virtio'>
	I0416 00:44:29.316352   50071 main.go:141] libmachine: (kubernetes-upgrade-497059)       <backend model='random'>/dev/random</backend>
	I0416 00:44:29.316360   50071 main.go:141] libmachine: (kubernetes-upgrade-497059)     </rng>
	I0416 00:44:29.316373   50071 main.go:141] libmachine: (kubernetes-upgrade-497059)     
	I0416 00:44:29.316383   50071 main.go:141] libmachine: (kubernetes-upgrade-497059)     
	I0416 00:44:29.316394   50071 main.go:141] libmachine: (kubernetes-upgrade-497059)   </devices>
	I0416 00:44:29.316404   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) </domain>
	I0416 00:44:29.316415   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) 
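Likewise, once the <domain> defined above exists, its state and addresses can be checked with virsh while the driver is still polling for an IP below (a sketch, assuming a reasonably recent libvirt on the host):

	# Confirm the VM was defined and is running
	virsh domstate kubernetes-upgrade-497059
	# Addresses reported from the DHCP lease database; empty until the guest has booted,
	# which is why the driver keeps logging "waiting for machine to come up" below
	virsh domifaddr kubernetes-upgrade-497059 --source lease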
	I0416 00:44:29.320807   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | domain kubernetes-upgrade-497059 has defined MAC address 52:54:00:00:e9:ce in network default
	I0416 00:44:29.321609   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Ensuring networks are active...
	I0416 00:44:29.321639   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | domain kubernetes-upgrade-497059 has defined MAC address 52:54:00:6b:72:5a in network mk-kubernetes-upgrade-497059
	I0416 00:44:29.322372   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Ensuring network default is active
	I0416 00:44:29.322765   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Ensuring network mk-kubernetes-upgrade-497059 is active
	I0416 00:44:29.323252   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Getting domain xml...
	I0416 00:44:29.324112   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Creating domain...
	I0416 00:44:30.528553   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Waiting to get IP...
	I0416 00:44:30.529209   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | domain kubernetes-upgrade-497059 has defined MAC address 52:54:00:6b:72:5a in network mk-kubernetes-upgrade-497059
	I0416 00:44:30.529559   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | unable to find current IP address of domain kubernetes-upgrade-497059 in network mk-kubernetes-upgrade-497059
	I0416 00:44:30.529617   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | I0416 00:44:30.529551   50389 retry.go:31] will retry after 228.103667ms: waiting for machine to come up
	I0416 00:44:30.759384   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | domain kubernetes-upgrade-497059 has defined MAC address 52:54:00:6b:72:5a in network mk-kubernetes-upgrade-497059
	I0416 00:44:30.759691   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | unable to find current IP address of domain kubernetes-upgrade-497059 in network mk-kubernetes-upgrade-497059
	I0416 00:44:30.759723   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | I0416 00:44:30.759641   50389 retry.go:31] will retry after 285.855945ms: waiting for machine to come up
	I0416 00:44:31.047191   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | domain kubernetes-upgrade-497059 has defined MAC address 52:54:00:6b:72:5a in network mk-kubernetes-upgrade-497059
	I0416 00:44:31.047755   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | unable to find current IP address of domain kubernetes-upgrade-497059 in network mk-kubernetes-upgrade-497059
	I0416 00:44:31.047804   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | I0416 00:44:31.047716   50389 retry.go:31] will retry after 384.054335ms: waiting for machine to come up
	I0416 00:44:31.433320   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | domain kubernetes-upgrade-497059 has defined MAC address 52:54:00:6b:72:5a in network mk-kubernetes-upgrade-497059
	I0416 00:44:31.433830   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | unable to find current IP address of domain kubernetes-upgrade-497059 in network mk-kubernetes-upgrade-497059
	I0416 00:44:31.433877   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | I0416 00:44:31.433729   50389 retry.go:31] will retry after 530.825146ms: waiting for machine to come up
	I0416 00:44:31.966833   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | domain kubernetes-upgrade-497059 has defined MAC address 52:54:00:6b:72:5a in network mk-kubernetes-upgrade-497059
	I0416 00:44:31.967414   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | unable to find current IP address of domain kubernetes-upgrade-497059 in network mk-kubernetes-upgrade-497059
	I0416 00:44:31.967443   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | I0416 00:44:31.967371   50389 retry.go:31] will retry after 719.224007ms: waiting for machine to come up
	I0416 00:44:32.688373   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | domain kubernetes-upgrade-497059 has defined MAC address 52:54:00:6b:72:5a in network mk-kubernetes-upgrade-497059
	I0416 00:44:32.688763   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | unable to find current IP address of domain kubernetes-upgrade-497059 in network mk-kubernetes-upgrade-497059
	I0416 00:44:32.688792   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | I0416 00:44:32.688717   50389 retry.go:31] will retry after 634.376056ms: waiting for machine to come up
	I0416 00:44:33.324856   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | domain kubernetes-upgrade-497059 has defined MAC address 52:54:00:6b:72:5a in network mk-kubernetes-upgrade-497059
	I0416 00:44:33.325402   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | unable to find current IP address of domain kubernetes-upgrade-497059 in network mk-kubernetes-upgrade-497059
	I0416 00:44:33.325436   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | I0416 00:44:33.325355   50389 retry.go:31] will retry after 923.620272ms: waiting for machine to come up
	I0416 00:44:34.250845   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | domain kubernetes-upgrade-497059 has defined MAC address 52:54:00:6b:72:5a in network mk-kubernetes-upgrade-497059
	I0416 00:44:34.251280   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | unable to find current IP address of domain kubernetes-upgrade-497059 in network mk-kubernetes-upgrade-497059
	I0416 00:44:34.251344   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | I0416 00:44:34.251245   50389 retry.go:31] will retry after 979.035095ms: waiting for machine to come up
	I0416 00:44:35.231613   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | domain kubernetes-upgrade-497059 has defined MAC address 52:54:00:6b:72:5a in network mk-kubernetes-upgrade-497059
	I0416 00:44:35.232021   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | unable to find current IP address of domain kubernetes-upgrade-497059 in network mk-kubernetes-upgrade-497059
	I0416 00:44:35.232053   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | I0416 00:44:35.231970   50389 retry.go:31] will retry after 1.327976194s: waiting for machine to come up
	I0416 00:44:36.561379   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | domain kubernetes-upgrade-497059 has defined MAC address 52:54:00:6b:72:5a in network mk-kubernetes-upgrade-497059
	I0416 00:44:36.561862   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | unable to find current IP address of domain kubernetes-upgrade-497059 in network mk-kubernetes-upgrade-497059
	I0416 00:44:36.561896   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | I0416 00:44:36.561802   50389 retry.go:31] will retry after 1.579659998s: waiting for machine to come up
	I0416 00:44:38.143579   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | domain kubernetes-upgrade-497059 has defined MAC address 52:54:00:6b:72:5a in network mk-kubernetes-upgrade-497059
	I0416 00:44:38.144044   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | unable to find current IP address of domain kubernetes-upgrade-497059 in network mk-kubernetes-upgrade-497059
	I0416 00:44:38.144073   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | I0416 00:44:38.143999   50389 retry.go:31] will retry after 2.899951663s: waiting for machine to come up
	I0416 00:44:41.046445   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | domain kubernetes-upgrade-497059 has defined MAC address 52:54:00:6b:72:5a in network mk-kubernetes-upgrade-497059
	I0416 00:44:41.046913   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | unable to find current IP address of domain kubernetes-upgrade-497059 in network mk-kubernetes-upgrade-497059
	I0416 00:44:41.046959   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | I0416 00:44:41.046897   50389 retry.go:31] will retry after 2.18997376s: waiting for machine to come up
	I0416 00:44:43.238523   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | domain kubernetes-upgrade-497059 has defined MAC address 52:54:00:6b:72:5a in network mk-kubernetes-upgrade-497059
	I0416 00:44:43.238998   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | unable to find current IP address of domain kubernetes-upgrade-497059 in network mk-kubernetes-upgrade-497059
	I0416 00:44:43.239029   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | I0416 00:44:43.238955   50389 retry.go:31] will retry after 3.919222457s: waiting for machine to come up
	I0416 00:44:47.162173   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | domain kubernetes-upgrade-497059 has defined MAC address 52:54:00:6b:72:5a in network mk-kubernetes-upgrade-497059
	I0416 00:44:47.162567   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | unable to find current IP address of domain kubernetes-upgrade-497059 in network mk-kubernetes-upgrade-497059
	I0416 00:44:47.162595   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | I0416 00:44:47.162512   50389 retry.go:31] will retry after 5.444047939s: waiting for machine to come up
	I0416 00:44:52.611276   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | domain kubernetes-upgrade-497059 has defined MAC address 52:54:00:6b:72:5a in network mk-kubernetes-upgrade-497059
	I0416 00:44:52.611802   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Found IP for machine: 192.168.50.223
	I0416 00:44:52.611833   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | domain kubernetes-upgrade-497059 has current primary IP address 192.168.50.223 and MAC address 52:54:00:6b:72:5a in network mk-kubernetes-upgrade-497059
	I0416 00:44:52.611842   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Reserving static IP address...
	I0416 00:44:52.612210   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-497059", mac: "52:54:00:6b:72:5a", ip: "192.168.50.223"} in network mk-kubernetes-upgrade-497059
	I0416 00:44:52.680735   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | Getting to WaitForSSH function...
	I0416 00:44:52.680768   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Reserved static IP address: 192.168.50.223
	I0416 00:44:52.680782   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Waiting for SSH to be available...
	I0416 00:44:52.683305   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | domain kubernetes-upgrade-497059 has defined MAC address 52:54:00:6b:72:5a in network mk-kubernetes-upgrade-497059
	I0416 00:44:52.683618   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:6b:72:5a", ip: ""} in network mk-kubernetes-upgrade-497059
	I0416 00:44:52.683648   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | unable to find defined IP address of network mk-kubernetes-upgrade-497059 interface with MAC address 52:54:00:6b:72:5a
	I0416 00:44:52.683864   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | Using SSH client type: external
	I0416 00:44:52.683886   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | Using SSH private key: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/kubernetes-upgrade-497059/id_rsa (-rw-------)
	I0416 00:44:52.683911   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18647-7542/.minikube/machines/kubernetes-upgrade-497059/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0416 00:44:52.683935   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | About to run SSH command:
	I0416 00:44:52.683949   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | exit 0
	I0416 00:44:52.687323   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | SSH cmd err, output: exit status 255: 
	I0416 00:44:52.687343   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0416 00:44:52.687355   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | command : exit 0
	I0416 00:44:52.687364   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | err     : exit status 255
	I0416 00:44:52.687381   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | output  : 
	I0416 00:44:55.688696   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | Getting to WaitForSSH function...
	I0416 00:44:55.691025   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | domain kubernetes-upgrade-497059 has defined MAC address 52:54:00:6b:72:5a in network mk-kubernetes-upgrade-497059
	I0416 00:44:55.691484   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:72:5a", ip: ""} in network mk-kubernetes-upgrade-497059: {Iface:virbr2 ExpiryTime:2024-04-16 01:44:44 +0000 UTC Type:0 Mac:52:54:00:6b:72:5a Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:kubernetes-upgrade-497059 Clientid:01:52:54:00:6b:72:5a}
	I0416 00:44:55.691517   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | domain kubernetes-upgrade-497059 has defined IP address 192.168.50.223 and MAC address 52:54:00:6b:72:5a in network mk-kubernetes-upgrade-497059
	I0416 00:44:55.691625   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | Using SSH client type: external
	I0416 00:44:55.691654   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | Using SSH private key: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/kubernetes-upgrade-497059/id_rsa (-rw-------)
	I0416 00:44:55.691694   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.223 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18647-7542/.minikube/machines/kubernetes-upgrade-497059/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0416 00:44:55.691714   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | About to run SSH command:
	I0416 00:44:55.691731   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | exit 0
	I0416 00:44:55.813020   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | SSH cmd err, output: <nil>: 
	I0416 00:44:55.813302   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) KVM machine creation complete!
	I0416 00:44:55.813613   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetConfigRaw
	I0416 00:44:55.814178   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .DriverName
	I0416 00:44:55.814398   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .DriverName
	I0416 00:44:55.814600   50071 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0416 00:44:55.814618   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetState
	I0416 00:44:55.816113   50071 main.go:141] libmachine: Detecting operating system of created instance...
	I0416 00:44:55.816128   50071 main.go:141] libmachine: Waiting for SSH to be available...
	I0416 00:44:55.816135   50071 main.go:141] libmachine: Getting to WaitForSSH function...
	I0416 00:44:55.816142   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetSSHHostname
	I0416 00:44:55.818228   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | domain kubernetes-upgrade-497059 has defined MAC address 52:54:00:6b:72:5a in network mk-kubernetes-upgrade-497059
	I0416 00:44:55.818540   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:72:5a", ip: ""} in network mk-kubernetes-upgrade-497059: {Iface:virbr2 ExpiryTime:2024-04-16 01:44:44 +0000 UTC Type:0 Mac:52:54:00:6b:72:5a Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:kubernetes-upgrade-497059 Clientid:01:52:54:00:6b:72:5a}
	I0416 00:44:55.818576   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | domain kubernetes-upgrade-497059 has defined IP address 192.168.50.223 and MAC address 52:54:00:6b:72:5a in network mk-kubernetes-upgrade-497059
	I0416 00:44:55.818654   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetSSHPort
	I0416 00:44:55.818822   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetSSHKeyPath
	I0416 00:44:55.818981   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetSSHKeyPath
	I0416 00:44:55.819113   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetSSHUsername
	I0416 00:44:55.819236   50071 main.go:141] libmachine: Using SSH client type: native
	I0416 00:44:55.819434   50071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.223 22 <nil> <nil>}
	I0416 00:44:55.819448   50071 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0416 00:44:55.916471   50071 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 00:44:55.916498   50071 main.go:141] libmachine: Detecting the provisioner...
	I0416 00:44:55.916509   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetSSHHostname
	I0416 00:44:55.920442   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | domain kubernetes-upgrade-497059 has defined MAC address 52:54:00:6b:72:5a in network mk-kubernetes-upgrade-497059
	I0416 00:44:55.920852   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:72:5a", ip: ""} in network mk-kubernetes-upgrade-497059: {Iface:virbr2 ExpiryTime:2024-04-16 01:44:44 +0000 UTC Type:0 Mac:52:54:00:6b:72:5a Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:kubernetes-upgrade-497059 Clientid:01:52:54:00:6b:72:5a}
	I0416 00:44:55.920894   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | domain kubernetes-upgrade-497059 has defined IP address 192.168.50.223 and MAC address 52:54:00:6b:72:5a in network mk-kubernetes-upgrade-497059
	I0416 00:44:55.921065   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetSSHPort
	I0416 00:44:55.921282   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetSSHKeyPath
	I0416 00:44:55.921431   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetSSHKeyPath
	I0416 00:44:55.921562   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetSSHUsername
	I0416 00:44:55.921740   50071 main.go:141] libmachine: Using SSH client type: native
	I0416 00:44:55.921919   50071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.223 22 <nil> <nil>}
	I0416 00:44:55.921931   50071 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0416 00:44:56.022292   50071 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0416 00:44:56.022381   50071 main.go:141] libmachine: found compatible host: buildroot
	I0416 00:44:56.022395   50071 main.go:141] libmachine: Provisioning with buildroot...
	I0416 00:44:56.022406   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetMachineName
	I0416 00:44:56.022646   50071 buildroot.go:166] provisioning hostname "kubernetes-upgrade-497059"
	I0416 00:44:56.022664   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetMachineName
	I0416 00:44:56.022849   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetSSHHostname
	I0416 00:44:56.025288   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | domain kubernetes-upgrade-497059 has defined MAC address 52:54:00:6b:72:5a in network mk-kubernetes-upgrade-497059
	I0416 00:44:56.025631   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:72:5a", ip: ""} in network mk-kubernetes-upgrade-497059: {Iface:virbr2 ExpiryTime:2024-04-16 01:44:44 +0000 UTC Type:0 Mac:52:54:00:6b:72:5a Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:kubernetes-upgrade-497059 Clientid:01:52:54:00:6b:72:5a}
	I0416 00:44:56.025660   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | domain kubernetes-upgrade-497059 has defined IP address 192.168.50.223 and MAC address 52:54:00:6b:72:5a in network mk-kubernetes-upgrade-497059
	I0416 00:44:56.025842   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetSSHPort
	I0416 00:44:56.026060   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetSSHKeyPath
	I0416 00:44:56.026260   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetSSHKeyPath
	I0416 00:44:56.026424   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetSSHUsername
	I0416 00:44:56.026602   50071 main.go:141] libmachine: Using SSH client type: native
	I0416 00:44:56.026819   50071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.223 22 <nil> <nil>}
	I0416 00:44:56.026834   50071 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-497059 && echo "kubernetes-upgrade-497059" | sudo tee /etc/hostname
	I0416 00:44:56.145954   50071 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-497059
	
	I0416 00:44:56.145989   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetSSHHostname
	I0416 00:44:56.148928   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | domain kubernetes-upgrade-497059 has defined MAC address 52:54:00:6b:72:5a in network mk-kubernetes-upgrade-497059
	I0416 00:44:56.149317   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:72:5a", ip: ""} in network mk-kubernetes-upgrade-497059: {Iface:virbr2 ExpiryTime:2024-04-16 01:44:44 +0000 UTC Type:0 Mac:52:54:00:6b:72:5a Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:kubernetes-upgrade-497059 Clientid:01:52:54:00:6b:72:5a}
	I0416 00:44:56.149372   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | domain kubernetes-upgrade-497059 has defined IP address 192.168.50.223 and MAC address 52:54:00:6b:72:5a in network mk-kubernetes-upgrade-497059
	I0416 00:44:56.149710   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetSSHPort
	I0416 00:44:56.149911   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetSSHKeyPath
	I0416 00:44:56.150104   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetSSHKeyPath
	I0416 00:44:56.150301   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetSSHUsername
	I0416 00:44:56.150493   50071 main.go:141] libmachine: Using SSH client type: native
	I0416 00:44:56.150676   50071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.223 22 <nil> <nil>}
	I0416 00:44:56.150699   50071 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-497059' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-497059/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-497059' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 00:44:56.257924   50071 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 00:44:56.257951   50071 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18647-7542/.minikube CaCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18647-7542/.minikube}
	I0416 00:44:56.257968   50071 buildroot.go:174] setting up certificates
	I0416 00:44:56.257976   50071 provision.go:84] configureAuth start
	I0416 00:44:56.257984   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetMachineName
	I0416 00:44:56.258294   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetIP
	I0416 00:44:56.260891   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | domain kubernetes-upgrade-497059 has defined MAC address 52:54:00:6b:72:5a in network mk-kubernetes-upgrade-497059
	I0416 00:44:56.261360   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:72:5a", ip: ""} in network mk-kubernetes-upgrade-497059: {Iface:virbr2 ExpiryTime:2024-04-16 01:44:44 +0000 UTC Type:0 Mac:52:54:00:6b:72:5a Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:kubernetes-upgrade-497059 Clientid:01:52:54:00:6b:72:5a}
	I0416 00:44:56.261388   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | domain kubernetes-upgrade-497059 has defined IP address 192.168.50.223 and MAC address 52:54:00:6b:72:5a in network mk-kubernetes-upgrade-497059
	I0416 00:44:56.261558   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetSSHHostname
	I0416 00:44:56.263893   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | domain kubernetes-upgrade-497059 has defined MAC address 52:54:00:6b:72:5a in network mk-kubernetes-upgrade-497059
	I0416 00:44:56.264190   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:72:5a", ip: ""} in network mk-kubernetes-upgrade-497059: {Iface:virbr2 ExpiryTime:2024-04-16 01:44:44 +0000 UTC Type:0 Mac:52:54:00:6b:72:5a Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:kubernetes-upgrade-497059 Clientid:01:52:54:00:6b:72:5a}
	I0416 00:44:56.264220   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | domain kubernetes-upgrade-497059 has defined IP address 192.168.50.223 and MAC address 52:54:00:6b:72:5a in network mk-kubernetes-upgrade-497059
	I0416 00:44:56.264317   50071 provision.go:143] copyHostCerts
	I0416 00:44:56.264368   50071 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem, removing ...
	I0416 00:44:56.264384   50071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem
	I0416 00:44:56.264435   50071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem (1123 bytes)
	I0416 00:44:56.264526   50071 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem, removing ...
	I0416 00:44:56.264534   50071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem
	I0416 00:44:56.264552   50071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem (1675 bytes)
	I0416 00:44:56.264618   50071 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem, removing ...
	I0416 00:44:56.264625   50071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem
	I0416 00:44:56.264645   50071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem (1082 bytes)
	I0416 00:44:56.264700   50071 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-497059 san=[127.0.0.1 192.168.50.223 kubernetes-upgrade-497059 localhost minikube]
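(Illustration, not part of the log: the provision.go line above reports generating a server certificate signed by the minikube CA with the SANs [127.0.0.1 192.168.50.223 kubernetes-upgrade-497059 localhost minikube]. The following self-contained Go sketch shows roughly how such a cert can be issued with the standard crypto/x509 package; it is not minikube's provisioning code, and the key sizes, validity window, and subject names are arbitrary assumptions.)

// servercert.go - minimal sketch of "generating server cert" with a SAN list.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// In the real flow the CA key/cert are loaded from ca.pem/ca-key.pem;
	// here a throwaway CA is generated so the sketch is self-contained.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs reported in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-497059"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.223")},
		DNSNames:     []string{"kubernetes-upgrade-497059", "localhost", "minikube"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	// Print the resulting server certificate in PEM form (errors elided in this sketch).
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}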
	I0416 00:44:56.436221   50071 provision.go:177] copyRemoteCerts
	I0416 00:44:56.436284   50071 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 00:44:56.436307   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetSSHHostname
	I0416 00:44:56.439087   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | domain kubernetes-upgrade-497059 has defined MAC address 52:54:00:6b:72:5a in network mk-kubernetes-upgrade-497059
	I0416 00:44:56.439580   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:72:5a", ip: ""} in network mk-kubernetes-upgrade-497059: {Iface:virbr2 ExpiryTime:2024-04-16 01:44:44 +0000 UTC Type:0 Mac:52:54:00:6b:72:5a Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:kubernetes-upgrade-497059 Clientid:01:52:54:00:6b:72:5a}
	I0416 00:44:56.439625   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | domain kubernetes-upgrade-497059 has defined IP address 192.168.50.223 and MAC address 52:54:00:6b:72:5a in network mk-kubernetes-upgrade-497059
	I0416 00:44:56.439787   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetSSHPort
	I0416 00:44:56.439988   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetSSHKeyPath
	I0416 00:44:56.440168   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetSSHUsername
	I0416 00:44:56.440286   50071 sshutil.go:53] new ssh client: &{IP:192.168.50.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/kubernetes-upgrade-497059/id_rsa Username:docker}
	I0416 00:44:56.520494   50071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0416 00:44:56.546499   50071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0416 00:44:56.572049   50071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0416 00:44:56.603186   50071 provision.go:87] duration metric: took 345.195736ms to configureAuth
	I0416 00:44:56.603216   50071 buildroot.go:189] setting minikube options for container-runtime
	I0416 00:44:56.603377   50071 config.go:182] Loaded profile config "kubernetes-upgrade-497059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0416 00:44:56.603450   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetSSHHostname
	I0416 00:44:56.606302   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | domain kubernetes-upgrade-497059 has defined MAC address 52:54:00:6b:72:5a in network mk-kubernetes-upgrade-497059
	I0416 00:44:56.606639   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:72:5a", ip: ""} in network mk-kubernetes-upgrade-497059: {Iface:virbr2 ExpiryTime:2024-04-16 01:44:44 +0000 UTC Type:0 Mac:52:54:00:6b:72:5a Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:kubernetes-upgrade-497059 Clientid:01:52:54:00:6b:72:5a}
	I0416 00:44:56.606682   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | domain kubernetes-upgrade-497059 has defined IP address 192.168.50.223 and MAC address 52:54:00:6b:72:5a in network mk-kubernetes-upgrade-497059
	I0416 00:44:56.606906   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetSSHPort
	I0416 00:44:56.607101   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetSSHKeyPath
	I0416 00:44:56.607270   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetSSHKeyPath
	I0416 00:44:56.607397   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetSSHUsername
	I0416 00:44:56.607569   50071 main.go:141] libmachine: Using SSH client type: native
	I0416 00:44:56.607793   50071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.223 22 <nil> <nil>}
	I0416 00:44:56.607816   50071 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0416 00:44:56.866302   50071 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0416 00:44:56.866329   50071 main.go:141] libmachine: Checking connection to Docker...
	I0416 00:44:56.866337   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetURL
	I0416 00:44:56.867441   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | Using libvirt version 6000000
	I0416 00:44:56.869744   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | domain kubernetes-upgrade-497059 has defined MAC address 52:54:00:6b:72:5a in network mk-kubernetes-upgrade-497059
	I0416 00:44:56.870120   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:72:5a", ip: ""} in network mk-kubernetes-upgrade-497059: {Iface:virbr2 ExpiryTime:2024-04-16 01:44:44 +0000 UTC Type:0 Mac:52:54:00:6b:72:5a Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:kubernetes-upgrade-497059 Clientid:01:52:54:00:6b:72:5a}
	I0416 00:44:56.870151   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | domain kubernetes-upgrade-497059 has defined IP address 192.168.50.223 and MAC address 52:54:00:6b:72:5a in network mk-kubernetes-upgrade-497059
	I0416 00:44:56.870315   50071 main.go:141] libmachine: Docker is up and running!
	I0416 00:44:56.870332   50071 main.go:141] libmachine: Reticulating splines...
	I0416 00:44:56.870338   50071 client.go:171] duration metric: took 27.910443096s to LocalClient.Create
	I0416 00:44:56.870363   50071 start.go:167] duration metric: took 27.910498001s to libmachine.API.Create "kubernetes-upgrade-497059"
	I0416 00:44:56.870374   50071 start.go:293] postStartSetup for "kubernetes-upgrade-497059" (driver="kvm2")
	I0416 00:44:56.870390   50071 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 00:44:56.870413   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .DriverName
	I0416 00:44:56.870643   50071 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 00:44:56.870671   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetSSHHostname
	I0416 00:44:56.872835   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | domain kubernetes-upgrade-497059 has defined MAC address 52:54:00:6b:72:5a in network mk-kubernetes-upgrade-497059
	I0416 00:44:56.873128   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:72:5a", ip: ""} in network mk-kubernetes-upgrade-497059: {Iface:virbr2 ExpiryTime:2024-04-16 01:44:44 +0000 UTC Type:0 Mac:52:54:00:6b:72:5a Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:kubernetes-upgrade-497059 Clientid:01:52:54:00:6b:72:5a}
	I0416 00:44:56.873178   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | domain kubernetes-upgrade-497059 has defined IP address 192.168.50.223 and MAC address 52:54:00:6b:72:5a in network mk-kubernetes-upgrade-497059
	I0416 00:44:56.873283   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetSSHPort
	I0416 00:44:56.873503   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetSSHKeyPath
	I0416 00:44:56.873663   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetSSHUsername
	I0416 00:44:56.873818   50071 sshutil.go:53] new ssh client: &{IP:192.168.50.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/kubernetes-upgrade-497059/id_rsa Username:docker}
	I0416 00:44:56.952222   50071 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 00:44:56.959091   50071 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 00:44:56.959121   50071 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/addons for local assets ...
	I0416 00:44:56.959196   50071 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/files for local assets ...
	I0416 00:44:56.959285   50071 filesync.go:149] local asset: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem -> 148972.pem in /etc/ssl/certs
	I0416 00:44:56.959414   50071 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 00:44:56.971162   50071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /etc/ssl/certs/148972.pem (1708 bytes)
	I0416 00:44:56.998035   50071 start.go:296] duration metric: took 127.630698ms for postStartSetup
	I0416 00:44:56.998107   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetConfigRaw
	I0416 00:44:56.998627   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetIP
	I0416 00:44:57.001215   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | domain kubernetes-upgrade-497059 has defined MAC address 52:54:00:6b:72:5a in network mk-kubernetes-upgrade-497059
	I0416 00:44:57.001582   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:72:5a", ip: ""} in network mk-kubernetes-upgrade-497059: {Iface:virbr2 ExpiryTime:2024-04-16 01:44:44 +0000 UTC Type:0 Mac:52:54:00:6b:72:5a Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:kubernetes-upgrade-497059 Clientid:01:52:54:00:6b:72:5a}
	I0416 00:44:57.001604   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | domain kubernetes-upgrade-497059 has defined IP address 192.168.50.223 and MAC address 52:54:00:6b:72:5a in network mk-kubernetes-upgrade-497059
	I0416 00:44:57.001871   50071 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/kubernetes-upgrade-497059/config.json ...
	I0416 00:44:57.002082   50071 start.go:128] duration metric: took 28.063616945s to createHost
	I0416 00:44:57.002110   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetSSHHostname
	I0416 00:44:57.004264   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | domain kubernetes-upgrade-497059 has defined MAC address 52:54:00:6b:72:5a in network mk-kubernetes-upgrade-497059
	I0416 00:44:57.004613   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:72:5a", ip: ""} in network mk-kubernetes-upgrade-497059: {Iface:virbr2 ExpiryTime:2024-04-16 01:44:44 +0000 UTC Type:0 Mac:52:54:00:6b:72:5a Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:kubernetes-upgrade-497059 Clientid:01:52:54:00:6b:72:5a}
	I0416 00:44:57.004653   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | domain kubernetes-upgrade-497059 has defined IP address 192.168.50.223 and MAC address 52:54:00:6b:72:5a in network mk-kubernetes-upgrade-497059
	I0416 00:44:57.004784   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetSSHPort
	I0416 00:44:57.004950   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetSSHKeyPath
	I0416 00:44:57.005127   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetSSHKeyPath
	I0416 00:44:57.005288   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetSSHUsername
	I0416 00:44:57.005525   50071 main.go:141] libmachine: Using SSH client type: native
	I0416 00:44:57.005683   50071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.223 22 <nil> <nil>}
	I0416 00:44:57.005694   50071 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0416 00:44:57.109863   50071 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713228297.086964077
	
	I0416 00:44:57.109889   50071 fix.go:216] guest clock: 1713228297.086964077
	I0416 00:44:57.109899   50071 fix.go:229] Guest: 2024-04-16 00:44:57.086964077 +0000 UTC Remote: 2024-04-16 00:44:57.002096601 +0000 UTC m=+55.397367254 (delta=84.867476ms)
	I0416 00:44:57.109920   50071 fix.go:200] guest clock delta is within tolerance: 84.867476ms
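(Illustration, not part of the log: the fix.go lines above read the guest clock with `date +%s.%N` and accept the machine when the guest/host delta is within a tolerance. A minimal Go sketch of that arithmetic follows; it is not minikube's implementation, and the one-second tolerance is an assumption for the example.)

// clockdelta.go - minimal sketch of the guest-clock delta check.
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns `date +%s.%N` output (e.g. "1713228297.086964077")
// into a time.Time. float64 precision is coarser than nanoseconds, which is
// fine for a sketch whose tolerance is measured in milliseconds or more.
func parseGuestClock(out string) (time.Time, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
	if err != nil {
		return time.Time{}, err
	}
	sec := int64(secs)
	nsec := int64((secs - float64(sec)) * 1e9)
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1713228297.086964077")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)                // host "now" minus guest timestamp
	const tolerance = time.Second             // hypothetical tolerance, not minikube's setting
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance\n", delta)
	}
}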
	I0416 00:44:57.109927   50071 start.go:83] releasing machines lock for "kubernetes-upgrade-497059", held for 28.171627311s
	I0416 00:44:57.109956   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .DriverName
	I0416 00:44:57.110275   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetIP
	I0416 00:44:57.113328   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | domain kubernetes-upgrade-497059 has defined MAC address 52:54:00:6b:72:5a in network mk-kubernetes-upgrade-497059
	I0416 00:44:57.113715   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:72:5a", ip: ""} in network mk-kubernetes-upgrade-497059: {Iface:virbr2 ExpiryTime:2024-04-16 01:44:44 +0000 UTC Type:0 Mac:52:54:00:6b:72:5a Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:kubernetes-upgrade-497059 Clientid:01:52:54:00:6b:72:5a}
	I0416 00:44:57.113741   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | domain kubernetes-upgrade-497059 has defined IP address 192.168.50.223 and MAC address 52:54:00:6b:72:5a in network mk-kubernetes-upgrade-497059
	I0416 00:44:57.113904   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .DriverName
	I0416 00:44:57.114449   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .DriverName
	I0416 00:44:57.114640   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .DriverName
	I0416 00:44:57.114743   50071 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 00:44:57.114786   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetSSHHostname
	I0416 00:44:57.114827   50071 ssh_runner.go:195] Run: cat /version.json
	I0416 00:44:57.114851   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetSSHHostname
	I0416 00:44:57.117553   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | domain kubernetes-upgrade-497059 has defined MAC address 52:54:00:6b:72:5a in network mk-kubernetes-upgrade-497059
	I0416 00:44:57.117756   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | domain kubernetes-upgrade-497059 has defined MAC address 52:54:00:6b:72:5a in network mk-kubernetes-upgrade-497059
	I0416 00:44:57.117866   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:72:5a", ip: ""} in network mk-kubernetes-upgrade-497059: {Iface:virbr2 ExpiryTime:2024-04-16 01:44:44 +0000 UTC Type:0 Mac:52:54:00:6b:72:5a Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:kubernetes-upgrade-497059 Clientid:01:52:54:00:6b:72:5a}
	I0416 00:44:57.117901   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | domain kubernetes-upgrade-497059 has defined IP address 192.168.50.223 and MAC address 52:54:00:6b:72:5a in network mk-kubernetes-upgrade-497059
	I0416 00:44:57.118060   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetSSHPort
	I0416 00:44:57.118184   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:72:5a", ip: ""} in network mk-kubernetes-upgrade-497059: {Iface:virbr2 ExpiryTime:2024-04-16 01:44:44 +0000 UTC Type:0 Mac:52:54:00:6b:72:5a Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:kubernetes-upgrade-497059 Clientid:01:52:54:00:6b:72:5a}
	I0416 00:44:57.118221   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetSSHKeyPath
	I0416 00:44:57.118222   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | domain kubernetes-upgrade-497059 has defined IP address 192.168.50.223 and MAC address 52:54:00:6b:72:5a in network mk-kubernetes-upgrade-497059
	I0416 00:44:57.118373   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetSSHUsername
	I0416 00:44:57.118481   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetSSHPort
	I0416 00:44:57.118547   50071 sshutil.go:53] new ssh client: &{IP:192.168.50.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/kubernetes-upgrade-497059/id_rsa Username:docker}
	I0416 00:44:57.118608   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetSSHKeyPath
	I0416 00:44:57.118763   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetSSHUsername
	I0416 00:44:57.118917   50071 sshutil.go:53] new ssh client: &{IP:192.168.50.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/kubernetes-upgrade-497059/id_rsa Username:docker}
	I0416 00:44:57.199601   50071 ssh_runner.go:195] Run: systemctl --version
	I0416 00:44:57.235535   50071 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0416 00:44:57.409321   50071 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 00:44:57.419526   50071 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 00:44:57.419609   50071 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 00:44:57.438273   50071 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 00:44:57.438295   50071 start.go:494] detecting cgroup driver to use...
	I0416 00:44:57.438362   50071 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 00:44:57.456913   50071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 00:44:57.471498   50071 docker.go:217] disabling cri-docker service (if available) ...
	I0416 00:44:57.471552   50071 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 00:44:57.490846   50071 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 00:44:57.508592   50071 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 00:44:57.633962   50071 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 00:44:57.825524   50071 docker.go:233] disabling docker service ...
	I0416 00:44:57.825611   50071 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 00:44:57.841501   50071 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 00:44:57.854019   50071 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 00:44:57.992535   50071 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 00:44:58.117899   50071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0416 00:44:58.132595   50071 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 00:44:58.153591   50071 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0416 00:44:58.153672   50071 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:44:58.166229   50071 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 00:44:58.166280   50071 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:44:58.177488   50071 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:44:58.190204   50071 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:44:58.203751   50071 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 00:44:58.215617   50071 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 00:44:58.227678   50071 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0416 00:44:58.227753   50071 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0416 00:44:58.243189   50071 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
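(Illustration, not part of the log: the crio.go lines above show the fallback taken when the bridge netfilter sysctl cannot be read, namely loading br_netfilter and carrying on. A minimal Go sketch of that check-then-modprobe pattern, assuming sudo and the same command names the log reports:)

// netfilter.go - minimal sketch of the bridge-nf-call-iptables fallback.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Try the sysctl first; a non-zero exit usually means br_netfilter is not loaded.
	if out, err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").CombinedOutput(); err != nil {
		fmt.Printf("sysctl check failed (%v), loading br_netfilter: %s", err, out)
		if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe failed: %v: %s", err, out)
			return
		}
	}
	fmt.Println("bridge netfilter available")
}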
	I0416 00:44:58.254701   50071 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 00:44:58.392042   50071 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0416 00:44:58.552798   50071 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0416 00:44:58.552873   50071 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0416 00:44:58.559822   50071 start.go:562] Will wait 60s for crictl version
	I0416 00:44:58.559889   50071 ssh_runner.go:195] Run: which crictl
	I0416 00:44:58.564948   50071 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 00:44:58.610228   50071 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
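(Illustration, not part of the log: before querying crictl, the log notes it will wait up to 60s for /var/run/crio/crio.sock to appear. A minimal Go polling sketch of that wait follows; it is not minikube's code, and the 500ms poll interval is an assumption.)

// waitsock.go - minimal sketch of "Will wait 60s for socket path".
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for path every interval until it exists or timeout elapses.
func waitForSocket(path string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil // socket is present
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
		}
		time.Sleep(interval)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 500*time.Millisecond, 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is ready")
}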
	I0416 00:44:58.610308   50071 ssh_runner.go:195] Run: crio --version
	I0416 00:44:58.642679   50071 ssh_runner.go:195] Run: crio --version
	I0416 00:44:58.681658   50071 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0416 00:44:58.683002   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetIP
	I0416 00:44:58.686188   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | domain kubernetes-upgrade-497059 has defined MAC address 52:54:00:6b:72:5a in network mk-kubernetes-upgrade-497059
	I0416 00:44:58.686664   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:72:5a", ip: ""} in network mk-kubernetes-upgrade-497059: {Iface:virbr2 ExpiryTime:2024-04-16 01:44:44 +0000 UTC Type:0 Mac:52:54:00:6b:72:5a Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:kubernetes-upgrade-497059 Clientid:01:52:54:00:6b:72:5a}
	I0416 00:44:58.686698   50071 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | domain kubernetes-upgrade-497059 has defined IP address 192.168.50.223 and MAC address 52:54:00:6b:72:5a in network mk-kubernetes-upgrade-497059
	I0416 00:44:58.686999   50071 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0416 00:44:58.692704   50071 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 00:44:58.707053   50071 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-497059 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-497059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.223 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 00:44:58.707160   50071 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0416 00:44:58.707230   50071 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 00:44:58.740618   50071 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0416 00:44:58.740715   50071 ssh_runner.go:195] Run: which lz4
	I0416 00:44:58.745399   50071 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0416 00:44:58.750265   50071 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0416 00:44:58.750299   50071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0416 00:45:00.672235   50071 crio.go:462] duration metric: took 1.926868534s to copy over tarball
	I0416 00:45:00.672336   50071 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0416 00:45:03.698946   50071 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.026574747s)
	I0416 00:45:03.698986   50071 crio.go:469] duration metric: took 3.026720624s to extract the tarball
	I0416 00:45:03.698996   50071 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0416 00:45:03.743009   50071 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 00:45:03.795113   50071 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0416 00:45:03.795138   50071 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0416 00:45:03.795238   50071 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0416 00:45:03.795238   50071 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 00:45:03.795246   50071 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0416 00:45:03.795299   50071 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0416 00:45:03.795370   50071 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0416 00:45:03.795367   50071 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0416 00:45:03.795410   50071 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0416 00:45:03.795250   50071 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0416 00:45:03.797012   50071 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0416 00:45:03.797021   50071 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0416 00:45:03.797028   50071 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0416 00:45:03.797076   50071 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0416 00:45:03.797109   50071 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0416 00:45:03.797012   50071 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 00:45:03.797107   50071 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0416 00:45:03.797018   50071 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0416 00:45:04.005071   50071 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0416 00:45:04.026158   50071 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0416 00:45:04.032372   50071 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0416 00:45:04.039169   50071 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0416 00:45:04.048709   50071 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0416 00:45:04.050503   50071 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0416 00:45:04.052146   50071 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0416 00:45:04.084888   50071 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0416 00:45:04.084930   50071 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0416 00:45:04.084997   50071 ssh_runner.go:195] Run: which crictl
	I0416 00:45:04.193122   50071 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0416 00:45:04.193175   50071 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0416 00:45:04.193224   50071 ssh_runner.go:195] Run: which crictl
	I0416 00:45:04.193234   50071 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0416 00:45:04.193277   50071 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0416 00:45:04.193322   50071 ssh_runner.go:195] Run: which crictl
	I0416 00:45:04.213476   50071 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0416 00:45:04.213524   50071 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0416 00:45:04.213575   50071 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0416 00:45:04.213619   50071 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0416 00:45:04.213662   50071 ssh_runner.go:195] Run: which crictl
	I0416 00:45:04.213586   50071 ssh_runner.go:195] Run: which crictl
	I0416 00:45:04.233063   50071 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0416 00:45:04.233111   50071 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0416 00:45:04.233125   50071 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0416 00:45:04.233149   50071 ssh_runner.go:195] Run: which crictl
	I0416 00:45:04.233166   50071 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0416 00:45:04.233200   50071 ssh_runner.go:195] Run: which crictl
	I0416 00:45:04.233204   50071 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0416 00:45:04.233242   50071 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0416 00:45:04.233277   50071 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0416 00:45:04.233328   50071 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0416 00:45:04.233349   50071 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0416 00:45:04.347588   50071 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0416 00:45:04.347685   50071 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0416 00:45:04.347772   50071 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0416 00:45:04.347825   50071 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0416 00:45:04.347894   50071 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0416 00:45:04.347953   50071 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0416 00:45:04.348014   50071 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0416 00:45:04.402004   50071 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0416 00:45:04.402077   50071 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0416 00:45:04.621862   50071 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 00:45:04.770056   50071 cache_images.go:92] duration metric: took 974.901333ms to LoadCachedImages
	W0416 00:45:04.770152   50071 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0416 00:45:04.770173   50071 kubeadm.go:928] updating node { 192.168.50.223 8443 v1.20.0 crio true true} ...
	I0416 00:45:04.770293   50071 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-497059 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.223
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-497059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 00:45:04.770385   50071 ssh_runner.go:195] Run: crio config
	I0416 00:45:04.819316   50071 cni.go:84] Creating CNI manager for ""
	I0416 00:45:04.819342   50071 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 00:45:04.819353   50071 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 00:45:04.819370   50071 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.223 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-497059 NodeName:kubernetes-upgrade-497059 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.223"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.223 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0416 00:45:04.819499   50071 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.223
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-497059"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.223
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.223"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
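(Illustration, not part of the log: the kubeadm config above is rendered from the options struct logged at kubeadm.go:181. As a hedged sketch only, the Go program below renders a comparable fragment from a few of those fields with text/template; the struct, template, and field names are invented for this example and are not minikube's.)

// kubeadmtmpl.go - minimal sketch of rendering a kubeadm config fragment.
package main

import (
	"os"
	"text/template"
)

type kubeadmOpts struct {
	AdvertiseAddress  string
	BindPort          int
	ClusterName       string
	PodSubnet         string
	ServiceCIDR       string
	KubernetesVersion string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	opts := kubeadmOpts{
		AdvertiseAddress:  "192.168.50.223",
		BindPort:          8443,
		ClusterName:       "kubernetes-upgrade-497059",
		PodSubnet:         "10.244.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
		KubernetesVersion: "v1.20.0",
	}
	// Render the fragment to stdout; a real generator would write it to kubeadm.yaml.new.
	if err := template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}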
	
	I0416 00:45:04.819569   50071 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0416 00:45:04.830418   50071 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 00:45:04.830495   50071 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0416 00:45:04.840841   50071 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0416 00:45:04.858518   50071 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 00:45:04.876194   50071 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0416 00:45:04.893216   50071 ssh_runner.go:195] Run: grep 192.168.50.223	control-plane.minikube.internal$ /etc/hosts
	I0416 00:45:04.897207   50071 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.223	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 00:45:04.911237   50071 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 00:45:05.046625   50071 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 00:45:05.067396   50071 certs.go:68] Setting up /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/kubernetes-upgrade-497059 for IP: 192.168.50.223
	I0416 00:45:05.067419   50071 certs.go:194] generating shared ca certs ...
	I0416 00:45:05.067434   50071 certs.go:226] acquiring lock for ca certs: {Name:mkcfa1570e683d94647c63485e1bbb8cf0788316 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 00:45:05.067563   50071 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key
	I0416 00:45:05.067632   50071 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key
	I0416 00:45:05.067643   50071 certs.go:256] generating profile certs ...
	I0416 00:45:05.067692   50071 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/kubernetes-upgrade-497059/client.key
	I0416 00:45:05.067704   50071 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/kubernetes-upgrade-497059/client.crt with IP's: []
	I0416 00:45:05.251754   50071 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/kubernetes-upgrade-497059/client.crt ...
	I0416 00:45:05.251783   50071 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/kubernetes-upgrade-497059/client.crt: {Name:mkdc2f41fdd881aa77203afd6ec58e29db402d0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 00:45:05.252006   50071 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/kubernetes-upgrade-497059/client.key ...
	I0416 00:45:05.252023   50071 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/kubernetes-upgrade-497059/client.key: {Name:mk6eb8ce2b975aeb87b1d2b9e919fa6431264a6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 00:45:05.252102   50071 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/kubernetes-upgrade-497059/apiserver.key.0da69b7a
	I0416 00:45:05.252119   50071 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/kubernetes-upgrade-497059/apiserver.crt.0da69b7a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.223]
	I0416 00:45:05.330754   50071 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/kubernetes-upgrade-497059/apiserver.crt.0da69b7a ...
	I0416 00:45:05.330783   50071 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/kubernetes-upgrade-497059/apiserver.crt.0da69b7a: {Name:mkdf3ec36b710ea2f267347d48191c85d06fde4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 00:45:05.330968   50071 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/kubernetes-upgrade-497059/apiserver.key.0da69b7a ...
	I0416 00:45:05.330988   50071 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/kubernetes-upgrade-497059/apiserver.key.0da69b7a: {Name:mk71198af9b83fcc74d4c29c1a6d63fd366595a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 00:45:05.331090   50071 certs.go:381] copying /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/kubernetes-upgrade-497059/apiserver.crt.0da69b7a -> /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/kubernetes-upgrade-497059/apiserver.crt
	I0416 00:45:05.331166   50071 certs.go:385] copying /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/kubernetes-upgrade-497059/apiserver.key.0da69b7a -> /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/kubernetes-upgrade-497059/apiserver.key
	I0416 00:45:05.331222   50071 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/kubernetes-upgrade-497059/proxy-client.key
	I0416 00:45:05.331237   50071 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/kubernetes-upgrade-497059/proxy-client.crt with IP's: []
	I0416 00:45:05.690528   50071 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/kubernetes-upgrade-497059/proxy-client.crt ...
	I0416 00:45:05.690564   50071 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/kubernetes-upgrade-497059/proxy-client.crt: {Name:mk0f7d89323fb7597ae8c2fd70d6f07cd19d56a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 00:45:05.690720   50071 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/kubernetes-upgrade-497059/proxy-client.key ...
	I0416 00:45:05.690736   50071 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/kubernetes-upgrade-497059/proxy-client.key: {Name:mk06407bf2b455f57cd094afdf90408648686d85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 00:45:05.690889   50071 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem (1338 bytes)
	W0416 00:45:05.690931   50071 certs.go:480] ignoring /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897_empty.pem, impossibly tiny 0 bytes
	I0416 00:45:05.690941   50071 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem (1679 bytes)
	I0416 00:45:05.690961   50071 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem (1082 bytes)
	I0416 00:45:05.690985   50071 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem (1123 bytes)
	I0416 00:45:05.691008   50071 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem (1675 bytes)
	I0416 00:45:05.691050   50071 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem (1708 bytes)
	I0416 00:45:05.691674   50071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 00:45:05.719863   50071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 00:45:05.745435   50071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 00:45:05.773501   50071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0416 00:45:05.803185   50071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/kubernetes-upgrade-497059/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0416 00:45:05.874246   50071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/kubernetes-upgrade-497059/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0416 00:45:05.901605   50071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/kubernetes-upgrade-497059/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 00:45:05.935057   50071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/kubernetes-upgrade-497059/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0416 00:45:05.961844   50071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 00:45:05.996928   50071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem --> /usr/share/ca-certificates/14897.pem (1338 bytes)
	I0416 00:45:06.022690   50071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /usr/share/ca-certificates/148972.pem (1708 bytes)
	I0416 00:45:06.048267   50071 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 00:45:06.065435   50071 ssh_runner.go:195] Run: openssl version
	I0416 00:45:06.071022   50071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 00:45:06.081559   50071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 00:45:06.085956   50071 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0416 00:45:06.086012   50071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 00:45:06.091553   50071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 00:45:06.102261   50071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14897.pem && ln -fs /usr/share/ca-certificates/14897.pem /etc/ssl/certs/14897.pem"
	I0416 00:45:06.113193   50071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14897.pem
	I0416 00:45:06.118071   50071 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 23:49 /usr/share/ca-certificates/14897.pem
	I0416 00:45:06.118130   50071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14897.pem
	I0416 00:45:06.123998   50071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14897.pem /etc/ssl/certs/51391683.0"
	I0416 00:45:06.134578   50071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148972.pem && ln -fs /usr/share/ca-certificates/148972.pem /etc/ssl/certs/148972.pem"
	I0416 00:45:06.144790   50071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148972.pem
	I0416 00:45:06.149252   50071 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 23:49 /usr/share/ca-certificates/148972.pem
	I0416 00:45:06.149303   50071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148972.pem
	I0416 00:45:06.155130   50071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148972.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 00:45:06.165999   50071 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 00:45:06.170317   50071 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0416 00:45:06.170370   50071 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-497059 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-497059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.223 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 00:45:06.170441   50071 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0416 00:45:06.170492   50071 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 00:45:06.209258   50071 cri.go:89] found id: ""
	I0416 00:45:06.209319   50071 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0416 00:45:06.219318   50071 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 00:45:06.228465   50071 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 00:45:06.237414   50071 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 00:45:06.237431   50071 kubeadm.go:156] found existing configuration files:
	
	I0416 00:45:06.237466   50071 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 00:45:06.245875   50071 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 00:45:06.245928   50071 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 00:45:06.254795   50071 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 00:45:06.263267   50071 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 00:45:06.263322   50071 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 00:45:06.272254   50071 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 00:45:06.280705   50071 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 00:45:06.280745   50071 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 00:45:06.289658   50071 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 00:45:06.299206   50071 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 00:45:06.299266   50071 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
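	Each grep/rm pair above is the stale-kubeconfig check: if an /etc/kubernetes/*.conf file does not mention the expected control-plane endpoint, it is deleted before kubeadm runs. A hedged Go sketch of that loop follows; the helper names stand in for minikube's ssh_runner and are not its real API.

```go
// Hypothetical sketch of the stale-kubeconfig cleanup loop seen above:
// every /etc/kubernetes/*.conf that does not reference the expected
// control-plane endpoint is removed before "kubeadm init" runs.
package main

import (
	"fmt"
	"os/exec"
)

func cleanupStaleConfigs(run func(cmd string) error) {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
	for _, f := range files {
		path := "/etc/kubernetes/" + f
		// grep exits non-zero when the endpoint is absent (or the file is missing).
		if err := run(fmt.Sprintf("sudo grep %s %s", endpoint, path)); err != nil {
			_ = run("sudo rm -f " + path)
		}
	}
}

func main() {
	// Local stand-in for the remote runner: execute the command on this machine.
	run := func(cmd string) error { return exec.Command("/bin/bash", "-c", cmd).Run() }
	cleanupStaleConfigs(run)
}
```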
	I0416 00:45:06.308905   50071 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 00:45:06.573427   50071 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 00:47:04.819392   50071 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0416 00:47:04.819493   50071 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0416 00:47:04.820798   50071 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0416 00:47:04.820866   50071 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 00:47:04.820974   50071 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 00:47:04.821126   50071 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 00:47:04.821285   50071 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 00:47:04.821395   50071 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 00:47:04.823621   50071 out.go:204]   - Generating certificates and keys ...
	I0416 00:47:04.823693   50071 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 00:47:04.823770   50071 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 00:47:04.823883   50071 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0416 00:47:04.823970   50071 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0416 00:47:04.824072   50071 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0416 00:47:04.824144   50071 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0416 00:47:04.824217   50071 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0416 00:47:04.824391   50071 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-497059 localhost] and IPs [192.168.50.223 127.0.0.1 ::1]
	I0416 00:47:04.824462   50071 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0416 00:47:04.824616   50071 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-497059 localhost] and IPs [192.168.50.223 127.0.0.1 ::1]
	I0416 00:47:04.824718   50071 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0416 00:47:04.824814   50071 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0416 00:47:04.824884   50071 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0416 00:47:04.824966   50071 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 00:47:04.825053   50071 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 00:47:04.825137   50071 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 00:47:04.825244   50071 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 00:47:04.825295   50071 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 00:47:04.825448   50071 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 00:47:04.825563   50071 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 00:47:04.825613   50071 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 00:47:04.825666   50071 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 00:47:04.827233   50071 out.go:204]   - Booting up control plane ...
	I0416 00:47:04.827335   50071 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 00:47:04.827404   50071 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 00:47:04.827462   50071 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 00:47:04.827588   50071 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 00:47:04.827782   50071 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 00:47:04.827857   50071 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0416 00:47:04.827946   50071 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 00:47:04.828147   50071 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 00:47:04.828242   50071 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 00:47:04.828406   50071 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 00:47:04.828481   50071 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 00:47:04.828627   50071 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 00:47:04.828681   50071 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 00:47:04.828833   50071 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 00:47:04.828930   50071 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 00:47:04.829101   50071 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 00:47:04.829115   50071 kubeadm.go:309] 
	I0416 00:47:04.829208   50071 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0416 00:47:04.829246   50071 kubeadm.go:309] 		timed out waiting for the condition
	I0416 00:47:04.829252   50071 kubeadm.go:309] 
	I0416 00:47:04.829281   50071 kubeadm.go:309] 	This error is likely caused by:
	I0416 00:47:04.829314   50071 kubeadm.go:309] 		- The kubelet is not running
	I0416 00:47:04.829443   50071 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0416 00:47:04.829451   50071 kubeadm.go:309] 
	I0416 00:47:04.829534   50071 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0416 00:47:04.829563   50071 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0416 00:47:04.829592   50071 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0416 00:47:04.829598   50071 kubeadm.go:309] 
	I0416 00:47:04.829700   50071 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0416 00:47:04.829808   50071 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0416 00:47:04.829818   50071 kubeadm.go:309] 
	I0416 00:47:04.829958   50071 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0416 00:47:04.830077   50071 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0416 00:47:04.830187   50071 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0416 00:47:04.830272   50071 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0416 00:47:04.830297   50071 kubeadm.go:309] 
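	The repeated [kubelet-check] messages above are kubeadm polling the kubelet's local healthz endpoint; every probe fails with "connection refused" on 127.0.0.1:10248, meaning the kubelet never came up on this node. A minimal Go sketch of that probe (an assumption about the check, not kubeadm's source):

```go
// Minimal sketch of the probe the "[kubelet-check]" lines describe: an HTTP
// GET against the kubelet's local healthz endpoint. The connection-refused
// errors above mean nothing is listening on 127.0.0.1:10248.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://localhost:10248/healthz")
	if err != nil {
		fmt.Println("kubelet healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("kubelet healthz: %s %s\n", resp.Status, string(body))
}
```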
	W0416 00:47:04.830403   50071 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-497059 localhost] and IPs [192.168.50.223 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-497059 localhost] and IPs [192.168.50.223 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0416 00:47:04.830456   50071 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0416 00:47:05.903094   50071 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.072607199s)
	I0416 00:47:05.903204   50071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 00:47:05.918096   50071 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 00:47:05.928632   50071 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 00:47:05.928652   50071 kubeadm.go:156] found existing configuration files:
	
	I0416 00:47:05.928694   50071 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 00:47:05.939137   50071 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 00:47:05.939209   50071 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 00:47:05.949561   50071 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 00:47:05.959794   50071 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 00:47:05.959859   50071 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 00:47:05.970325   50071 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 00:47:05.983137   50071 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 00:47:05.983189   50071 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 00:47:05.996564   50071 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 00:47:06.009540   50071 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 00:47:06.009596   50071 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 00:47:06.020857   50071 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 00:47:06.099184   50071 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0416 00:47:06.099321   50071 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 00:47:06.249907   50071 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 00:47:06.250035   50071 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 00:47:06.250157   50071 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 00:47:06.448592   50071 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 00:47:06.451091   50071 out.go:204]   - Generating certificates and keys ...
	I0416 00:47:06.451450   50071 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 00:47:06.451536   50071 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 00:47:06.451669   50071 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0416 00:47:06.451768   50071 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0416 00:47:06.451891   50071 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0416 00:47:06.452510   50071 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0416 00:47:06.454891   50071 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0416 00:47:06.455772   50071 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0416 00:47:06.456607   50071 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0416 00:47:06.457543   50071 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0416 00:47:06.457890   50071 kubeadm.go:309] [certs] Using the existing "sa" key
	I0416 00:47:06.457982   50071 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 00:47:06.873097   50071 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 00:47:07.121136   50071 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 00:47:07.235978   50071 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 00:47:07.330325   50071 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 00:47:07.351969   50071 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 00:47:07.352114   50071 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 00:47:07.352196   50071 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 00:47:07.534934   50071 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 00:47:07.537240   50071 out.go:204]   - Booting up control plane ...
	I0416 00:47:07.537359   50071 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 00:47:07.549324   50071 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 00:47:07.550932   50071 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 00:47:07.552085   50071 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 00:47:07.556049   50071 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 00:47:47.558629   50071 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0416 00:47:47.558740   50071 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 00:47:47.558951   50071 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 00:47:52.559508   50071 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 00:47:52.559708   50071 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 00:48:02.560021   50071 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 00:48:02.560311   50071 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 00:48:22.562596   50071 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 00:48:22.562852   50071 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 00:49:02.562486   50071 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 00:49:02.562753   50071 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 00:49:02.562782   50071 kubeadm.go:309] 
	I0416 00:49:02.562834   50071 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0416 00:49:02.562883   50071 kubeadm.go:309] 		timed out waiting for the condition
	I0416 00:49:02.562895   50071 kubeadm.go:309] 
	I0416 00:49:02.562934   50071 kubeadm.go:309] 	This error is likely caused by:
	I0416 00:49:02.562987   50071 kubeadm.go:309] 		- The kubelet is not running
	I0416 00:49:02.563127   50071 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0416 00:49:02.563139   50071 kubeadm.go:309] 
	I0416 00:49:02.563286   50071 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0416 00:49:02.563340   50071 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0416 00:49:02.563386   50071 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0416 00:49:02.563404   50071 kubeadm.go:309] 
	I0416 00:49:02.563541   50071 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0416 00:49:02.563665   50071 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0416 00:49:02.563683   50071 kubeadm.go:309] 
	I0416 00:49:02.563815   50071 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0416 00:49:02.563922   50071 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0416 00:49:02.564022   50071 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0416 00:49:02.564115   50071 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0416 00:49:02.564127   50071 kubeadm.go:309] 
	I0416 00:49:02.565138   50071 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 00:49:02.565265   50071 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0416 00:49:02.565350   50071 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0416 00:49:02.565425   50071 kubeadm.go:393] duration metric: took 3m56.395059566s to StartCluster
	I0416 00:49:02.565468   50071 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 00:49:02.565532   50071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 00:49:02.617823   50071 cri.go:89] found id: ""
	I0416 00:49:02.617852   50071 logs.go:276] 0 containers: []
	W0416 00:49:02.617862   50071 logs.go:278] No container was found matching "kube-apiserver"
	I0416 00:49:02.617869   50071 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 00:49:02.617954   50071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 00:49:02.670456   50071 cri.go:89] found id: ""
	I0416 00:49:02.670484   50071 logs.go:276] 0 containers: []
	W0416 00:49:02.670496   50071 logs.go:278] No container was found matching "etcd"
	I0416 00:49:02.670502   50071 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 00:49:02.670564   50071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 00:49:02.711008   50071 cri.go:89] found id: ""
	I0416 00:49:02.711039   50071 logs.go:276] 0 containers: []
	W0416 00:49:02.711050   50071 logs.go:278] No container was found matching "coredns"
	I0416 00:49:02.711056   50071 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 00:49:02.711104   50071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 00:49:02.749917   50071 cri.go:89] found id: ""
	I0416 00:49:02.749950   50071 logs.go:276] 0 containers: []
	W0416 00:49:02.749960   50071 logs.go:278] No container was found matching "kube-scheduler"
	I0416 00:49:02.749968   50071 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 00:49:02.750029   50071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 00:49:02.789148   50071 cri.go:89] found id: ""
	I0416 00:49:02.789196   50071 logs.go:276] 0 containers: []
	W0416 00:49:02.789208   50071 logs.go:278] No container was found matching "kube-proxy"
	I0416 00:49:02.789216   50071 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 00:49:02.789293   50071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 00:49:02.833712   50071 cri.go:89] found id: ""
	I0416 00:49:02.833743   50071 logs.go:276] 0 containers: []
	W0416 00:49:02.833755   50071 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 00:49:02.833769   50071 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 00:49:02.833838   50071 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 00:49:02.875988   50071 cri.go:89] found id: ""
	I0416 00:49:02.876018   50071 logs.go:276] 0 containers: []
	W0416 00:49:02.876030   50071 logs.go:278] No container was found matching "kindnet"
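	After the init timeout, the log inventories the control plane via crictl and finds no containers for any expected component, which matches the kubelet never registering static pods. A rough Go sketch of that inventory pass; the component names mirror the log, the rest is illustrative.

```go
// Hypothetical sketch of the container inventory performed above: ask crictl
// for each expected control-plane component and report the ones with no
// container at all. Assumes crictl is installed and the CRI socket is up.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet"}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("%s: no container found\n", name)
		} else {
			fmt.Printf("%s: %d container(s)\n", name, len(ids))
		}
	}
}
```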
	I0416 00:49:02.876041   50071 logs.go:123] Gathering logs for dmesg ...
	I0416 00:49:02.876057   50071 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 00:49:02.891879   50071 logs.go:123] Gathering logs for describe nodes ...
	I0416 00:49:02.891909   50071 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 00:49:03.034828   50071 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 00:49:03.034858   50071 logs.go:123] Gathering logs for CRI-O ...
	I0416 00:49:03.034875   50071 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 00:49:03.152370   50071 logs.go:123] Gathering logs for container status ...
	I0416 00:49:03.152403   50071 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 00:49:03.197777   50071 logs.go:123] Gathering logs for kubelet ...
	I0416 00:49:03.197805   50071 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
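	The lines above are the diagnostic sweep: dmesg, "kubectl describe nodes" (which fails because the API server on localhost:8443 is down), the CRI-O and kubelet journals, and container status. A short, hypothetical Go sketch of that gathering step, reusing the same shell commands from the log:

```go
// Hypothetical sketch of the log-gathering pass above: pull the last few
// hundred lines from the sources minikube inspects so the failure can be
// diagnosed offline. Helper names are illustrative only.
package main

import (
	"fmt"
	"os/exec"
)

func gather(label, command string) {
	fmt.Printf("==> %s <==\n", label)
	out, err := exec.Command("/bin/bash", "-c", command).CombinedOutput()
	if err != nil {
		fmt.Printf("(failed: %v)\n", err)
	}
	fmt.Println(string(out))
}

func main() {
	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	gather("CRI-O", "sudo journalctl -u crio -n 400")
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("container status", "sudo crictl ps -a || sudo docker ps -a")
}
```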
	W0416 00:49:03.251037   50071 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0416 00:49:03.251079   50071 out.go:239] * 
	W0416 00:49:03.251131   50071 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0416 00:49:03.251151   50071 out.go:239] * 
	* 
	W0416 00:49:03.251982   50071 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0416 00:49:03.254936   50071 out.go:177] 
	W0416 00:49:03.256332   50071 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0416 00:49:03.256412   50071 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0416 00:49:03.256433   50071 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0416 00:49:03.257998   50071 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-497059 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
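The kubeadm failure above matches the Suggestion line in the log: the kubelet never became healthy under v1.20.0, which usually points at a kubelet/cri-o cgroup-driver mismatch. A minimal sketch of a manual retry that applies that suggestion (profile name, versions, and the --extra-config flag are taken from the log output above; whether it resolves this particular run is an assumption):
	out/minikube-linux-amd64 delete -p kubernetes-upgrade-497059
	out/minikube-linux-amd64 start -p kubernetes-upgrade-497059 --memory=2200 \
	  --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd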
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-497059
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-497059: (2.546459365s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-497059 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-497059 status --format={{.Host}}: exit status 7 (75.458288ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-497059 --memory=2200 --kubernetes-version=v1.30.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-497059 --memory=2200 --kubernetes-version=v1.30.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (39.178064633s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-497059 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-497059 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-497059 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (93.524975ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-497059] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18647
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18647-7542/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18647-7542/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.0-rc.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-497059
	    minikube start -p kubernetes-upgrade-497059 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4970592 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-497059 --kubernetes-version=v1.30.0-rc.2
	    

                                                
                                                
** /stderr **
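As intended by the test, the downgrade attempt is rejected with K8S_DOWNGRADE_UNSUPPORTED instead of touching the existing v1.30.0-rc.2 cluster. A hedged sketch of confirming the running version and then following option 1 from the suggestion box above (both commands are lifted from the test run and the suggestion text; they are not part of the recorded output):
	kubectl --context kubernetes-upgrade-497059 version --output=json
	# option 1: recreate the profile at the older Kubernetes version
	out/minikube-linux-amd64 delete -p kubernetes-upgrade-497059
	out/minikube-linux-amd64 start -p kubernetes-upgrade-497059 --kubernetes-version=v1.20.0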
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-497059 --memory=2200 --kubernetes-version=v1.30.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-497059 --memory=2200 --kubernetes-version=v1.30.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (43.007013451s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-04-16 00:50:28.28297948 +0000 UTC m=+4370.799740806
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-497059 -n kubernetes-upgrade-497059
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-497059 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-497059 logs -n 25: (1.705623917s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	| Command |                  Args                  |          Profile          |  User   |    Version     |     Start Time      |      End Time       |
	|---------|----------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p cilium-381983 sudo                  | cilium-381983             | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:47 UTC |                     |
	|         | systemctl status containerd            |                           |         |                |                     |                     |
	|         | --all --full --no-pager                |                           |         |                |                     |                     |
	| ssh     | -p cilium-381983 sudo                  | cilium-381983             | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:47 UTC |                     |
	|         | systemctl cat containerd               |                           |         |                |                     |                     |
	|         | --no-pager                             |                           |         |                |                     |                     |
	| ssh     | -p cilium-381983 sudo cat              | cilium-381983             | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:47 UTC |                     |
	|         | /lib/systemd/system/containerd.service |                           |         |                |                     |                     |
	| ssh     | -p cilium-381983 sudo cat              | cilium-381983             | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:47 UTC |                     |
	|         | /etc/containerd/config.toml            |                           |         |                |                     |                     |
	| ssh     | -p cilium-381983 sudo                  | cilium-381983             | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:47 UTC |                     |
	|         | containerd config dump                 |                           |         |                |                     |                     |
	| ssh     | -p cilium-381983 sudo                  | cilium-381983             | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:47 UTC |                     |
	|         | systemctl status crio --all            |                           |         |                |                     |                     |
	|         | --full --no-pager                      |                           |         |                |                     |                     |
	| ssh     | -p cilium-381983 sudo                  | cilium-381983             | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:47 UTC |                     |
	|         | systemctl cat crio --no-pager          |                           |         |                |                     |                     |
	| ssh     | -p cilium-381983 sudo find             | cilium-381983             | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:47 UTC |                     |
	|         | /etc/crio -type f -exec sh -c          |                           |         |                |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                           |         |                |                     |                     |
	| ssh     | -p cilium-381983 sudo crio             | cilium-381983             | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:47 UTC |                     |
	|         | config                                 |                           |         |                |                     |                     |
	| delete  | -p cilium-381983                       | cilium-381983             | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:47 UTC | 16 Apr 24 00:47 UTC |
	| start   | -p force-systemd-flag-200746           | force-systemd-flag-200746 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:47 UTC | 16 Apr 24 00:49 UTC |
	|         | --memory=2048 --force-systemd          |                           |         |                |                     |                     |
	|         | --alsologtostderr                      |                           |         |                |                     |                     |
	|         | -v=5 --driver=kvm2                     |                           |         |                |                     |                     |
	|         | --container-runtime=crio               |                           |         |                |                     |                     |
	| delete  | -p force-systemd-env-787358            | force-systemd-env-787358  | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:48 UTC | 16 Apr 24 00:48 UTC |
	| start   | -p pause-214771 --memory=2048          | pause-214771              | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:48 UTC | 16 Apr 24 00:49 UTC |
	|         | --install-addons=false                 |                           |         |                |                     |                     |
	|         | --wait=all --driver=kvm2               |                           |         |                |                     |                     |
	|         | --container-runtime=crio               |                           |         |                |                     |                     |
	| stop    | -p kubernetes-upgrade-497059           | kubernetes-upgrade-497059 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:49 UTC | 16 Apr 24 00:49 UTC |
	| ssh     | force-systemd-flag-200746 ssh cat      | force-systemd-flag-200746 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:49 UTC | 16 Apr 24 00:49 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf     |                           |         |                |                     |                     |
	| delete  | -p force-systemd-flag-200746           | force-systemd-flag-200746 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:49 UTC | 16 Apr 24 00:49 UTC |
	| start   | -p kubernetes-upgrade-497059           | kubernetes-upgrade-497059 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:49 UTC | 16 Apr 24 00:49 UTC |
	|         | --memory=2200                          |                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2      |                           |         |                |                     |                     |
	|         | --alsologtostderr                      |                           |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |                |                     |                     |
	|         | --container-runtime=crio               |                           |         |                |                     |                     |
	| start   | -p cert-options-752506                 | cert-options-752506       | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:49 UTC | 16 Apr 24 00:50 UTC |
	|         | --memory=2048                          |                           |         |                |                     |                     |
	|         | --apiserver-ips=127.0.0.1              |                           |         |                |                     |                     |
	|         | --apiserver-ips=192.168.15.15          |                           |         |                |                     |                     |
	|         | --apiserver-names=localhost            |                           |         |                |                     |                     |
	|         | --apiserver-names=www.google.com       |                           |         |                |                     |                     |
	|         | --apiserver-port=8555                  |                           |         |                |                     |                     |
	|         | --driver=kvm2                          |                           |         |                |                     |                     |
	|         | --container-runtime=crio               |                           |         |                |                     |                     |
	| start   | -p pause-214771                        | pause-214771              | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:49 UTC |                     |
	|         | --alsologtostderr                      |                           |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |                |                     |                     |
	|         | --container-runtime=crio               |                           |         |                |                     |                     |
	| start   | -p kubernetes-upgrade-497059           | kubernetes-upgrade-497059 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:49 UTC |                     |
	|         | --memory=2200                          |                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0           |                           |         |                |                     |                     |
	|         | --driver=kvm2                          |                           |         |                |                     |                     |
	|         | --container-runtime=crio               |                           |         |                |                     |                     |
	| start   | -p kubernetes-upgrade-497059           | kubernetes-upgrade-497059 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:49 UTC | 16 Apr 24 00:50 UTC |
	|         | --memory=2200                          |                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2      |                           |         |                |                     |                     |
	|         | --alsologtostderr                      |                           |         |                |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |                |                     |                     |
	|         | --container-runtime=crio               |                           |         |                |                     |                     |
	| ssh     | cert-options-752506 ssh                | cert-options-752506       | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:50 UTC | 16 Apr 24 00:50 UTC |
	|         | openssl x509 -text -noout -in          |                           |         |                |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt  |                           |         |                |                     |                     |
	| ssh     | -p cert-options-752506 -- sudo         | cert-options-752506       | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:50 UTC | 16 Apr 24 00:50 UTC |
	|         | cat /etc/kubernetes/admin.conf         |                           |         |                |                     |                     |
	| delete  | -p cert-options-752506                 | cert-options-752506       | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:50 UTC | 16 Apr 24 00:50 UTC |
	| start   | -p old-k8s-version-800769              | old-k8s-version-800769    | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:50 UTC |                     |
	|         | --memory=2200                          |                           |         |                |                     |                     |
	|         | --alsologtostderr --wait=true          |                           |         |                |                     |                     |
	|         | --kvm-network=default                  |                           |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system          |                           |         |                |                     |                     |
	|         | --disable-driver-mounts                |                           |         |                |                     |                     |
	|         | --keep-context=false                   |                           |         |                |                     |                     |
	|         | --driver=kvm2                          |                           |         |                |                     |                     |
	|         | --container-runtime=crio               |                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0           |                           |         |                |                     |                     |
	|---------|----------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 00:50:14
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 00:50:14.742037   57518 out.go:291] Setting OutFile to fd 1 ...
	I0416 00:50:14.742276   57518 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:50:14.742287   57518 out.go:304] Setting ErrFile to fd 2...
	I0416 00:50:14.742291   57518 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:50:14.742500   57518 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-7542/.minikube/bin
	I0416 00:50:14.743128   57518 out.go:298] Setting JSON to false
	I0416 00:50:14.744169   57518 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5559,"bootTime":1713223056,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0416 00:50:14.744227   57518 start.go:139] virtualization: kvm guest
	I0416 00:50:14.746250   57518 out.go:177] * [old-k8s-version-800769] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0416 00:50:14.747801   57518 out.go:177]   - MINIKUBE_LOCATION=18647
	I0416 00:50:14.747831   57518 notify.go:220] Checking for updates...
	I0416 00:50:14.749303   57518 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 00:50:14.750681   57518 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0416 00:50:14.752098   57518 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18647-7542/.minikube
	I0416 00:50:14.753531   57518 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0416 00:50:14.754639   57518 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 00:50:14.756330   57518 config.go:182] Loaded profile config "cert-expiration-359535": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 00:50:14.756421   57518 config.go:182] Loaded profile config "kubernetes-upgrade-497059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0416 00:50:14.756529   57518 config.go:182] Loaded profile config "pause-214771": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 00:50:14.756626   57518 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 00:50:14.794372   57518 out.go:177] * Using the kvm2 driver based on user configuration
	I0416 00:50:14.795566   57518 start.go:297] selected driver: kvm2
	I0416 00:50:14.795580   57518 start.go:901] validating driver "kvm2" against <nil>
	I0416 00:50:14.795592   57518 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 00:50:14.796324   57518 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 00:50:14.796398   57518 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18647-7542/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0416 00:50:14.812045   57518 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0416 00:50:14.812088   57518 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0416 00:50:14.812270   57518 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 00:50:14.812330   57518 cni.go:84] Creating CNI manager for ""
	I0416 00:50:14.812342   57518 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 00:50:14.812351   57518 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0416 00:50:14.812399   57518 start.go:340] cluster config:
	{Name:old-k8s-version-800769 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-800769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 00:50:14.812489   57518 iso.go:125] acquiring lock: {Name:mk848ef90fbc2a1876645fc8fc16af382c3bcaa9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 00:50:14.814187   57518 out.go:177] * Starting "old-k8s-version-800769" primary control-plane node in "old-k8s-version-800769" cluster
	I0416 00:50:14.815477   57518 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0416 00:50:14.815509   57518 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0416 00:50:14.815519   57518 cache.go:56] Caching tarball of preloaded images
	I0416 00:50:14.815610   57518 preload.go:173] Found /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0416 00:50:14.815621   57518 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0416 00:50:14.815702   57518 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/config.json ...
	I0416 00:50:14.815718   57518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/config.json: {Name:mk188ed0158cdcdef6a943cf87c78b08b315b06c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 00:50:14.815830   57518 start.go:360] acquireMachinesLock for old-k8s-version-800769: {Name:mk92bff49461487f8cebf2747ccf61ccb9c772a2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 00:50:14.815857   57518 start.go:364] duration metric: took 13.41µs to acquireMachinesLock for "old-k8s-version-800769"
	I0416 00:50:14.815874   57518 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-800769 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-800769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0416 00:50:14.815956   57518 start.go:125] createHost starting for "" (driver="kvm2")
	I0416 00:50:10.349251   57159 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 00:50:10.518996   57159 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0416 00:50:10.535239   57159 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 00:50:10.557349   57159 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0416 00:50:10.557416   57159 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:50:10.571035   57159 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 00:50:10.571127   57159 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:50:10.582450   57159 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:50:10.595218   57159 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:50:10.607812   57159 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 00:50:10.619246   57159 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:50:10.631753   57159 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:50:10.646579   57159 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:50:10.657339   57159 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 00:50:10.667219   57159 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 00:50:10.677683   57159 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 00:50:10.837938   57159 ssh_runner.go:195] Run: sudo systemctl restart crio
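	The sed commands above prepare cri-o's drop-in before the restart. A rough sketch of the keys /etc/crio/crio.conf.d/02-crio.conf should contain afterwards (only values touched by those commands are shown; section headers and any pre-existing keys are omitted and assumed unchanged):
	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]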
	I0416 00:50:14.403764   57054 api_server.go:279] https://192.168.39.182:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0416 00:50:14.403817   57054 api_server.go:103] status: https://192.168.39.182:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0416 00:50:14.403833   57054 api_server.go:253] Checking apiserver healthz at https://192.168.39.182:8443/healthz ...
	I0416 00:50:14.530505   57054 api_server.go:279] https://192.168.39.182:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0416 00:50:14.530535   57054 api_server.go:103] status: https://192.168.39.182:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0416 00:50:14.616695   57054 api_server.go:253] Checking apiserver healthz at https://192.168.39.182:8443/healthz ...
	I0416 00:50:14.621700   57054 api_server.go:279] https://192.168.39.182:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0416 00:50:14.621727   57054 api_server.go:103] status: https://192.168.39.182:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0416 00:50:15.116869   57054 api_server.go:253] Checking apiserver healthz at https://192.168.39.182:8443/healthz ...
	I0416 00:50:15.124953   57054 api_server.go:279] https://192.168.39.182:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0416 00:50:15.124986   57054 api_server.go:103] status: https://192.168.39.182:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0416 00:50:15.616468   57054 api_server.go:253] Checking apiserver healthz at https://192.168.39.182:8443/healthz ...
	I0416 00:50:15.622286   57054 api_server.go:279] https://192.168.39.182:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0416 00:50:15.622316   57054 api_server.go:103] status: https://192.168.39.182:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0416 00:50:16.116925   57054 api_server.go:253] Checking apiserver healthz at https://192.168.39.182:8443/healthz ...
	I0416 00:50:16.122736   57054 api_server.go:279] https://192.168.39.182:8443/healthz returned 200:
	ok
	I0416 00:50:16.131015   57054 api_server.go:141] control plane version: v1.29.3
	I0416 00:50:16.131041   57054 api_server.go:131] duration metric: took 5.01515465s to wait for apiserver health ...
	I0416 00:50:16.131048   57054 cni.go:84] Creating CNI manager for ""
	I0416 00:50:16.131054   57054 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 00:50:16.133056   57054 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
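
The api_server.go lines above poll https://192.168.39.182:8443/healthz roughly every 500ms until the failing poststarthooks clear and the endpoint answers 200. Below is a minimal Go sketch of that kind of wait loop; it is illustrative only, not minikube's actual code, and the URL, timeout, and skipped TLS verification are assumptions made for the sketch.

// healthz_wait.go - illustrative sketch only; not minikube's api_server.go.
// It shows the kind of poll loop recorded above: hit /healthz until the
// kube-apiserver stops returning 500 and answers 200 "ok".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// The apiserver serves a self-signed certificate here, so verification is
		// skipped in this sketch; real code would trust the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: ok
			}
			// 500 responses list the failing poststarthooks, as in the log above.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // the log shows ~500ms between checks
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.182:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}

In the run above the loop succeeds after about 5 seconds, once the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes hooks finish.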
	I0416 00:50:17.245986   57159 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.408005012s)
	I0416 00:50:17.246023   57159 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0416 00:50:17.246074   57159 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0416 00:50:17.251570   57159 start.go:562] Will wait 60s for crictl version
	I0416 00:50:17.251634   57159 ssh_runner.go:195] Run: which crictl
	I0416 00:50:17.255683   57159 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 00:50:17.294803   57159 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0416 00:50:17.294889   57159 ssh_runner.go:195] Run: crio --version
	I0416 00:50:17.326255   57159 ssh_runner.go:195] Run: crio --version
	I0416 00:50:17.363004   57159 out.go:177] * Preparing Kubernetes v1.30.0-rc.2 on CRI-O 1.29.1 ...
	I0416 00:50:16.134637   57054 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0416 00:50:16.146871   57054 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
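
Here a ~496-byte bridge CNI config is written to /etc/cni/net.d/1-k8s.conflist. The file contents are not shown in the log, so the conflist below is only an assumed, typical bridge/portmap chain using the 10.244.0.0/16 pod CIDR that appears later in this log, not minikube's exact file. A small Go sketch that writes such a file:

// cni_conflist_sketch.go - illustrative only. The exact contents of
// /etc/cni/net.d/1-k8s.conflist are not in the log; this is an assumed shape.
package main

import (
	"fmt"
	"os"
)

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	// Writing to a local path here; on the node it would be /etc/cni/net.d/1-k8s.conflist.
	if err := os.WriteFile("1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		fmt.Println("write failed:", err)
	}
}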
	I0416 00:50:16.167811   57054 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 00:50:16.180517   57054 system_pods.go:59] 6 kube-system pods found
	I0416 00:50:16.180555   57054 system_pods.go:61] "coredns-76f75df574-7rt2n" [52b91d16-27db-4df4-8666-be46537119c7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 00:50:16.180565   57054 system_pods.go:61] "etcd-pause-214771" [56d9eb66-dd52-41fc-8cdd-8735f4a48639] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0416 00:50:16.180574   57054 system_pods.go:61] "kube-apiserver-pause-214771" [6e898da2-8da0-4a9c-831c-1c629126e06f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0416 00:50:16.180584   57054 system_pods.go:61] "kube-controller-manager-pause-214771" [f490fc56-8bc1-43ef-9377-5a53e0a44107] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0416 00:50:16.180594   57054 system_pods.go:61] "kube-proxy-6g5dt" [0bbf4e46-a886-4f4b-a263-a79fe5b846d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0416 00:50:16.180601   57054 system_pods.go:61] "kube-scheduler-pause-214771" [d65a8276-c650-482c-8636-176627f1a7b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0416 00:50:16.180610   57054 system_pods.go:74] duration metric: took 12.774116ms to wait for pod list to return data ...
	I0416 00:50:16.180622   57054 node_conditions.go:102] verifying NodePressure condition ...
	I0416 00:50:16.185293   57054 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 00:50:16.185324   57054 node_conditions.go:123] node cpu capacity is 2
	I0416 00:50:16.185337   57054 node_conditions.go:105] duration metric: took 4.709555ms to run NodePressure ...
	I0416 00:50:16.185357   57054 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 00:50:16.491413   57054 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0416 00:50:16.497403   57054 kubeadm.go:733] kubelet initialised
	I0416 00:50:16.497430   57054 kubeadm.go:734] duration metric: took 5.993774ms waiting for restarted kubelet to initialise ...
	I0416 00:50:16.497441   57054 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 00:50:16.504117   57054 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-7rt2n" in "kube-system" namespace to be "Ready" ...
	I0416 00:50:18.515240   57054 pod_ready.go:102] pod "coredns-76f75df574-7rt2n" in "kube-system" namespace has status "Ready":"False"
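
The pod_ready.go lines above wait up to 4m0s for each system-critical pod to report Ready. A rough client-go sketch of that check follows; it is illustrative only, the kubeconfig path is an assumption, and the pod name is taken from this run.

// pod_ready_sketch.go - illustrative only; not minikube's pod_ready.go.
// Polls a kube-system pod until its PodReady condition is True.
// Requires the k8s.io/client-go module.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	// Kubeconfig path is an assumption for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute) // matches the 4m0s wait in the log
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-76f75df574-7rt2n", metav1.GetOptions{})
		if err == nil && isReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}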
	I0416 00:50:14.817627   57518 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0416 00:50:14.817754   57518 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:50:14.817785   57518 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:50:14.832633   57518 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37955
	I0416 00:50:14.833110   57518 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:50:14.833640   57518 main.go:141] libmachine: Using API Version  1
	I0416 00:50:14.833678   57518 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:50:14.834081   57518 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:50:14.834342   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetMachineName
	I0416 00:50:14.834529   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	I0416 00:50:14.834720   57518 start.go:159] libmachine.API.Create for "old-k8s-version-800769" (driver="kvm2")
	I0416 00:50:14.834748   57518 client.go:168] LocalClient.Create starting
	I0416 00:50:14.834785   57518 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem
	I0416 00:50:14.834826   57518 main.go:141] libmachine: Decoding PEM data...
	I0416 00:50:14.834847   57518 main.go:141] libmachine: Parsing certificate...
	I0416 00:50:14.834929   57518 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem
	I0416 00:50:14.834956   57518 main.go:141] libmachine: Decoding PEM data...
	I0416 00:50:14.834982   57518 main.go:141] libmachine: Parsing certificate...
	I0416 00:50:14.835007   57518 main.go:141] libmachine: Running pre-create checks...
	I0416 00:50:14.835024   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .PreCreateCheck
	I0416 00:50:14.835568   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetConfigRaw
	I0416 00:50:14.836019   57518 main.go:141] libmachine: Creating machine...
	I0416 00:50:14.836038   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .Create
	I0416 00:50:14.836223   57518 main.go:141] libmachine: (old-k8s-version-800769) Creating KVM machine...
	I0416 00:50:14.837473   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | found existing default KVM network
	I0416 00:50:14.838719   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:50:14.838562   57541 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:f6:f2:02} reservation:<nil>}
	I0416 00:50:14.839456   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:50:14.839364   57541 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:8c:05:70} reservation:<nil>}
	I0416 00:50:14.840234   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:50:14.840143   57541 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:a2:7e:7e} reservation:<nil>}
	I0416 00:50:14.842389   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:50:14.842255   57541 network.go:209] skipping subnet 192.168.72.0/24 that is reserved: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0416 00:50:14.843540   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:50:14.843452   57541 network.go:206] using free private subnet 192.168.83.0/24: &{IP:192.168.83.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.83.0/24 Gateway:192.168.83.1 ClientMin:192.168.83.2 ClientMax:192.168.83.254 Broadcast:192.168.83.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000112c30}
	I0416 00:50:14.843594   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | created network xml: 
	I0416 00:50:14.843622   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | <network>
	I0416 00:50:14.843637   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG |   <name>mk-old-k8s-version-800769</name>
	I0416 00:50:14.843660   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG |   <dns enable='no'/>
	I0416 00:50:14.843687   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG |   
	I0416 00:50:14.843711   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG |   <ip address='192.168.83.1' netmask='255.255.255.0'>
	I0416 00:50:14.843724   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG |     <dhcp>
	I0416 00:50:14.843737   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG |       <range start='192.168.83.2' end='192.168.83.253'/>
	I0416 00:50:14.843761   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG |     </dhcp>
	I0416 00:50:14.843770   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG |   </ip>
	I0416 00:50:14.843779   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG |   
	I0416 00:50:14.843794   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | </network>
	I0416 00:50:14.843823   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | 
	I0416 00:50:14.848837   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | trying to create private KVM network mk-old-k8s-version-800769 192.168.83.0/24...
	I0416 00:50:14.915063   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | private KVM network mk-old-k8s-version-800769 192.168.83.0/24 created
	I0416 00:50:14.915099   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:50:14.915037   57541 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18647-7542/.minikube
	I0416 00:50:14.915112   57518 main.go:141] libmachine: (old-k8s-version-800769) Setting up store path in /home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769 ...
	I0416 00:50:14.915129   57518 main.go:141] libmachine: (old-k8s-version-800769) Building disk image from file:///home/jenkins/minikube-integration/18647-7542/.minikube/cache/iso/amd64/minikube-v1.33.0-1713175573-18634-amd64.iso
	I0416 00:50:14.915188   57518 main.go:141] libmachine: (old-k8s-version-800769) Downloading /home/jenkins/minikube-integration/18647-7542/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18647-7542/.minikube/cache/iso/amd64/minikube-v1.33.0-1713175573-18634-amd64.iso...
	I0416 00:50:15.135863   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:50:15.135715   57541 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769/id_rsa...
	I0416 00:50:15.308682   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:50:15.308573   57541 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769/old-k8s-version-800769.rawdisk...
	I0416 00:50:15.308709   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | Writing magic tar header
	I0416 00:50:15.308722   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | Writing SSH key tar header
	I0416 00:50:15.308730   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:50:15.308688   57541 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769 ...
	I0416 00:50:15.308812   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769
	I0416 00:50:15.308832   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18647-7542/.minikube/machines
	I0416 00:50:15.308841   57518 main.go:141] libmachine: (old-k8s-version-800769) Setting executable bit set on /home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769 (perms=drwx------)
	I0416 00:50:15.308850   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18647-7542/.minikube
	I0416 00:50:15.308860   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18647-7542
	I0416 00:50:15.308868   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0416 00:50:15.308877   57518 main.go:141] libmachine: (old-k8s-version-800769) Setting executable bit set on /home/jenkins/minikube-integration/18647-7542/.minikube/machines (perms=drwxr-xr-x)
	I0416 00:50:15.308888   57518 main.go:141] libmachine: (old-k8s-version-800769) Setting executable bit set on /home/jenkins/minikube-integration/18647-7542/.minikube (perms=drwxr-xr-x)
	I0416 00:50:15.308897   57518 main.go:141] libmachine: (old-k8s-version-800769) Setting executable bit set on /home/jenkins/minikube-integration/18647-7542 (perms=drwxrwxr-x)
	I0416 00:50:15.308909   57518 main.go:141] libmachine: (old-k8s-version-800769) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0416 00:50:15.308916   57518 main.go:141] libmachine: (old-k8s-version-800769) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0416 00:50:15.308922   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | Checking permissions on dir: /home/jenkins
	I0416 00:50:15.308930   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | Checking permissions on dir: /home
	I0416 00:50:15.308936   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | Skipping /home - not owner
	I0416 00:50:15.308945   57518 main.go:141] libmachine: (old-k8s-version-800769) Creating domain...
	I0416 00:50:15.310097   57518 main.go:141] libmachine: (old-k8s-version-800769) define libvirt domain using xml: 
	I0416 00:50:15.310116   57518 main.go:141] libmachine: (old-k8s-version-800769) <domain type='kvm'>
	I0416 00:50:15.310124   57518 main.go:141] libmachine: (old-k8s-version-800769)   <name>old-k8s-version-800769</name>
	I0416 00:50:15.310129   57518 main.go:141] libmachine: (old-k8s-version-800769)   <memory unit='MiB'>2200</memory>
	I0416 00:50:15.310135   57518 main.go:141] libmachine: (old-k8s-version-800769)   <vcpu>2</vcpu>
	I0416 00:50:15.310155   57518 main.go:141] libmachine: (old-k8s-version-800769)   <features>
	I0416 00:50:15.310169   57518 main.go:141] libmachine: (old-k8s-version-800769)     <acpi/>
	I0416 00:50:15.310176   57518 main.go:141] libmachine: (old-k8s-version-800769)     <apic/>
	I0416 00:50:15.310183   57518 main.go:141] libmachine: (old-k8s-version-800769)     <pae/>
	I0416 00:50:15.310188   57518 main.go:141] libmachine: (old-k8s-version-800769)     
	I0416 00:50:15.310194   57518 main.go:141] libmachine: (old-k8s-version-800769)   </features>
	I0416 00:50:15.310202   57518 main.go:141] libmachine: (old-k8s-version-800769)   <cpu mode='host-passthrough'>
	I0416 00:50:15.310207   57518 main.go:141] libmachine: (old-k8s-version-800769)   
	I0416 00:50:15.310215   57518 main.go:141] libmachine: (old-k8s-version-800769)   </cpu>
	I0416 00:50:15.310236   57518 main.go:141] libmachine: (old-k8s-version-800769)   <os>
	I0416 00:50:15.310256   57518 main.go:141] libmachine: (old-k8s-version-800769)     <type>hvm</type>
	I0416 00:50:15.310263   57518 main.go:141] libmachine: (old-k8s-version-800769)     <boot dev='cdrom'/>
	I0416 00:50:15.310269   57518 main.go:141] libmachine: (old-k8s-version-800769)     <boot dev='hd'/>
	I0416 00:50:15.310278   57518 main.go:141] libmachine: (old-k8s-version-800769)     <bootmenu enable='no'/>
	I0416 00:50:15.310283   57518 main.go:141] libmachine: (old-k8s-version-800769)   </os>
	I0416 00:50:15.310291   57518 main.go:141] libmachine: (old-k8s-version-800769)   <devices>
	I0416 00:50:15.310297   57518 main.go:141] libmachine: (old-k8s-version-800769)     <disk type='file' device='cdrom'>
	I0416 00:50:15.310308   57518 main.go:141] libmachine: (old-k8s-version-800769)       <source file='/home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769/boot2docker.iso'/>
	I0416 00:50:15.310313   57518 main.go:141] libmachine: (old-k8s-version-800769)       <target dev='hdc' bus='scsi'/>
	I0416 00:50:15.310319   57518 main.go:141] libmachine: (old-k8s-version-800769)       <readonly/>
	I0416 00:50:15.310327   57518 main.go:141] libmachine: (old-k8s-version-800769)     </disk>
	I0416 00:50:15.310337   57518 main.go:141] libmachine: (old-k8s-version-800769)     <disk type='file' device='disk'>
	I0416 00:50:15.310351   57518 main.go:141] libmachine: (old-k8s-version-800769)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0416 00:50:15.310368   57518 main.go:141] libmachine: (old-k8s-version-800769)       <source file='/home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769/old-k8s-version-800769.rawdisk'/>
	I0416 00:50:15.310380   57518 main.go:141] libmachine: (old-k8s-version-800769)       <target dev='hda' bus='virtio'/>
	I0416 00:50:15.310389   57518 main.go:141] libmachine: (old-k8s-version-800769)     </disk>
	I0416 00:50:15.310401   57518 main.go:141] libmachine: (old-k8s-version-800769)     <interface type='network'>
	I0416 00:50:15.310415   57518 main.go:141] libmachine: (old-k8s-version-800769)       <source network='mk-old-k8s-version-800769'/>
	I0416 00:50:15.310424   57518 main.go:141] libmachine: (old-k8s-version-800769)       <model type='virtio'/>
	I0416 00:50:15.310456   57518 main.go:141] libmachine: (old-k8s-version-800769)     </interface>
	I0416 00:50:15.310476   57518 main.go:141] libmachine: (old-k8s-version-800769)     <interface type='network'>
	I0416 00:50:15.310488   57518 main.go:141] libmachine: (old-k8s-version-800769)       <source network='default'/>
	I0416 00:50:15.310500   57518 main.go:141] libmachine: (old-k8s-version-800769)       <model type='virtio'/>
	I0416 00:50:15.310511   57518 main.go:141] libmachine: (old-k8s-version-800769)     </interface>
	I0416 00:50:15.310527   57518 main.go:141] libmachine: (old-k8s-version-800769)     <serial type='pty'>
	I0416 00:50:15.310541   57518 main.go:141] libmachine: (old-k8s-version-800769)       <target port='0'/>
	I0416 00:50:15.310553   57518 main.go:141] libmachine: (old-k8s-version-800769)     </serial>
	I0416 00:50:15.310567   57518 main.go:141] libmachine: (old-k8s-version-800769)     <console type='pty'>
	I0416 00:50:15.310579   57518 main.go:141] libmachine: (old-k8s-version-800769)       <target type='serial' port='0'/>
	I0416 00:50:15.310592   57518 main.go:141] libmachine: (old-k8s-version-800769)     </console>
	I0416 00:50:15.310608   57518 main.go:141] libmachine: (old-k8s-version-800769)     <rng model='virtio'>
	I0416 00:50:15.310624   57518 main.go:141] libmachine: (old-k8s-version-800769)       <backend model='random'>/dev/random</backend>
	I0416 00:50:15.310636   57518 main.go:141] libmachine: (old-k8s-version-800769)     </rng>
	I0416 00:50:15.310649   57518 main.go:141] libmachine: (old-k8s-version-800769)     
	I0416 00:50:15.310660   57518 main.go:141] libmachine: (old-k8s-version-800769)     
	I0416 00:50:15.310670   57518 main.go:141] libmachine: (old-k8s-version-800769)   </devices>
	I0416 00:50:15.310685   57518 main.go:141] libmachine: (old-k8s-version-800769) </domain>
	I0416 00:50:15.310699   57518 main.go:141] libmachine: (old-k8s-version-800769) 
	I0416 00:50:15.314754   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a0:db:07 in network default
	I0416 00:50:15.315363   57518 main.go:141] libmachine: (old-k8s-version-800769) Ensuring networks are active...
	I0416 00:50:15.315390   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:50:15.316041   57518 main.go:141] libmachine: (old-k8s-version-800769) Ensuring network default is active
	I0416 00:50:15.316433   57518 main.go:141] libmachine: (old-k8s-version-800769) Ensuring network mk-old-k8s-version-800769 is active
	I0416 00:50:15.317088   57518 main.go:141] libmachine: (old-k8s-version-800769) Getting domain xml...
	I0416 00:50:15.317857   57518 main.go:141] libmachine: (old-k8s-version-800769) Creating domain...
	I0416 00:50:16.617402   57518 main.go:141] libmachine: (old-k8s-version-800769) Waiting to get IP...
	I0416 00:50:16.618172   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:50:16.618730   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:50:16.618788   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:50:16.618717   57541 retry.go:31] will retry after 245.38194ms: waiting for machine to come up
	I0416 00:50:16.866167   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:50:16.866699   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:50:16.866726   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:50:16.866659   57541 retry.go:31] will retry after 276.679462ms: waiting for machine to come up
	I0416 00:50:17.145220   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:50:17.145785   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:50:17.145817   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:50:17.145732   57541 retry.go:31] will retry after 462.312932ms: waiting for machine to come up
	I0416 00:50:17.609346   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:50:17.609922   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:50:17.609951   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:50:17.609880   57541 retry.go:31] will retry after 444.438479ms: waiting for machine to come up
	I0416 00:50:18.055580   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:50:18.056026   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:50:18.056054   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:50:18.056000   57541 retry.go:31] will retry after 651.351774ms: waiting for machine to come up
	I0416 00:50:18.708903   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:50:18.709388   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:50:18.709421   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:50:18.709351   57541 retry.go:31] will retry after 639.944351ms: waiting for machine to come up
	I0416 00:50:19.350987   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:50:19.351503   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:50:19.351529   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:50:19.351457   57541 retry.go:31] will retry after 715.915795ms: waiting for machine to come up
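
The retry.go lines above poll the libvirt DHCP leases with a short, randomized backoff until the new domain reports an IP address. A stdlib-only Go sketch of that pattern (illustrative; lookupIP is a placeholder, not the real libvirt query):

// retry_sketch.go - illustrative only; mirrors the "will retry after ..." lines
// above, which back off while waiting for the new VM to obtain a DHCP lease on
// the mk-old-k8s-version-800769 network.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for the real libvirt DHCP-lease query; it is a placeholder.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address of domain")
}

func waitForIP(maxAttempts int) (string, error) {
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		ip, err := lookupIP()
		if err == nil {
			return ip, nil
		}
		// Randomized, roughly increasing delay, as in the retry.go lines above.
		delay := time.Duration(200+rand.Intn(300*attempt)) * time.Millisecond
		fmt.Printf("attempt %d: %v; will retry after %s\n", attempt, err, delay)
		time.Sleep(delay)
	}
	return "", errors.New("machine never reported an IP address")
}

func main() {
	ip, err := waitForIP(10)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("machine IP:", ip)
}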
	I0416 00:50:17.364326   57159 main.go:141] libmachine: (kubernetes-upgrade-497059) Calling .GetIP
	I0416 00:50:17.367399   57159 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | domain kubernetes-upgrade-497059 has defined MAC address 52:54:00:6b:72:5a in network mk-kubernetes-upgrade-497059
	I0416 00:50:17.367843   57159 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:72:5a", ip: ""} in network mk-kubernetes-upgrade-497059: {Iface:virbr2 ExpiryTime:2024-04-16 01:44:44 +0000 UTC Type:0 Mac:52:54:00:6b:72:5a Iaid: IPaddr:192.168.50.223 Prefix:24 Hostname:kubernetes-upgrade-497059 Clientid:01:52:54:00:6b:72:5a}
	I0416 00:50:17.367874   57159 main.go:141] libmachine: (kubernetes-upgrade-497059) DBG | domain kubernetes-upgrade-497059 has defined IP address 192.168.50.223 and MAC address 52:54:00:6b:72:5a in network mk-kubernetes-upgrade-497059
	I0416 00:50:17.368120   57159 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0416 00:50:17.372872   57159 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-497059 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.0-rc.2 ClusterName:kubernetes-upgrade-497059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.223 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 00:50:17.372990   57159 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime crio
	I0416 00:50:17.373055   57159 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 00:50:17.424226   57159 crio.go:514] all images are preloaded for cri-o runtime.
	I0416 00:50:17.424254   57159 crio.go:433] Images already preloaded, skipping extraction
	I0416 00:50:17.424320   57159 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 00:50:17.464997   57159 crio.go:514] all images are preloaded for cri-o runtime.
	I0416 00:50:17.465043   57159 cache_images.go:84] Images are preloaded, skipping loading
	I0416 00:50:17.465053   57159 kubeadm.go:928] updating node { 192.168.50.223 8443 v1.30.0-rc.2 crio true true} ...
	I0416 00:50:17.465209   57159 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-497059 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.223
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-rc.2 ClusterName:kubernetes-upgrade-497059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 00:50:17.465299   57159 ssh_runner.go:195] Run: crio config
	I0416 00:50:17.524366   57159 cni.go:84] Creating CNI manager for ""
	I0416 00:50:17.524390   57159 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 00:50:17.524403   57159 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 00:50:17.524446   57159 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.223 APIServerPort:8443 KubernetesVersion:v1.30.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-497059 NodeName:kubernetes-upgrade-497059 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.223"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.223 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cert
s/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 00:50:17.524659   57159 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.223
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-497059"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.223
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.223"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0416 00:50:17.524750   57159 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-rc.2
	I0416 00:50:17.540000   57159 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 00:50:17.540097   57159 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0416 00:50:17.554366   57159 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (330 bytes)
	I0416 00:50:17.576160   57159 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0416 00:50:17.598286   57159 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
	I0416 00:50:17.620313   57159 ssh_runner.go:195] Run: grep 192.168.50.223	control-plane.minikube.internal$ /etc/hosts
	I0416 00:50:17.624606   57159 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 00:50:17.776015   57159 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 00:50:17.795429   57159 certs.go:68] Setting up /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/kubernetes-upgrade-497059 for IP: 192.168.50.223
	I0416 00:50:17.795455   57159 certs.go:194] generating shared ca certs ...
	I0416 00:50:17.795475   57159 certs.go:226] acquiring lock for ca certs: {Name:mkcfa1570e683d94647c63485e1bbb8cf0788316 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 00:50:17.795671   57159 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key
	I0416 00:50:17.795709   57159 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key
	I0416 00:50:17.795719   57159 certs.go:256] generating profile certs ...
	I0416 00:50:17.795801   57159 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/kubernetes-upgrade-497059/client.key
	I0416 00:50:17.795843   57159 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/kubernetes-upgrade-497059/apiserver.key.0da69b7a
	I0416 00:50:17.795888   57159 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/kubernetes-upgrade-497059/proxy-client.key
	I0416 00:50:17.796008   57159 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem (1338 bytes)
	W0416 00:50:17.796057   57159 certs.go:480] ignoring /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897_empty.pem, impossibly tiny 0 bytes
	I0416 00:50:17.796073   57159 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem (1679 bytes)
	I0416 00:50:17.796102   57159 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem (1082 bytes)
	I0416 00:50:17.796133   57159 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem (1123 bytes)
	I0416 00:50:17.796169   57159 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem (1675 bytes)
	I0416 00:50:17.796226   57159 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem (1708 bytes)
	I0416 00:50:17.796884   57159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 00:50:17.825347   57159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 00:50:17.856292   57159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 00:50:17.884544   57159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0416 00:50:17.916735   57159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/kubernetes-upgrade-497059/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0416 00:50:17.943963   57159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/kubernetes-upgrade-497059/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0416 00:50:17.971171   57159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/kubernetes-upgrade-497059/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 00:50:17.998586   57159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/kubernetes-upgrade-497059/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0416 00:50:18.028539   57159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /usr/share/ca-certificates/148972.pem (1708 bytes)
	I0416 00:50:18.062215   57159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 00:50:18.092545   57159 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem --> /usr/share/ca-certificates/14897.pem (1338 bytes)
	I0416 00:50:18.122545   57159 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 00:50:18.142001   57159 ssh_runner.go:195] Run: openssl version
	I0416 00:50:18.148943   57159 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 00:50:18.162150   57159 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 00:50:18.167272   57159 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0416 00:50:18.167335   57159 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 00:50:18.175719   57159 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 00:50:18.190325   57159 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14897.pem && ln -fs /usr/share/ca-certificates/14897.pem /etc/ssl/certs/14897.pem"
	I0416 00:50:18.202432   57159 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14897.pem
	I0416 00:50:18.209217   57159 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 23:49 /usr/share/ca-certificates/14897.pem
	I0416 00:50:18.209277   57159 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14897.pem
	I0416 00:50:18.215749   57159 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14897.pem /etc/ssl/certs/51391683.0"
	I0416 00:50:18.226921   57159 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148972.pem && ln -fs /usr/share/ca-certificates/148972.pem /etc/ssl/certs/148972.pem"
	I0416 00:50:18.240728   57159 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148972.pem
	I0416 00:50:18.245917   57159 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 23:49 /usr/share/ca-certificates/148972.pem
	I0416 00:50:18.245999   57159 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148972.pem
	I0416 00:50:18.252673   57159 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148972.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 00:50:18.266327   57159 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 00:50:18.272604   57159 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0416 00:50:18.280842   57159 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0416 00:50:18.286960   57159 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0416 00:50:18.293139   57159 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0416 00:50:18.299204   57159 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0416 00:50:18.305511   57159 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
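
The openssl x509 -checkend 86400 calls above verify that each control-plane certificate is still valid for at least another 24 hours. An equivalent check in Go using crypto/x509 (a sketch; the certificate path is taken from the log and the 24h window mirrors -checkend 86400):

// certcheck_sketch.go - illustrative only; does roughly what
// `openssl x509 -noout -in <cert> -checkend 86400` does:
// report whether the certificate expires within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM data found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Path is taken from the log above; adjust for your own cluster.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	if soon {
		fmt.Println("certificate will expire within 24h")
	} else {
		fmt.Println("certificate is valid for at least another 24h")
	}
}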
	I0416 00:50:18.311710   57159 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-497059 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.30.0-rc.2 ClusterName:kubernetes-upgrade-497059 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.223 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 00:50:18.311791   57159 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0416 00:50:18.311848   57159 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 00:50:18.357696   57159 cri.go:89] found id: "60fd8d3fdd22be343ea28083f999cf7e70c47249ed2d6e6a7745824eebfb904d"
	I0416 00:50:18.357716   57159 cri.go:89] found id: "9031b12c4dca2e17bac84072b619f64cae9ba5bac739160f9b815ef05e053dd1"
	I0416 00:50:18.357722   57159 cri.go:89] found id: "57ad3827fbc46cc40a4ba8d073424ffe035e6c2c54d9f49b5461eb9dfd7bc887"
	I0416 00:50:18.357729   57159 cri.go:89] found id: "88897e3aafa04eb039597f9a8f933a6d0dc6582fd274160637cd9ef34a2ee125"
	I0416 00:50:18.357734   57159 cri.go:89] found id: "460f403a1726d5daef35f87c148076d38bd2fcb0233873128787dcae4a86b99a"
	I0416 00:50:18.357738   57159 cri.go:89] found id: "6f520c4c350e9f56c70c43314ebcba34563bafa0a8b0b030746ce453bdfc337d"
	I0416 00:50:18.357742   57159 cri.go:89] found id: "46c0226492e7eb8142e4645002e5b127b7a46bb8431802767af86790298ecdf2"
	I0416 00:50:18.357745   57159 cri.go:89] found id: "b1e03c16f2cb388bd4b3ccb1cc7631c9f62b0f8e1fd63793706b247a3f1a7e52"
	I0416 00:50:18.357749   57159 cri.go:89] found id: ""
	I0416 00:50:18.357800   57159 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Apr 16 00:50:28 kubernetes-upgrade-497059 crio[2260]: time="2024-04-16 00:50:28.991124359Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713228628991098190,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124378,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=864a35e2-424f-4e6b-9804-4c47b70f5c9a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 00:50:28 kubernetes-upgrade-497059 crio[2260]: time="2024-04-16 00:50:28.991853752Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4c0e976b-4fac-443e-8a98-087ee100da43 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:50:28 kubernetes-upgrade-497059 crio[2260]: time="2024-04-16 00:50:28.991938350Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4c0e976b-4fac-443e-8a98-087ee100da43 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:50:28 kubernetes-upgrade-497059 crio[2260]: time="2024-04-16 00:50:28.992509543Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d0bc87c90ab088fd9160e3be5285dfbad9254359e42e41254309789a37b7f4fe,PodSandboxId:36d201ce25db6cf474f6fb528e2ab4bad4797117e84fe4b1ce22c16db532ad38,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713228626317841857,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cthhq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75b41d80-ded7-4d88-bdab-04ff2cd4f4f1,},Annotations:map[string]string{io.kubernetes.container.hash: c91520ab,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:626d5e0646ed22bb41464f4c2c4c9bd851852d7d4bbde32d674129c90b4eb8aa,PodSandboxId:2b26d7fe01f1f59187792b3fc3a7cb10a727210e3670588cd44594281ac62762,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713228626323707364,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cq9m7,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: b2a3f73c-541d-47cc-93f9-9a5830fc41b5,},Annotations:map[string]string{io.kubernetes.container.hash: ccf94f13,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552faf07d248424277fbc2b83093318635170698bfacf769b3ec2152508bbcdd,PodSandboxId:15f9ee2aec82da5f3f5357714e315006dc9a77f39d71a2cab845e5602f64b604,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAIN
ER_RUNNING,CreatedAt:1713228625813774340,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rkc8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d46a68d-3a90-4dd9-8a56-1ad2704d2727,},Annotations:map[string]string{io.kubernetes.container.hash: 18574667,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac69e482d6c1abea0801a6808c5fe41126cab364f1e41ae18a06b9d12a2e4c07,PodSandboxId:26afa0c121add30d4f289ae6365b69a49e521847768accf8e2bf67a9af650b64,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:171
3228625691161103,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c3c90cf-ec34-4097-a603-f312a5624496,},Annotations:map[string]string{io.kubernetes.container.hash: dea44047,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91d42e2a6395bab22756528ab35c855d64027e69450f895f4761686034376fb1,PodSandboxId:0f21d77108b4a06fb8e99953d7ee5668fa76c2232d378cd460c83e69dee48471,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713228620883635448,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-497059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2829e4d0698ea0053187057bd461daf5,},Annotations:map[string]string{io.kubernetes.container.hash: 80ecc69d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00d3f1e024b69678d201f5e5cfff2c00341de33dee9fc65b4cc05586dadef0c5,PodSandboxId:0f035f362baa607660ce92c1bbf672f48927b10c33ff059f7a7625c0c37c115b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_RUNNING,CreatedAt:1713228620927980177,Labels:map[string]string{io.kuberne
tes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-497059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a302621bee944cdf682c38dd80eae459,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d3d722ab3edb87f6404c922ba91670e54cb76c12b0b73c44d618d6391ea32d3,PodSandboxId:a8e9f1a6a9c5bcc76d669bded8b8e8a8f044d9c1a59d626548c63422c9fa6f51,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_RUNNING,CreatedAt:1713228620858631922,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-497059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c2822c3050d506f0c6e5ae555becfb6,},Annotations:map[string]string{io.kubernetes.container.hash: 5d71ee9b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdd0d20b792df5e0497739aba62e186402cd11d32c0a2988ed24b1f2ace27a3c,PodSandboxId:3ee15b5371588bd3f8b73ada9d94daaf1046c4e512d0076d8c39b20ee674f83e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_RUNNING,CreatedAt:1713228620809023993,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-497059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebc5732aab404fc1b4841930343b61cb,},Annotations:map[string]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60fd8d3fdd22be343ea28083f999cf7e70c47249ed2d6e6a7745824eebfb904d,PodSandboxId:33ac0a4e4541318ea9b7e28e9fcec07ccfa2234c856573ddf477c97e866038d5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_EXITED,CreatedAt:1713228598600470541,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rkc8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d46a68d-3a90-4dd9-8a56-1ad2704d2727,},Annotations:map[string]string{io.kubernetes.container.hash: 18574667,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9031b12c4dca2e17bac84072b619f64cae9ba5bac739160f9b815ef05e053dd1,PodSandboxId:f8c37f29927d25c0fd2e57a94a9952b3d89b034bfb5c7ac10698f33f5f55f464,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713228597796570208,Labels:map[string]string{io.kubernetes.container.name: coredns,io
.kubernetes.pod.name: coredns-7db6d8ff4d-cq9m7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2a3f73c-541d-47cc-93f9-9a5830fc41b5,},Annotations:map[string]string{io.kubernetes.container.hash: ccf94f13,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57ad3827fbc46cc40a4ba8d073424ffe035e6c2c54d9f49b5461eb9dfd7bc887,PodSandboxId:3af7656054702796bf637ac98f4d7953d796708898d2937d47439bd09f4a11dd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713228597765595202,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cthhq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75b41d80-ded7-4d88-bdab-04ff2cd4f4f1,},Annotations:map[string]string{io.kubernetes.container.hash: c91520ab,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88897e3aafa04eb039597f9a8f933a6d0dc6582fd274160637cd9ef34a2ee125,PodSandboxId:0cdcdc1d26b5da95a6c5ce450d505c4f90f70cad421c465900349b72feeedcb2,Metadata:&Conta
inerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713228597294730177,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c3c90cf-ec34-4097-a603-f312a5624496,},Annotations:map[string]string{io.kubernetes.container.hash: dea44047,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:460f403a1726d5daef35f87c148076d38bd2fcb0233873128787dcae4a86b99a,PodSandboxId:fb80eaba242c22028e1e37c194b1b1685ae1db31196a30970245bc36d0dd3f09,Metadata:&ContainerMetadata{
Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_EXITED,CreatedAt:1713228577640369063,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-497059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebc5732aab404fc1b4841930343b61cb,},Annotations:map[string]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f520c4c350e9f56c70c43314ebcba34563bafa0a8b0b030746ce453bdfc337d,PodSandboxId:941128bf02a0cdcffd8fb3935eca6689e9329791b91f5d10364251ca0c6c05d3,Metadat
a:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713228577620792575,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-497059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2829e4d0698ea0053187057bd461daf5,},Annotations:map[string]string{io.kubernetes.container.hash: 80ecc69d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46c0226492e7eb8142e4645002e5b127b7a46bb8431802767af86790298ecdf2,PodSandboxId:be01324e551c4d4f88be32d4ce231ee5cb27e7ae7338d7ee83aada881e3f1bcd,Metadata:&ContainerMetadata{Name:kube-sched
uler,Attempt:0,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_EXITED,CreatedAt:1713228577598506836,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-497059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a302621bee944cdf682c38dd80eae459,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1e03c16f2cb388bd4b3ccb1cc7631c9f62b0f8e1fd63793706b247a3f1a7e52,PodSandboxId:d1beef626da111eae7d44ad7cc29fd986c5e5a5d60922be16e3fb8d812f99a0e,Metadata:&ContainerMetadata{Name:kube-apiserver,A
ttempt:0,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_EXITED,CreatedAt:1713228577541097437,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-497059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c2822c3050d506f0c6e5ae555becfb6,},Annotations:map[string]string{io.kubernetes.container.hash: 5d71ee9b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4c0e976b-4fac-443e-8a98-087ee100da43 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:50:29 kubernetes-upgrade-497059 crio[2260]: time="2024-04-16 00:50:29.052258397Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9943f8c1-3df7-46c4-a868-9ff2f594bc15 name=/runtime.v1.RuntimeService/Version
	Apr 16 00:50:29 kubernetes-upgrade-497059 crio[2260]: time="2024-04-16 00:50:29.052419679Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9943f8c1-3df7-46c4-a868-9ff2f594bc15 name=/runtime.v1.RuntimeService/Version
	Apr 16 00:50:29 kubernetes-upgrade-497059 crio[2260]: time="2024-04-16 00:50:29.058699902Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2e518d0b-99e5-4e8a-a61e-3eea83e2ec67 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 00:50:29 kubernetes-upgrade-497059 crio[2260]: time="2024-04-16 00:50:29.059142110Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713228629059111474,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124378,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2e518d0b-99e5-4e8a-a61e-3eea83e2ec67 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 00:50:29 kubernetes-upgrade-497059 crio[2260]: time="2024-04-16 00:50:29.060027828Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e1182818-a26d-4993-9397-366c5e8dee3e name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:50:29 kubernetes-upgrade-497059 crio[2260]: time="2024-04-16 00:50:29.060113913Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e1182818-a26d-4993-9397-366c5e8dee3e name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:50:29 kubernetes-upgrade-497059 crio[2260]: time="2024-04-16 00:50:29.060699432Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d0bc87c90ab088fd9160e3be5285dfbad9254359e42e41254309789a37b7f4fe,PodSandboxId:36d201ce25db6cf474f6fb528e2ab4bad4797117e84fe4b1ce22c16db532ad38,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713228626317841857,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cthhq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75b41d80-ded7-4d88-bdab-04ff2cd4f4f1,},Annotations:map[string]string{io.kubernetes.container.hash: c91520ab,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:626d5e0646ed22bb41464f4c2c4c9bd851852d7d4bbde32d674129c90b4eb8aa,PodSandboxId:2b26d7fe01f1f59187792b3fc3a7cb10a727210e3670588cd44594281ac62762,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713228626323707364,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cq9m7,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: b2a3f73c-541d-47cc-93f9-9a5830fc41b5,},Annotations:map[string]string{io.kubernetes.container.hash: ccf94f13,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552faf07d248424277fbc2b83093318635170698bfacf769b3ec2152508bbcdd,PodSandboxId:15f9ee2aec82da5f3f5357714e315006dc9a77f39d71a2cab845e5602f64b604,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAIN
ER_RUNNING,CreatedAt:1713228625813774340,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rkc8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d46a68d-3a90-4dd9-8a56-1ad2704d2727,},Annotations:map[string]string{io.kubernetes.container.hash: 18574667,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac69e482d6c1abea0801a6808c5fe41126cab364f1e41ae18a06b9d12a2e4c07,PodSandboxId:26afa0c121add30d4f289ae6365b69a49e521847768accf8e2bf67a9af650b64,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:171
3228625691161103,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c3c90cf-ec34-4097-a603-f312a5624496,},Annotations:map[string]string{io.kubernetes.container.hash: dea44047,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91d42e2a6395bab22756528ab35c855d64027e69450f895f4761686034376fb1,PodSandboxId:0f21d77108b4a06fb8e99953d7ee5668fa76c2232d378cd460c83e69dee48471,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713228620883635448,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-497059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2829e4d0698ea0053187057bd461daf5,},Annotations:map[string]string{io.kubernetes.container.hash: 80ecc69d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00d3f1e024b69678d201f5e5cfff2c00341de33dee9fc65b4cc05586dadef0c5,PodSandboxId:0f035f362baa607660ce92c1bbf672f48927b10c33ff059f7a7625c0c37c115b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_RUNNING,CreatedAt:1713228620927980177,Labels:map[string]string{io.kuberne
tes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-497059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a302621bee944cdf682c38dd80eae459,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d3d722ab3edb87f6404c922ba91670e54cb76c12b0b73c44d618d6391ea32d3,PodSandboxId:a8e9f1a6a9c5bcc76d669bded8b8e8a8f044d9c1a59d626548c63422c9fa6f51,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_RUNNING,CreatedAt:1713228620858631922,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-497059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c2822c3050d506f0c6e5ae555becfb6,},Annotations:map[string]string{io.kubernetes.container.hash: 5d71ee9b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdd0d20b792df5e0497739aba62e186402cd11d32c0a2988ed24b1f2ace27a3c,PodSandboxId:3ee15b5371588bd3f8b73ada9d94daaf1046c4e512d0076d8c39b20ee674f83e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_RUNNING,CreatedAt:1713228620809023993,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-497059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebc5732aab404fc1b4841930343b61cb,},Annotations:map[string]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60fd8d3fdd22be343ea28083f999cf7e70c47249ed2d6e6a7745824eebfb904d,PodSandboxId:33ac0a4e4541318ea9b7e28e9fcec07ccfa2234c856573ddf477c97e866038d5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_EXITED,CreatedAt:1713228598600470541,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rkc8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d46a68d-3a90-4dd9-8a56-1ad2704d2727,},Annotations:map[string]string{io.kubernetes.container.hash: 18574667,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9031b12c4dca2e17bac84072b619f64cae9ba5bac739160f9b815ef05e053dd1,PodSandboxId:f8c37f29927d25c0fd2e57a94a9952b3d89b034bfb5c7ac10698f33f5f55f464,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713228597796570208,Labels:map[string]string{io.kubernetes.container.name: coredns,io
.kubernetes.pod.name: coredns-7db6d8ff4d-cq9m7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2a3f73c-541d-47cc-93f9-9a5830fc41b5,},Annotations:map[string]string{io.kubernetes.container.hash: ccf94f13,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57ad3827fbc46cc40a4ba8d073424ffe035e6c2c54d9f49b5461eb9dfd7bc887,PodSandboxId:3af7656054702796bf637ac98f4d7953d796708898d2937d47439bd09f4a11dd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713228597765595202,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cthhq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75b41d80-ded7-4d88-bdab-04ff2cd4f4f1,},Annotations:map[string]string{io.kubernetes.container.hash: c91520ab,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88897e3aafa04eb039597f9a8f933a6d0dc6582fd274160637cd9ef34a2ee125,PodSandboxId:0cdcdc1d26b5da95a6c5ce450d505c4f90f70cad421c465900349b72feeedcb2,Metadata:&Conta
inerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713228597294730177,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c3c90cf-ec34-4097-a603-f312a5624496,},Annotations:map[string]string{io.kubernetes.container.hash: dea44047,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:460f403a1726d5daef35f87c148076d38bd2fcb0233873128787dcae4a86b99a,PodSandboxId:fb80eaba242c22028e1e37c194b1b1685ae1db31196a30970245bc36d0dd3f09,Metadata:&ContainerMetadata{
Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_EXITED,CreatedAt:1713228577640369063,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-497059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebc5732aab404fc1b4841930343b61cb,},Annotations:map[string]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f520c4c350e9f56c70c43314ebcba34563bafa0a8b0b030746ce453bdfc337d,PodSandboxId:941128bf02a0cdcffd8fb3935eca6689e9329791b91f5d10364251ca0c6c05d3,Metadat
a:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713228577620792575,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-497059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2829e4d0698ea0053187057bd461daf5,},Annotations:map[string]string{io.kubernetes.container.hash: 80ecc69d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46c0226492e7eb8142e4645002e5b127b7a46bb8431802767af86790298ecdf2,PodSandboxId:be01324e551c4d4f88be32d4ce231ee5cb27e7ae7338d7ee83aada881e3f1bcd,Metadata:&ContainerMetadata{Name:kube-sched
uler,Attempt:0,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_EXITED,CreatedAt:1713228577598506836,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-497059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a302621bee944cdf682c38dd80eae459,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1e03c16f2cb388bd4b3ccb1cc7631c9f62b0f8e1fd63793706b247a3f1a7e52,PodSandboxId:d1beef626da111eae7d44ad7cc29fd986c5e5a5d60922be16e3fb8d812f99a0e,Metadata:&ContainerMetadata{Name:kube-apiserver,A
ttempt:0,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_EXITED,CreatedAt:1713228577541097437,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-497059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c2822c3050d506f0c6e5ae555becfb6,},Annotations:map[string]string{io.kubernetes.container.hash: 5d71ee9b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e1182818-a26d-4993-9397-366c5e8dee3e name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:50:29 kubernetes-upgrade-497059 crio[2260]: time="2024-04-16 00:50:29.106004462Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9611232e-aa8e-434f-9f48-3922e3c1497f name=/runtime.v1.RuntimeService/Version
	Apr 16 00:50:29 kubernetes-upgrade-497059 crio[2260]: time="2024-04-16 00:50:29.106108556Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9611232e-aa8e-434f-9f48-3922e3c1497f name=/runtime.v1.RuntimeService/Version
	Apr 16 00:50:29 kubernetes-upgrade-497059 crio[2260]: time="2024-04-16 00:50:29.113369758Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4c0e3b1b-9e02-48f7-97e3-3259858aed02 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 00:50:29 kubernetes-upgrade-497059 crio[2260]: time="2024-04-16 00:50:29.113999004Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713228629113972342,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124378,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4c0e3b1b-9e02-48f7-97e3-3259858aed02 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 00:50:29 kubernetes-upgrade-497059 crio[2260]: time="2024-04-16 00:50:29.114959616Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b5216288-4c45-4a13-a8aa-c3c761568151 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:50:29 kubernetes-upgrade-497059 crio[2260]: time="2024-04-16 00:50:29.115042267Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b5216288-4c45-4a13-a8aa-c3c761568151 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:50:29 kubernetes-upgrade-497059 crio[2260]: time="2024-04-16 00:50:29.115563345Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d0bc87c90ab088fd9160e3be5285dfbad9254359e42e41254309789a37b7f4fe,PodSandboxId:36d201ce25db6cf474f6fb528e2ab4bad4797117e84fe4b1ce22c16db532ad38,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713228626317841857,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cthhq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75b41d80-ded7-4d88-bdab-04ff2cd4f4f1,},Annotations:map[string]string{io.kubernetes.container.hash: c91520ab,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:626d5e0646ed22bb41464f4c2c4c9bd851852d7d4bbde32d674129c90b4eb8aa,PodSandboxId:2b26d7fe01f1f59187792b3fc3a7cb10a727210e3670588cd44594281ac62762,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713228626323707364,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cq9m7,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: b2a3f73c-541d-47cc-93f9-9a5830fc41b5,},Annotations:map[string]string{io.kubernetes.container.hash: ccf94f13,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552faf07d248424277fbc2b83093318635170698bfacf769b3ec2152508bbcdd,PodSandboxId:15f9ee2aec82da5f3f5357714e315006dc9a77f39d71a2cab845e5602f64b604,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAIN
ER_RUNNING,CreatedAt:1713228625813774340,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rkc8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d46a68d-3a90-4dd9-8a56-1ad2704d2727,},Annotations:map[string]string{io.kubernetes.container.hash: 18574667,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac69e482d6c1abea0801a6808c5fe41126cab364f1e41ae18a06b9d12a2e4c07,PodSandboxId:26afa0c121add30d4f289ae6365b69a49e521847768accf8e2bf67a9af650b64,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:171
3228625691161103,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c3c90cf-ec34-4097-a603-f312a5624496,},Annotations:map[string]string{io.kubernetes.container.hash: dea44047,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91d42e2a6395bab22756528ab35c855d64027e69450f895f4761686034376fb1,PodSandboxId:0f21d77108b4a06fb8e99953d7ee5668fa76c2232d378cd460c83e69dee48471,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713228620883635448,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-497059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2829e4d0698ea0053187057bd461daf5,},Annotations:map[string]string{io.kubernetes.container.hash: 80ecc69d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00d3f1e024b69678d201f5e5cfff2c00341de33dee9fc65b4cc05586dadef0c5,PodSandboxId:0f035f362baa607660ce92c1bbf672f48927b10c33ff059f7a7625c0c37c115b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_RUNNING,CreatedAt:1713228620927980177,Labels:map[string]string{io.kuberne
tes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-497059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a302621bee944cdf682c38dd80eae459,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d3d722ab3edb87f6404c922ba91670e54cb76c12b0b73c44d618d6391ea32d3,PodSandboxId:a8e9f1a6a9c5bcc76d669bded8b8e8a8f044d9c1a59d626548c63422c9fa6f51,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_RUNNING,CreatedAt:1713228620858631922,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-497059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c2822c3050d506f0c6e5ae555becfb6,},Annotations:map[string]string{io.kubernetes.container.hash: 5d71ee9b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdd0d20b792df5e0497739aba62e186402cd11d32c0a2988ed24b1f2ace27a3c,PodSandboxId:3ee15b5371588bd3f8b73ada9d94daaf1046c4e512d0076d8c39b20ee674f83e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_RUNNING,CreatedAt:1713228620809023993,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-497059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebc5732aab404fc1b4841930343b61cb,},Annotations:map[string]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60fd8d3fdd22be343ea28083f999cf7e70c47249ed2d6e6a7745824eebfb904d,PodSandboxId:33ac0a4e4541318ea9b7e28e9fcec07ccfa2234c856573ddf477c97e866038d5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_EXITED,CreatedAt:1713228598600470541,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rkc8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d46a68d-3a90-4dd9-8a56-1ad2704d2727,},Annotations:map[string]string{io.kubernetes.container.hash: 18574667,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9031b12c4dca2e17bac84072b619f64cae9ba5bac739160f9b815ef05e053dd1,PodSandboxId:f8c37f29927d25c0fd2e57a94a9952b3d89b034bfb5c7ac10698f33f5f55f464,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713228597796570208,Labels:map[string]string{io.kubernetes.container.name: coredns,io
.kubernetes.pod.name: coredns-7db6d8ff4d-cq9m7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2a3f73c-541d-47cc-93f9-9a5830fc41b5,},Annotations:map[string]string{io.kubernetes.container.hash: ccf94f13,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57ad3827fbc46cc40a4ba8d073424ffe035e6c2c54d9f49b5461eb9dfd7bc887,PodSandboxId:3af7656054702796bf637ac98f4d7953d796708898d2937d47439bd09f4a11dd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713228597765595202,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cthhq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75b41d80-ded7-4d88-bdab-04ff2cd4f4f1,},Annotations:map[string]string{io.kubernetes.container.hash: c91520ab,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88897e3aafa04eb039597f9a8f933a6d0dc6582fd274160637cd9ef34a2ee125,PodSandboxId:0cdcdc1d26b5da95a6c5ce450d505c4f90f70cad421c465900349b72feeedcb2,Metadata:&Conta
inerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713228597294730177,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c3c90cf-ec34-4097-a603-f312a5624496,},Annotations:map[string]string{io.kubernetes.container.hash: dea44047,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:460f403a1726d5daef35f87c148076d38bd2fcb0233873128787dcae4a86b99a,PodSandboxId:fb80eaba242c22028e1e37c194b1b1685ae1db31196a30970245bc36d0dd3f09,Metadata:&ContainerMetadata{
Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_EXITED,CreatedAt:1713228577640369063,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-497059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebc5732aab404fc1b4841930343b61cb,},Annotations:map[string]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f520c4c350e9f56c70c43314ebcba34563bafa0a8b0b030746ce453bdfc337d,PodSandboxId:941128bf02a0cdcffd8fb3935eca6689e9329791b91f5d10364251ca0c6c05d3,Metadat
a:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713228577620792575,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-497059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2829e4d0698ea0053187057bd461daf5,},Annotations:map[string]string{io.kubernetes.container.hash: 80ecc69d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46c0226492e7eb8142e4645002e5b127b7a46bb8431802767af86790298ecdf2,PodSandboxId:be01324e551c4d4f88be32d4ce231ee5cb27e7ae7338d7ee83aada881e3f1bcd,Metadata:&ContainerMetadata{Name:kube-sched
uler,Attempt:0,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_EXITED,CreatedAt:1713228577598506836,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-497059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a302621bee944cdf682c38dd80eae459,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1e03c16f2cb388bd4b3ccb1cc7631c9f62b0f8e1fd63793706b247a3f1a7e52,PodSandboxId:d1beef626da111eae7d44ad7cc29fd986c5e5a5d60922be16e3fb8d812f99a0e,Metadata:&ContainerMetadata{Name:kube-apiserver,A
ttempt:0,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_EXITED,CreatedAt:1713228577541097437,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-497059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c2822c3050d506f0c6e5ae555becfb6,},Annotations:map[string]string{io.kubernetes.container.hash: 5d71ee9b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b5216288-4c45-4a13-a8aa-c3c761568151 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:50:29 kubernetes-upgrade-497059 crio[2260]: time="2024-04-16 00:50:29.154191341Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ab1b083e-e5ba-47f7-bb75-4d5e5b87fcfc name=/runtime.v1.RuntimeService/Version
	Apr 16 00:50:29 kubernetes-upgrade-497059 crio[2260]: time="2024-04-16 00:50:29.154268371Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ab1b083e-e5ba-47f7-bb75-4d5e5b87fcfc name=/runtime.v1.RuntimeService/Version
	Apr 16 00:50:29 kubernetes-upgrade-497059 crio[2260]: time="2024-04-16 00:50:29.156328621Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a07782f7-d28c-403d-8691-eff7ec1cc539 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 00:50:29 kubernetes-upgrade-497059 crio[2260]: time="2024-04-16 00:50:29.156709005Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713228629156687098,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124378,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a07782f7-d28c-403d-8691-eff7ec1cc539 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 00:50:29 kubernetes-upgrade-497059 crio[2260]: time="2024-04-16 00:50:29.157389674Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=adbf0cf4-6562-41f0-8b73-2377a2746486 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:50:29 kubernetes-upgrade-497059 crio[2260]: time="2024-04-16 00:50:29.157473812Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=adbf0cf4-6562-41f0-8b73-2377a2746486 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 00:50:29 kubernetes-upgrade-497059 crio[2260]: time="2024-04-16 00:50:29.157858254Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d0bc87c90ab088fd9160e3be5285dfbad9254359e42e41254309789a37b7f4fe,PodSandboxId:36d201ce25db6cf474f6fb528e2ab4bad4797117e84fe4b1ce22c16db532ad38,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713228626317841857,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cthhq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75b41d80-ded7-4d88-bdab-04ff2cd4f4f1,},Annotations:map[string]string{io.kubernetes.container.hash: c91520ab,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:626d5e0646ed22bb41464f4c2c4c9bd851852d7d4bbde32d674129c90b4eb8aa,PodSandboxId:2b26d7fe01f1f59187792b3fc3a7cb10a727210e3670588cd44594281ac62762,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713228626323707364,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cq9m7,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: b2a3f73c-541d-47cc-93f9-9a5830fc41b5,},Annotations:map[string]string{io.kubernetes.container.hash: ccf94f13,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:552faf07d248424277fbc2b83093318635170698bfacf769b3ec2152508bbcdd,PodSandboxId:15f9ee2aec82da5f3f5357714e315006dc9a77f39d71a2cab845e5602f64b604,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAIN
ER_RUNNING,CreatedAt:1713228625813774340,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rkc8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d46a68d-3a90-4dd9-8a56-1ad2704d2727,},Annotations:map[string]string{io.kubernetes.container.hash: 18574667,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac69e482d6c1abea0801a6808c5fe41126cab364f1e41ae18a06b9d12a2e4c07,PodSandboxId:26afa0c121add30d4f289ae6365b69a49e521847768accf8e2bf67a9af650b64,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:171
3228625691161103,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c3c90cf-ec34-4097-a603-f312a5624496,},Annotations:map[string]string{io.kubernetes.container.hash: dea44047,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91d42e2a6395bab22756528ab35c855d64027e69450f895f4761686034376fb1,PodSandboxId:0f21d77108b4a06fb8e99953d7ee5668fa76c2232d378cd460c83e69dee48471,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713228620883635448,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-497059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2829e4d0698ea0053187057bd461daf5,},Annotations:map[string]string{io.kubernetes.container.hash: 80ecc69d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00d3f1e024b69678d201f5e5cfff2c00341de33dee9fc65b4cc05586dadef0c5,PodSandboxId:0f035f362baa607660ce92c1bbf672f48927b10c33ff059f7a7625c0c37c115b,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_RUNNING,CreatedAt:1713228620927980177,Labels:map[string]string{io.kuberne
tes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-497059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a302621bee944cdf682c38dd80eae459,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d3d722ab3edb87f6404c922ba91670e54cb76c12b0b73c44d618d6391ea32d3,PodSandboxId:a8e9f1a6a9c5bcc76d669bded8b8e8a8f044d9c1a59d626548c63422c9fa6f51,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_RUNNING,CreatedAt:1713228620858631922,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-497059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c2822c3050d506f0c6e5ae555becfb6,},Annotations:map[string]string{io.kubernetes.container.hash: 5d71ee9b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdd0d20b792df5e0497739aba62e186402cd11d32c0a2988ed24b1f2ace27a3c,PodSandboxId:3ee15b5371588bd3f8b73ada9d94daaf1046c4e512d0076d8c39b20ee674f83e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_RUNNING,CreatedAt:1713228620809023993,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-497059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebc5732aab404fc1b4841930343b61cb,},Annotations:map[string]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:60fd8d3fdd22be343ea28083f999cf7e70c47249ed2d6e6a7745824eebfb904d,PodSandboxId:33ac0a4e4541318ea9b7e28e9fcec07ccfa2234c856573ddf477c97e866038d5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_EXITED,CreatedAt:1713228598600470541,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rkc8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8d46a68d-3a90-4dd9-8a56-1ad2704d2727,},Annotations:map[string]string{io.kubernetes.container.hash: 18574667,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9031b12c4dca2e17bac84072b619f64cae9ba5bac739160f9b815ef05e053dd1,PodSandboxId:f8c37f29927d25c0fd2e57a94a9952b3d89b034bfb5c7ac10698f33f5f55f464,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713228597796570208,Labels:map[string]string{io.kubernetes.container.name: coredns,io
.kubernetes.pod.name: coredns-7db6d8ff4d-cq9m7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2a3f73c-541d-47cc-93f9-9a5830fc41b5,},Annotations:map[string]string{io.kubernetes.container.hash: ccf94f13,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57ad3827fbc46cc40a4ba8d073424ffe035e6c2c54d9f49b5461eb9dfd7bc887,PodSandboxId:3af7656054702796bf637ac98f4d7953d796708898d2937d47439bd09f4a11dd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,
},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1713228597765595202,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cthhq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 75b41d80-ded7-4d88-bdab-04ff2cd4f4f1,},Annotations:map[string]string{io.kubernetes.container.hash: c91520ab,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88897e3aafa04eb039597f9a8f933a6d0dc6582fd274160637cd9ef34a2ee125,PodSandboxId:0cdcdc1d26b5da95a6c5ce450d505c4f90f70cad421c465900349b72feeedcb2,Metadata:&Conta
inerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1713228597294730177,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c3c90cf-ec34-4097-a603-f312a5624496,},Annotations:map[string]string{io.kubernetes.container.hash: dea44047,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:460f403a1726d5daef35f87c148076d38bd2fcb0233873128787dcae4a86b99a,PodSandboxId:fb80eaba242c22028e1e37c194b1b1685ae1db31196a30970245bc36d0dd3f09,Metadata:&ContainerMetadata{
Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_EXITED,CreatedAt:1713228577640369063,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-497059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebc5732aab404fc1b4841930343b61cb,},Annotations:map[string]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f520c4c350e9f56c70c43314ebcba34563bafa0a8b0b030746ce453bdfc337d,PodSandboxId:941128bf02a0cdcffd8fb3935eca6689e9329791b91f5d10364251ca0c6c05d3,Metadat
a:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1713228577620792575,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-497059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2829e4d0698ea0053187057bd461daf5,},Annotations:map[string]string{io.kubernetes.container.hash: 80ecc69d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46c0226492e7eb8142e4645002e5b127b7a46bb8431802767af86790298ecdf2,PodSandboxId:be01324e551c4d4f88be32d4ce231ee5cb27e7ae7338d7ee83aada881e3f1bcd,Metadata:&ContainerMetadata{Name:kube-sched
uler,Attempt:0,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_EXITED,CreatedAt:1713228577598506836,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-497059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a302621bee944cdf682c38dd80eae459,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1e03c16f2cb388bd4b3ccb1cc7631c9f62b0f8e1fd63793706b247a3f1a7e52,PodSandboxId:d1beef626da111eae7d44ad7cc29fd986c5e5a5d60922be16e3fb8d812f99a0e,Metadata:&ContainerMetadata{Name:kube-apiserver,A
ttempt:0,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_EXITED,CreatedAt:1713228577541097437,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-497059,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c2822c3050d506f0c6e5ae555becfb6,},Annotations:map[string]string{io.kubernetes.container.hash: 5d71ee9b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=adbf0cf4-6562-41f0-8b73-2377a2746486 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	626d5e0646ed2       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   2 seconds ago       Running             coredns                   1                   2b26d7fe01f1f       coredns-7db6d8ff4d-cq9m7
	d0bc87c90ab08       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   2 seconds ago       Running             coredns                   1                   36d201ce25db6       coredns-7db6d8ff4d-cthhq
	552faf07d2484       35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e   3 seconds ago       Running             kube-proxy                1                   15f9ee2aec82d       kube-proxy-rkc8x
	ac69e482d6c1a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago       Running             storage-provisioner       1                   26afa0c121add       storage-provisioner
	00d3f1e024b69       461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6   8 seconds ago       Running             kube-scheduler            1                   0f035f362baa6       kube-scheduler-kubernetes-upgrade-497059
	91d42e2a6395b       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   8 seconds ago       Running             etcd                      1                   0f21d77108b4a       etcd-kubernetes-upgrade-497059
	2d3d722ab3edb       65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1   8 seconds ago       Running             kube-apiserver            1                   a8e9f1a6a9c5b       kube-apiserver-kubernetes-upgrade-497059
	cdd0d20b792df       ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b   8 seconds ago       Running             kube-controller-manager   1                   3ee15b5371588       kube-controller-manager-kubernetes-upgrade-497059
	60fd8d3fdd22b       35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e   30 seconds ago      Exited              kube-proxy                0                   33ac0a4e45413       kube-proxy-rkc8x
	9031b12c4dca2       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   31 seconds ago      Exited              coredns                   0                   f8c37f29927d2       coredns-7db6d8ff4d-cq9m7
	57ad3827fbc46       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   31 seconds ago      Exited              coredns                   0                   3af7656054702       coredns-7db6d8ff4d-cthhq
	88897e3aafa04       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   31 seconds ago      Exited              storage-provisioner       0                   0cdcdc1d26b5d       storage-provisioner
	460f403a1726d       ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b   51 seconds ago      Exited              kube-controller-manager   0                   fb80eaba242c2       kube-controller-manager-kubernetes-upgrade-497059
	6f520c4c350e9       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   51 seconds ago      Exited              etcd                      0                   941128bf02a0c       etcd-kubernetes-upgrade-497059
	46c0226492e7e       461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6   51 seconds ago      Exited              kube-scheduler            0                   be01324e551c4       kube-scheduler-kubernetes-upgrade-497059
	b1e03c16f2cb3       65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1   51 seconds ago      Exited              kube-apiserver            0                   d1beef626da11       kube-apiserver-kubernetes-upgrade-497059
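	
	The container listing above is the CRI runtime's view of the node. As a rough sketch (assuming the kubernetes-upgrade-497059 profile is still up), the same table can usually be reproduced directly from the guest with crictl:
	
	  out/minikube-linux-amd64 -p kubernetes-upgrade-497059 ssh -- sudo crictl ps -a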
	
	
	==> coredns [57ad3827fbc46cc40a4ba8d073424ffe035e6c2c54d9f49b5461eb9dfd7bc887] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [626d5e0646ed22bb41464f4c2c4c9bd851852d7d4bbde32d674129c90b4eb8aa] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [9031b12c4dca2e17bac84072b619f64cae9ba5bac739160f9b815ef05e053dd1] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d0bc87c90ab088fd9160e3be5285dfbad9254359e42e41254309789a37b7f4fe] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
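	
	The per-component sections above and below are the logs of individual containers, keyed by the container IDs in the status table. A minimal sketch for re-reading one of them on the node (assuming the profile is still up; crictl accepts unique ID prefixes):
	
	  out/minikube-linux-amd64 -p kubernetes-upgrade-497059 ssh -- sudo crictl logs d0bc87c90ab08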
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-497059
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-497059
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 00:49:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-497059
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 00:50:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 00:50:24 +0000   Tue, 16 Apr 2024 00:49:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 00:50:24 +0000   Tue, 16 Apr 2024 00:49:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 00:50:24 +0000   Tue, 16 Apr 2024 00:49:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 00:50:24 +0000   Tue, 16 Apr 2024 00:49:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.223
	  Hostname:    kubernetes-upgrade-497059
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8e1c8477d88d4d709adff49cf6b95e45
	  System UUID:                8e1c8477-d88d-4d70-9adf-f49cf6b95e45
	  Boot ID:                    3dba38e3-aeae-4404-9fd8-1d58edb194d3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0-rc.2
	  Kube-Proxy Version:         v1.30.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-cq9m7                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     32s
	  kube-system                 coredns-7db6d8ff4d-cthhq                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     32s
	  kube-system                 etcd-kubernetes-upgrade-497059                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         46s
	  kube-system                 kube-apiserver-kubernetes-upgrade-497059             250m (12%)    0 (0%)      0 (0%)           0 (0%)         46s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-497059    200m (10%)    0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-proxy-rkc8x                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-scheduler-kubernetes-upgrade-497059             100m (5%)     0 (0%)      0 (0%)           0 (0%)         46s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  Starting                 30s                kube-proxy       
	  Normal  NodeAllocatableEnforced  53s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 53s                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    52s (x8 over 53s)  kubelet          Node kubernetes-upgrade-497059 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     52s (x7 over 53s)  kubelet          Node kubernetes-upgrade-497059 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  52s (x8 over 53s)  kubelet          Node kubernetes-upgrade-497059 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           33s                node-controller  Node kubernetes-upgrade-497059 event: Registered Node kubernetes-upgrade-497059 in Controller
	  Normal  Starting                 9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s (x8 over 9s)    kubelet          Node kubernetes-upgrade-497059 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x8 over 9s)    kubelet          Node kubernetes-upgrade-497059 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x7 over 9s)    kubelet          Node kubernetes-upgrade-497059 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
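	
	The node summary above corresponds to kubectl describe output for the upgraded node; assuming the kubeconfig context created for the profile, it can be regenerated with:
	
	  kubectl --context kubernetes-upgrade-497059 describe node kubernetes-upgrade-497059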
	
	
	==> dmesg <==
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.012813] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.061423] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.078002] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.178562] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.170196] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.295355] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +4.960123] systemd-fstab-generator[738]: Ignoring "noauto" option for root device
	[  +0.073178] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.266845] systemd-fstab-generator[861]: Ignoring "noauto" option for root device
	[  +7.757884] systemd-fstab-generator[1239]: Ignoring "noauto" option for root device
	[  +0.091958] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.426403] kauditd_printk_skb: 18 callbacks suppressed
	[Apr16 00:50] systemd-fstab-generator[2180]: Ignoring "noauto" option for root device
	[  +0.093973] kauditd_printk_skb: 73 callbacks suppressed
	[  +0.072239] systemd-fstab-generator[2192]: Ignoring "noauto" option for root device
	[  +0.239365] systemd-fstab-generator[2206]: Ignoring "noauto" option for root device
	[  +0.156617] systemd-fstab-generator[2218]: Ignoring "noauto" option for root device
	[  +0.323668] systemd-fstab-generator[2246]: Ignoring "noauto" option for root device
	[  +6.941929] systemd-fstab-generator[2400]: Ignoring "noauto" option for root device
	[  +0.091110] kauditd_printk_skb: 100 callbacks suppressed
	[  +2.180583] systemd-fstab-generator[2525]: Ignoring "noauto" option for root device
	[  +5.580102] kauditd_printk_skb: 74 callbacks suppressed
	[  +1.573329] systemd-fstab-generator[3432]: Ignoring "noauto" option for root device
	
	
	==> etcd [6f520c4c350e9f56c70c43314ebcba34563bafa0a8b0b030746ce453bdfc337d] <==
	{"level":"info","ts":"2024-04-16T00:49:38.293552Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fca996f6024692d4 became candidate at term 2"}
	{"level":"info","ts":"2024-04-16T00:49:38.29356Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fca996f6024692d4 received MsgVoteResp from fca996f6024692d4 at term 2"}
	{"level":"info","ts":"2024-04-16T00:49:38.293571Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fca996f6024692d4 became leader at term 2"}
	{"level":"info","ts":"2024-04-16T00:49:38.293609Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fca996f6024692d4 elected leader fca996f6024692d4 at term 2"}
	{"level":"info","ts":"2024-04-16T00:49:38.297588Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T00:49:38.299707Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"fca996f6024692d4","local-member-attributes":"{Name:kubernetes-upgrade-497059 ClientURLs:[https://192.168.50.223:2379]}","request-path":"/0/members/fca996f6024692d4/attributes","cluster-id":"5d97db437f20b177","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-16T00:49:38.302399Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"5d97db437f20b177","local-member-id":"fca996f6024692d4","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T00:49:38.302518Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T00:49:38.30258Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T00:49:38.302624Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-16T00:49:38.302983Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-16T00:49:38.315441Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-16T00:49:38.315514Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-16T00:49:38.335071Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-16T00:49:38.338652Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.223:2379"}
	{"level":"info","ts":"2024-04-16T00:50:00.803022Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-04-16T00:50:00.803172Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"kubernetes-upgrade-497059","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.223:2380"],"advertise-client-urls":["https://192.168.50.223:2379"]}
	{"level":"warn","ts":"2024-04-16T00:50:00.803372Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-16T00:50:00.803541Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-16T00:50:00.861943Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.223:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-04-16T00:50:00.862005Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.223:2379: use of closed network connection"}
	{"level":"info","ts":"2024-04-16T00:50:00.862066Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"fca996f6024692d4","current-leader-member-id":"fca996f6024692d4"}
	{"level":"info","ts":"2024-04-16T00:50:02.054422Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.50.223:2380"}
	{"level":"info","ts":"2024-04-16T00:50:02.05499Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.50.223:2380"}
	{"level":"info","ts":"2024-04-16T00:50:02.055082Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"kubernetes-upgrade-497059","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.223:2380"],"advertise-client-urls":["https://192.168.50.223:2379"]}
	
	
	==> etcd [91d42e2a6395bab22756528ab35c855d64027e69450f895f4761686034376fb1] <==
	{"level":"info","ts":"2024-04-16T00:50:21.373019Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-16T00:50:21.37303Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-04-16T00:50:21.373258Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fca996f6024692d4 switched to configuration voters=(18206248951966241492)"}
	{"level":"info","ts":"2024-04-16T00:50:21.381533Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"5d97db437f20b177","local-member-id":"fca996f6024692d4","added-peer-id":"fca996f6024692d4","added-peer-peer-urls":["https://192.168.50.223:2380"]}
	{"level":"info","ts":"2024-04-16T00:50:21.381748Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"5d97db437f20b177","local-member-id":"fca996f6024692d4","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T00:50:21.381807Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T00:50:21.430823Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-16T00:50:21.431124Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"fca996f6024692d4","initial-advertise-peer-urls":["https://192.168.50.223:2380"],"listen-peer-urls":["https://192.168.50.223:2380"],"advertise-client-urls":["https://192.168.50.223:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.223:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-16T00:50:21.431184Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-16T00:50:21.431401Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.223:2380"}
	{"level":"info","ts":"2024-04-16T00:50:21.431444Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.223:2380"}
	{"level":"info","ts":"2024-04-16T00:50:23.249132Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fca996f6024692d4 is starting a new election at term 2"}
	{"level":"info","ts":"2024-04-16T00:50:23.249474Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fca996f6024692d4 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-04-16T00:50:23.249531Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fca996f6024692d4 received MsgPreVoteResp from fca996f6024692d4 at term 2"}
	{"level":"info","ts":"2024-04-16T00:50:23.249563Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fca996f6024692d4 became candidate at term 3"}
	{"level":"info","ts":"2024-04-16T00:50:23.249587Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fca996f6024692d4 received MsgVoteResp from fca996f6024692d4 at term 3"}
	{"level":"info","ts":"2024-04-16T00:50:23.249614Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fca996f6024692d4 became leader at term 3"}
	{"level":"info","ts":"2024-04-16T00:50:23.24964Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fca996f6024692d4 elected leader fca996f6024692d4 at term 3"}
	{"level":"info","ts":"2024-04-16T00:50:23.2559Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-16T00:50:23.255878Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"fca996f6024692d4","local-member-attributes":"{Name:kubernetes-upgrade-497059 ClientURLs:[https://192.168.50.223:2379]}","request-path":"/0/members/fca996f6024692d4/attributes","cluster-id":"5d97db437f20b177","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-16T00:50:23.256371Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-16T00:50:23.256677Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-16T00:50:23.256719Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-16T00:50:23.25826Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.223:2379"}
	{"level":"info","ts":"2024-04-16T00:50:23.258616Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 00:50:29 up 1 min,  0 users,  load average: 1.00, 0.29, 0.10
	Linux kubernetes-upgrade-497059 5.10.207 #1 SMP Mon Apr 15 15:01:07 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2d3d722ab3edb87f6404c922ba91670e54cb76c12b0b73c44d618d6391ea32d3] <==
	I0416 00:50:24.654548       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0416 00:50:24.696688       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0416 00:50:24.696756       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0416 00:50:24.696764       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0416 00:50:24.698549       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0416 00:50:24.717426       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0416 00:50:24.719644       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0416 00:50:24.719726       1 policy_source.go:224] refreshing policies
	I0416 00:50:24.723832       1 shared_informer.go:320] Caches are synced for configmaps
	I0416 00:50:24.734140       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0416 00:50:24.754602       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0416 00:50:24.754902       1 aggregator.go:165] initial CRD sync complete...
	I0416 00:50:24.754957       1 autoregister_controller.go:141] Starting autoregister controller
	I0416 00:50:24.754981       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0416 00:50:24.755004       1 cache.go:39] Caches are synced for autoregister controller
	I0416 00:50:24.757605       1 cache.go:39] Caches are synced for AvailableConditionController controller
	E0416 00:50:24.760322       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0416 00:50:24.770447       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0416 00:50:25.623790       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0416 00:50:26.045852       1 controller.go:615] quota admission added evaluator for: endpoints
	I0416 00:50:26.753444       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0416 00:50:26.784009       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0416 00:50:26.842459       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0416 00:50:26.885867       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0416 00:50:26.907234       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
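	
	With its caches synced, the restarted apiserver is admitting requests again; a hedged way to confirm readiness through the same kubeconfig context the tests use:
	
	  kubectl --context kubernetes-upgrade-497059 get --raw='/readyz?verbose'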
	
	
	==> kube-apiserver [b1e03c16f2cb388bd4b3ccb1cc7631c9f62b0f8e1fd63793706b247a3f1a7e52] <==
	W0416 00:50:01.840588       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 00:50:01.840468       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 00:50:01.840194       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 00:50:01.840727       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 00:50:01.840787       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 00:50:01.840849       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 00:50:01.840900       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 00:50:01.840969       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 00:50:01.841014       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 00:50:01.841132       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 00:50:01.841140       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 00:50:01.841208       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 00:50:01.841333       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 00:50:01.841366       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 00:50:01.841338       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 00:50:01.842659       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 00:50:01.842719       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 00:50:01.842738       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 00:50:01.844138       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 00:50:01.844199       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 00:50:01.844216       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 00:50:01.844232       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 00:50:01.844249       1 logging.go:59] [core] [Channel #181 SubChannel #182] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 00:50:01.844266       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 00:50:01.845563       1 logging.go:59] [core] [Channel #10 SubChannel #11] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [460f403a1726d5daef35f87c148076d38bd2fcb0233873128787dcae4a86b99a] <==
	I0416 00:49:56.255733       1 shared_informer.go:320] Caches are synced for job
	I0416 00:49:56.255823       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0416 00:49:56.255843       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0416 00:49:56.256014       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0416 00:49:56.256067       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0416 00:49:56.256079       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0416 00:49:56.256096       1 shared_informer.go:320] Caches are synced for service account
	I0416 00:49:56.256220       1 shared_informer.go:320] Caches are synced for ephemeral
	I0416 00:49:56.256375       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0416 00:49:56.312176       1 shared_informer.go:320] Caches are synced for HPA
	I0416 00:49:56.348330       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0416 00:49:56.404822       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0416 00:49:56.430245       1 shared_informer.go:320] Caches are synced for resource quota
	I0416 00:49:56.439944       1 shared_informer.go:320] Caches are synced for endpoint
	I0416 00:49:56.448557       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0416 00:49:56.463629       1 shared_informer.go:320] Caches are synced for resource quota
	I0416 00:49:56.877420       1 shared_informer.go:320] Caches are synced for garbage collector
	I0416 00:49:56.886755       1 shared_informer.go:320] Caches are synced for garbage collector
	I0416 00:49:56.886780       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0416 00:49:57.070599       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="698.341377ms"
	I0416 00:49:57.085105       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="14.434265ms"
	I0416 00:49:57.085442       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="223.923µs"
	I0416 00:49:57.091662       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="43.49µs"
	I0416 00:49:58.022652       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="52.491µs"
	I0416 00:49:58.049264       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="52.629µs"
	
	
	==> kube-controller-manager [cdd0d20b792df5e0497739aba62e186402cd11d32c0a2988ed24b1f2ace27a3c] <==
	I0416 00:50:26.689064       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0416 00:50:26.689889       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0416 00:50:26.692684       1 controllermanager.go:759] "Started controller" controller="job-controller"
	I0416 00:50:26.693038       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0416 00:50:26.693126       1 shared_informer.go:313] Waiting for caches to sync for job
	I0416 00:50:26.706355       1 controllermanager.go:759] "Started controller" controller="disruption-controller"
	I0416 00:50:26.706972       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0416 00:50:26.707103       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0416 00:50:26.707190       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0416 00:50:26.718435       1 controllermanager.go:759] "Started controller" controller="clusterrole-aggregation-controller"
	I0416 00:50:26.718628       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0416 00:50:26.718668       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0416 00:50:26.724618       1 controllermanager.go:759] "Started controller" controller="persistentvolume-protection-controller"
	I0416 00:50:26.724916       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0416 00:50:26.725472       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0416 00:50:26.733577       1 controllermanager.go:759] "Started controller" controller="replicationcontroller-controller"
	I0416 00:50:26.733826       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0416 00:50:26.733874       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0416 00:50:26.737590       1 controllermanager.go:759] "Started controller" controller="pod-garbage-collector-controller"
	I0416 00:50:26.737845       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0416 00:50:26.737943       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0416 00:50:26.743070       1 controllermanager.go:759] "Started controller" controller="ttl-controller"
	I0416 00:50:26.743371       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0416 00:50:26.743418       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0416 00:50:26.779700       1 shared_informer.go:320] Caches are synced for tokens
	
	
	==> kube-proxy [552faf07d248424277fbc2b83093318635170698bfacf769b3ec2152508bbcdd] <==
	I0416 00:50:26.214652       1 server_linux.go:69] "Using iptables proxy"
	I0416 00:50:26.223695       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.223"]
	I0416 00:50:26.332175       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0416 00:50:26.332222       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0416 00:50:26.332244       1 server_linux.go:165] "Using iptables Proxier"
	I0416 00:50:26.335874       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0416 00:50:26.336140       1 server.go:872] "Version info" version="v1.30.0-rc.2"
	I0416 00:50:26.336158       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 00:50:26.352981       1 config.go:192] "Starting service config controller"
	I0416 00:50:26.353004       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0416 00:50:26.353670       1 config.go:101] "Starting endpoint slice config controller"
	I0416 00:50:26.353678       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0416 00:50:26.354106       1 config.go:319] "Starting node config controller"
	I0416 00:50:26.354117       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0416 00:50:26.454267       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0416 00:50:26.454546       1 shared_informer.go:320] Caches are synced for node config
	I0416 00:50:26.454573       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [60fd8d3fdd22be343ea28083f999cf7e70c47249ed2d6e6a7745824eebfb904d] <==
	I0416 00:49:58.746951       1 server_linux.go:69] "Using iptables proxy"
	I0416 00:49:58.756383       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.223"]
	I0416 00:49:58.802649       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0416 00:49:58.802686       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0416 00:49:58.802703       1 server_linux.go:165] "Using iptables Proxier"
	I0416 00:49:58.805972       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0416 00:49:58.806463       1 server.go:872] "Version info" version="v1.30.0-rc.2"
	I0416 00:49:58.806486       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 00:49:58.808226       1 config.go:192] "Starting service config controller"
	I0416 00:49:58.808344       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0416 00:49:58.808445       1 config.go:101] "Starting endpoint slice config controller"
	I0416 00:49:58.808476       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0416 00:49:58.811766       1 config.go:319] "Starting node config controller"
	I0416 00:49:58.811818       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0416 00:49:58.908651       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0416 00:49:58.908747       1 shared_informer.go:320] Caches are synced for service config
	I0416 00:49:58.912136       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [00d3f1e024b69678d201f5e5cfff2c00341de33dee9fc65b4cc05586dadef0c5] <==
	I0416 00:50:22.202884       1 serving.go:380] Generated self-signed cert in-memory
	W0416 00:50:24.650453       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0416 00:50:24.650587       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0416 00:50:24.650620       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0416 00:50:24.650700       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0416 00:50:24.678996       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0-rc.2"
	I0416 00:50:24.681385       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 00:50:24.682974       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0416 00:50:24.685474       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0416 00:50:24.685541       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0416 00:50:24.685579       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0416 00:50:24.786735       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [46c0226492e7eb8142e4645002e5b127b7a46bb8431802767af86790298ecdf2] <==
	E0416 00:49:41.967919       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0416 00:49:42.031774       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0416 00:49:42.031972       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0416 00:49:42.067934       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0416 00:49:42.068028       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0416 00:49:42.091460       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0416 00:49:42.091557       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0416 00:49:42.203639       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0416 00:49:42.203666       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0416 00:49:42.226759       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0416 00:49:42.226845       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0416 00:49:42.246051       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0416 00:49:42.246322       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0416 00:49:42.293502       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0416 00:49:42.293821       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0416 00:49:42.336643       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0416 00:49:42.336693       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0416 00:49:42.362611       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0416 00:49:42.362662       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0416 00:49:42.367755       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0416 00:49:42.367807       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0416 00:49:42.369251       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0416 00:49:42.369375       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0416 00:49:44.220045       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0416 00:50:00.817907       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Apr 16 00:50:20 kubernetes-upgrade-497059 kubelet[2532]: I0416 00:50:20.887044    2532 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-497059"
	Apr 16 00:50:20 kubernetes-upgrade-497059 kubelet[2532]: E0416 00:50:20.887848    2532 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.223:8443: connect: connection refused" node="kubernetes-upgrade-497059"
	Apr 16 00:50:20 kubernetes-upgrade-497059 kubelet[2532]: W0416 00:50:20.998264    2532 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.50.223:8443: connect: connection refused
	Apr 16 00:50:20 kubernetes-upgrade-497059 kubelet[2532]: E0416 00:50:20.998429    2532 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.50.223:8443: connect: connection refused
	Apr 16 00:50:21 kubernetes-upgrade-497059 kubelet[2532]: W0416 00:50:21.075537    2532 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.50.223:8443: connect: connection refused
	Apr 16 00:50:21 kubernetes-upgrade-497059 kubelet[2532]: E0416 00:50:21.075603    2532 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.50.223:8443: connect: connection refused
	Apr 16 00:50:21 kubernetes-upgrade-497059 kubelet[2532]: W0416 00:50:21.172164    2532 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)kubernetes-upgrade-497059&limit=500&resourceVersion=0": dial tcp 192.168.50.223:8443: connect: connection refused
	Apr 16 00:50:21 kubernetes-upgrade-497059 kubelet[2532]: E0416 00:50:21.172225    2532 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)kubernetes-upgrade-497059&limit=500&resourceVersion=0": dial tcp 192.168.50.223:8443: connect: connection refused
	Apr 16 00:50:21 kubernetes-upgrade-497059 kubelet[2532]: W0416 00:50:21.336352    2532 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.50.223:8443: connect: connection refused
	Apr 16 00:50:21 kubernetes-upgrade-497059 kubelet[2532]: E0416 00:50:21.336652    2532 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.50.223:8443: connect: connection refused
	Apr 16 00:50:21 kubernetes-upgrade-497059 kubelet[2532]: I0416 00:50:21.689499    2532 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-497059"
	Apr 16 00:50:24 kubernetes-upgrade-497059 kubelet[2532]: I0416 00:50:24.764823    2532 kubelet_node_status.go:112] "Node was previously registered" node="kubernetes-upgrade-497059"
	Apr 16 00:50:24 kubernetes-upgrade-497059 kubelet[2532]: I0416 00:50:24.765055    2532 kubelet_node_status.go:76] "Successfully registered node" node="kubernetes-upgrade-497059"
	Apr 16 00:50:24 kubernetes-upgrade-497059 kubelet[2532]: I0416 00:50:24.771711    2532 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 16 00:50:24 kubernetes-upgrade-497059 kubelet[2532]: I0416 00:50:24.773482    2532 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 16 00:50:25 kubernetes-upgrade-497059 kubelet[2532]: I0416 00:50:25.140611    2532 apiserver.go:52] "Watching apiserver"
	Apr 16 00:50:25 kubernetes-upgrade-497059 kubelet[2532]: I0416 00:50:25.144121    2532 topology_manager.go:215] "Topology Admit Handler" podUID="7c3c90cf-ec34-4097-a603-f312a5624496" podNamespace="kube-system" podName="storage-provisioner"
	Apr 16 00:50:25 kubernetes-upgrade-497059 kubelet[2532]: I0416 00:50:25.144256    2532 topology_manager.go:215] "Topology Admit Handler" podUID="8d46a68d-3a90-4dd9-8a56-1ad2704d2727" podNamespace="kube-system" podName="kube-proxy-rkc8x"
	Apr 16 00:50:25 kubernetes-upgrade-497059 kubelet[2532]: I0416 00:50:25.144362    2532 topology_manager.go:215] "Topology Admit Handler" podUID="b2a3f73c-541d-47cc-93f9-9a5830fc41b5" podNamespace="kube-system" podName="coredns-7db6d8ff4d-cq9m7"
	Apr 16 00:50:25 kubernetes-upgrade-497059 kubelet[2532]: I0416 00:50:25.144424    2532 topology_manager.go:215] "Topology Admit Handler" podUID="75b41d80-ded7-4d88-bdab-04ff2cd4f4f1" podNamespace="kube-system" podName="coredns-7db6d8ff4d-cthhq"
	Apr 16 00:50:25 kubernetes-upgrade-497059 kubelet[2532]: I0416 00:50:25.167600    2532 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 16 00:50:25 kubernetes-upgrade-497059 kubelet[2532]: I0416 00:50:25.262337    2532 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8d46a68d-3a90-4dd9-8a56-1ad2704d2727-lib-modules\") pod \"kube-proxy-rkc8x\" (UID: \"8d46a68d-3a90-4dd9-8a56-1ad2704d2727\") " pod="kube-system/kube-proxy-rkc8x"
	Apr 16 00:50:25 kubernetes-upgrade-497059 kubelet[2532]: I0416 00:50:25.262463    2532 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7c3c90cf-ec34-4097-a603-f312a5624496-tmp\") pod \"storage-provisioner\" (UID: \"7c3c90cf-ec34-4097-a603-f312a5624496\") " pod="kube-system/storage-provisioner"
	Apr 16 00:50:25 kubernetes-upgrade-497059 kubelet[2532]: I0416 00:50:25.262530    2532 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8d46a68d-3a90-4dd9-8a56-1ad2704d2727-xtables-lock\") pod \"kube-proxy-rkc8x\" (UID: \"8d46a68d-3a90-4dd9-8a56-1ad2704d2727\") " pod="kube-system/kube-proxy-rkc8x"
	Apr 16 00:50:28 kubernetes-upgrade-497059 kubelet[2532]: I0416 00:50:28.444383    2532 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	
	
	==> storage-provisioner [88897e3aafa04eb039597f9a8f933a6d0dc6582fd274160637cd9ef34a2ee125] <==
	I0416 00:49:57.470092       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	
	
	==> storage-provisioner [ac69e482d6c1abea0801a6808c5fe41126cab364f1e41ae18a06b9d12a2e4c07] <==
	I0416 00:50:25.984831       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0416 00:50:26.024219       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0416 00:50:26.024275       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0416 00:50:26.063122       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0416 00:50:26.063429       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-497059_13b96f5c-93e5-444c-bdd1-e33a72b365da!
	I0416 00:50:26.066820       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3b19fcef-b69c-42ac-b108-29e6cd407864", APIVersion:"v1", ResourceVersion:"401", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-497059_13b96f5c-93e5-444c-bdd1-e33a72b365da became leader
	I0416 00:50:26.165409       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-497059_13b96f5c-93e5-444c-bdd1-e33a72b365da!
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0416 00:50:28.617547   57796 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18647-7542/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-497059 -n kubernetes-upgrade-497059
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-497059 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-497059" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-497059
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-497059: (1.15196387s)
--- FAIL: TestKubernetesUpgrade (390.20s)
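
Note on the "** stderr **" block above: the message "bufio.Scanner: token too long" while reading lastStart.txt comes from Go's bufio.Scanner, which by default refuses to return a line longer than bufio.MaxScanTokenSize (64 KiB). The following is a minimal sketch, not minikube's actual implementation, showing how such a file can be read line-by-line once the scanner's buffer limit is raised; the file name is illustrative only.

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	// Illustrative path; the report above reads .minikube/logs/lastStart.txt.
	f, err := os.Open("lastStart.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// The default per-line limit is bufio.MaxScanTokenSize (64 KiB); a longer
	// line makes sc.Err() return "bufio.Scanner: token too long". Raising the
	// cap to 1 MiB lets such lines through.
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan failed:", err)
	}
}

With the default buffer, the scan loop stops at the first over-long line and sc.Err() reports the same error shown in the stderr output above.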

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (269.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-800769 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-800769 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m28.925352289s)

                                                
                                                
-- stdout --
	* [old-k8s-version-800769] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18647
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18647-7542/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18647-7542/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-800769" primary control-plane node in "old-k8s-version-800769" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0416 00:50:14.742037   57518 out.go:291] Setting OutFile to fd 1 ...
	I0416 00:50:14.742276   57518 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:50:14.742287   57518 out.go:304] Setting ErrFile to fd 2...
	I0416 00:50:14.742291   57518 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:50:14.742500   57518 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-7542/.minikube/bin
	I0416 00:50:14.743128   57518 out.go:298] Setting JSON to false
	I0416 00:50:14.744169   57518 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5559,"bootTime":1713223056,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0416 00:50:14.744227   57518 start.go:139] virtualization: kvm guest
	I0416 00:50:14.746250   57518 out.go:177] * [old-k8s-version-800769] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0416 00:50:14.747801   57518 out.go:177]   - MINIKUBE_LOCATION=18647
	I0416 00:50:14.747831   57518 notify.go:220] Checking for updates...
	I0416 00:50:14.749303   57518 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 00:50:14.750681   57518 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0416 00:50:14.752098   57518 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18647-7542/.minikube
	I0416 00:50:14.753531   57518 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0416 00:50:14.754639   57518 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 00:50:14.756330   57518 config.go:182] Loaded profile config "cert-expiration-359535": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 00:50:14.756421   57518 config.go:182] Loaded profile config "kubernetes-upgrade-497059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0416 00:50:14.756529   57518 config.go:182] Loaded profile config "pause-214771": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 00:50:14.756626   57518 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 00:50:14.794372   57518 out.go:177] * Using the kvm2 driver based on user configuration
	I0416 00:50:14.795566   57518 start.go:297] selected driver: kvm2
	I0416 00:50:14.795580   57518 start.go:901] validating driver "kvm2" against <nil>
	I0416 00:50:14.795592   57518 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 00:50:14.796324   57518 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 00:50:14.796398   57518 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18647-7542/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0416 00:50:14.812045   57518 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0416 00:50:14.812088   57518 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0416 00:50:14.812270   57518 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 00:50:14.812330   57518 cni.go:84] Creating CNI manager for ""
	I0416 00:50:14.812342   57518 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 00:50:14.812351   57518 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0416 00:50:14.812399   57518 start.go:340] cluster config:
	{Name:old-k8s-version-800769 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-800769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 00:50:14.812489   57518 iso.go:125] acquiring lock: {Name:mk848ef90fbc2a1876645fc8fc16af382c3bcaa9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 00:50:14.814187   57518 out.go:177] * Starting "old-k8s-version-800769" primary control-plane node in "old-k8s-version-800769" cluster
	I0416 00:50:14.815477   57518 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0416 00:50:14.815509   57518 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0416 00:50:14.815519   57518 cache.go:56] Caching tarball of preloaded images
	I0416 00:50:14.815610   57518 preload.go:173] Found /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0416 00:50:14.815621   57518 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0416 00:50:14.815702   57518 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/config.json ...
	I0416 00:50:14.815718   57518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/config.json: {Name:mk188ed0158cdcdef6a943cf87c78b08b315b06c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 00:50:14.815830   57518 start.go:360] acquireMachinesLock for old-k8s-version-800769: {Name:mk92bff49461487f8cebf2747ccf61ccb9c772a2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 00:50:14.815857   57518 start.go:364] duration metric: took 13.41µs to acquireMachinesLock for "old-k8s-version-800769"
	I0416 00:50:14.815874   57518 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-800769 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-800769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0416 00:50:14.815956   57518 start.go:125] createHost starting for "" (driver="kvm2")
	I0416 00:50:14.817627   57518 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0416 00:50:14.817754   57518 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:50:14.817785   57518 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:50:14.832633   57518 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37955
	I0416 00:50:14.833110   57518 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:50:14.833640   57518 main.go:141] libmachine: Using API Version  1
	I0416 00:50:14.833678   57518 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:50:14.834081   57518 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:50:14.834342   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetMachineName
	I0416 00:50:14.834529   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	I0416 00:50:14.834720   57518 start.go:159] libmachine.API.Create for "old-k8s-version-800769" (driver="kvm2")
	I0416 00:50:14.834748   57518 client.go:168] LocalClient.Create starting
	I0416 00:50:14.834785   57518 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem
	I0416 00:50:14.834826   57518 main.go:141] libmachine: Decoding PEM data...
	I0416 00:50:14.834847   57518 main.go:141] libmachine: Parsing certificate...
	I0416 00:50:14.834929   57518 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem
	I0416 00:50:14.834956   57518 main.go:141] libmachine: Decoding PEM data...
	I0416 00:50:14.834982   57518 main.go:141] libmachine: Parsing certificate...
	I0416 00:50:14.835007   57518 main.go:141] libmachine: Running pre-create checks...
	I0416 00:50:14.835024   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .PreCreateCheck
	I0416 00:50:14.835568   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetConfigRaw
	I0416 00:50:14.836019   57518 main.go:141] libmachine: Creating machine...
	I0416 00:50:14.836038   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .Create
	I0416 00:50:14.836223   57518 main.go:141] libmachine: (old-k8s-version-800769) Creating KVM machine...
	I0416 00:50:14.837473   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | found existing default KVM network
	I0416 00:50:14.838719   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:50:14.838562   57541 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:f6:f2:02} reservation:<nil>}
	I0416 00:50:14.839456   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:50:14.839364   57541 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:8c:05:70} reservation:<nil>}
	I0416 00:50:14.840234   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:50:14.840143   57541 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:a2:7e:7e} reservation:<nil>}
	I0416 00:50:14.842389   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:50:14.842255   57541 network.go:209] skipping subnet 192.168.72.0/24 that is reserved: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0416 00:50:14.843540   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:50:14.843452   57541 network.go:206] using free private subnet 192.168.83.0/24: &{IP:192.168.83.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.83.0/24 Gateway:192.168.83.1 ClientMin:192.168.83.2 ClientMax:192.168.83.254 Broadcast:192.168.83.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000112c30}
	I0416 00:50:14.843594   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | created network xml: 
	I0416 00:50:14.843622   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | <network>
	I0416 00:50:14.843637   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG |   <name>mk-old-k8s-version-800769</name>
	I0416 00:50:14.843660   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG |   <dns enable='no'/>
	I0416 00:50:14.843687   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG |   
	I0416 00:50:14.843711   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG |   <ip address='192.168.83.1' netmask='255.255.255.0'>
	I0416 00:50:14.843724   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG |     <dhcp>
	I0416 00:50:14.843737   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG |       <range start='192.168.83.2' end='192.168.83.253'/>
	I0416 00:50:14.843761   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG |     </dhcp>
	I0416 00:50:14.843770   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG |   </ip>
	I0416 00:50:14.843779   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG |   
	I0416 00:50:14.843794   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | </network>
	I0416 00:50:14.843823   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | 
	I0416 00:50:14.848837   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | trying to create private KVM network mk-old-k8s-version-800769 192.168.83.0/24...
	I0416 00:50:14.915063   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | private KVM network mk-old-k8s-version-800769 192.168.83.0/24 created
	I0416 00:50:14.915099   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:50:14.915037   57541 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18647-7542/.minikube
	I0416 00:50:14.915112   57518 main.go:141] libmachine: (old-k8s-version-800769) Setting up store path in /home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769 ...
	I0416 00:50:14.915129   57518 main.go:141] libmachine: (old-k8s-version-800769) Building disk image from file:///home/jenkins/minikube-integration/18647-7542/.minikube/cache/iso/amd64/minikube-v1.33.0-1713175573-18634-amd64.iso
	I0416 00:50:14.915188   57518 main.go:141] libmachine: (old-k8s-version-800769) Downloading /home/jenkins/minikube-integration/18647-7542/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18647-7542/.minikube/cache/iso/amd64/minikube-v1.33.0-1713175573-18634-amd64.iso...
	I0416 00:50:15.135863   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:50:15.135715   57541 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769/id_rsa...
	I0416 00:50:15.308682   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:50:15.308573   57541 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769/old-k8s-version-800769.rawdisk...
	I0416 00:50:15.308709   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | Writing magic tar header
	I0416 00:50:15.308722   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | Writing SSH key tar header
	I0416 00:50:15.308730   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:50:15.308688   57541 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769 ...
	I0416 00:50:15.308812   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769
	I0416 00:50:15.308832   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18647-7542/.minikube/machines
	I0416 00:50:15.308841   57518 main.go:141] libmachine: (old-k8s-version-800769) Setting executable bit set on /home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769 (perms=drwx------)
	I0416 00:50:15.308850   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18647-7542/.minikube
	I0416 00:50:15.308860   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18647-7542
	I0416 00:50:15.308868   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0416 00:50:15.308877   57518 main.go:141] libmachine: (old-k8s-version-800769) Setting executable bit set on /home/jenkins/minikube-integration/18647-7542/.minikube/machines (perms=drwxr-xr-x)
	I0416 00:50:15.308888   57518 main.go:141] libmachine: (old-k8s-version-800769) Setting executable bit set on /home/jenkins/minikube-integration/18647-7542/.minikube (perms=drwxr-xr-x)
	I0416 00:50:15.308897   57518 main.go:141] libmachine: (old-k8s-version-800769) Setting executable bit set on /home/jenkins/minikube-integration/18647-7542 (perms=drwxrwxr-x)
	I0416 00:50:15.308909   57518 main.go:141] libmachine: (old-k8s-version-800769) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0416 00:50:15.308916   57518 main.go:141] libmachine: (old-k8s-version-800769) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0416 00:50:15.308922   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | Checking permissions on dir: /home/jenkins
	I0416 00:50:15.308930   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | Checking permissions on dir: /home
	I0416 00:50:15.308936   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | Skipping /home - not owner
	I0416 00:50:15.308945   57518 main.go:141] libmachine: (old-k8s-version-800769) Creating domain...
	I0416 00:50:15.310097   57518 main.go:141] libmachine: (old-k8s-version-800769) define libvirt domain using xml: 
	I0416 00:50:15.310116   57518 main.go:141] libmachine: (old-k8s-version-800769) <domain type='kvm'>
	I0416 00:50:15.310124   57518 main.go:141] libmachine: (old-k8s-version-800769)   <name>old-k8s-version-800769</name>
	I0416 00:50:15.310129   57518 main.go:141] libmachine: (old-k8s-version-800769)   <memory unit='MiB'>2200</memory>
	I0416 00:50:15.310135   57518 main.go:141] libmachine: (old-k8s-version-800769)   <vcpu>2</vcpu>
	I0416 00:50:15.310155   57518 main.go:141] libmachine: (old-k8s-version-800769)   <features>
	I0416 00:50:15.310169   57518 main.go:141] libmachine: (old-k8s-version-800769)     <acpi/>
	I0416 00:50:15.310176   57518 main.go:141] libmachine: (old-k8s-version-800769)     <apic/>
	I0416 00:50:15.310183   57518 main.go:141] libmachine: (old-k8s-version-800769)     <pae/>
	I0416 00:50:15.310188   57518 main.go:141] libmachine: (old-k8s-version-800769)     
	I0416 00:50:15.310194   57518 main.go:141] libmachine: (old-k8s-version-800769)   </features>
	I0416 00:50:15.310202   57518 main.go:141] libmachine: (old-k8s-version-800769)   <cpu mode='host-passthrough'>
	I0416 00:50:15.310207   57518 main.go:141] libmachine: (old-k8s-version-800769)   
	I0416 00:50:15.310215   57518 main.go:141] libmachine: (old-k8s-version-800769)   </cpu>
	I0416 00:50:15.310236   57518 main.go:141] libmachine: (old-k8s-version-800769)   <os>
	I0416 00:50:15.310256   57518 main.go:141] libmachine: (old-k8s-version-800769)     <type>hvm</type>
	I0416 00:50:15.310263   57518 main.go:141] libmachine: (old-k8s-version-800769)     <boot dev='cdrom'/>
	I0416 00:50:15.310269   57518 main.go:141] libmachine: (old-k8s-version-800769)     <boot dev='hd'/>
	I0416 00:50:15.310278   57518 main.go:141] libmachine: (old-k8s-version-800769)     <bootmenu enable='no'/>
	I0416 00:50:15.310283   57518 main.go:141] libmachine: (old-k8s-version-800769)   </os>
	I0416 00:50:15.310291   57518 main.go:141] libmachine: (old-k8s-version-800769)   <devices>
	I0416 00:50:15.310297   57518 main.go:141] libmachine: (old-k8s-version-800769)     <disk type='file' device='cdrom'>
	I0416 00:50:15.310308   57518 main.go:141] libmachine: (old-k8s-version-800769)       <source file='/home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769/boot2docker.iso'/>
	I0416 00:50:15.310313   57518 main.go:141] libmachine: (old-k8s-version-800769)       <target dev='hdc' bus='scsi'/>
	I0416 00:50:15.310319   57518 main.go:141] libmachine: (old-k8s-version-800769)       <readonly/>
	I0416 00:50:15.310327   57518 main.go:141] libmachine: (old-k8s-version-800769)     </disk>
	I0416 00:50:15.310337   57518 main.go:141] libmachine: (old-k8s-version-800769)     <disk type='file' device='disk'>
	I0416 00:50:15.310351   57518 main.go:141] libmachine: (old-k8s-version-800769)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0416 00:50:15.310368   57518 main.go:141] libmachine: (old-k8s-version-800769)       <source file='/home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769/old-k8s-version-800769.rawdisk'/>
	I0416 00:50:15.310380   57518 main.go:141] libmachine: (old-k8s-version-800769)       <target dev='hda' bus='virtio'/>
	I0416 00:50:15.310389   57518 main.go:141] libmachine: (old-k8s-version-800769)     </disk>
	I0416 00:50:15.310401   57518 main.go:141] libmachine: (old-k8s-version-800769)     <interface type='network'>
	I0416 00:50:15.310415   57518 main.go:141] libmachine: (old-k8s-version-800769)       <source network='mk-old-k8s-version-800769'/>
	I0416 00:50:15.310424   57518 main.go:141] libmachine: (old-k8s-version-800769)       <model type='virtio'/>
	I0416 00:50:15.310456   57518 main.go:141] libmachine: (old-k8s-version-800769)     </interface>
	I0416 00:50:15.310476   57518 main.go:141] libmachine: (old-k8s-version-800769)     <interface type='network'>
	I0416 00:50:15.310488   57518 main.go:141] libmachine: (old-k8s-version-800769)       <source network='default'/>
	I0416 00:50:15.310500   57518 main.go:141] libmachine: (old-k8s-version-800769)       <model type='virtio'/>
	I0416 00:50:15.310511   57518 main.go:141] libmachine: (old-k8s-version-800769)     </interface>
	I0416 00:50:15.310527   57518 main.go:141] libmachine: (old-k8s-version-800769)     <serial type='pty'>
	I0416 00:50:15.310541   57518 main.go:141] libmachine: (old-k8s-version-800769)       <target port='0'/>
	I0416 00:50:15.310553   57518 main.go:141] libmachine: (old-k8s-version-800769)     </serial>
	I0416 00:50:15.310567   57518 main.go:141] libmachine: (old-k8s-version-800769)     <console type='pty'>
	I0416 00:50:15.310579   57518 main.go:141] libmachine: (old-k8s-version-800769)       <target type='serial' port='0'/>
	I0416 00:50:15.310592   57518 main.go:141] libmachine: (old-k8s-version-800769)     </console>
	I0416 00:50:15.310608   57518 main.go:141] libmachine: (old-k8s-version-800769)     <rng model='virtio'>
	I0416 00:50:15.310624   57518 main.go:141] libmachine: (old-k8s-version-800769)       <backend model='random'>/dev/random</backend>
	I0416 00:50:15.310636   57518 main.go:141] libmachine: (old-k8s-version-800769)     </rng>
	I0416 00:50:15.310649   57518 main.go:141] libmachine: (old-k8s-version-800769)     
	I0416 00:50:15.310660   57518 main.go:141] libmachine: (old-k8s-version-800769)     
	I0416 00:50:15.310670   57518 main.go:141] libmachine: (old-k8s-version-800769)   </devices>
	I0416 00:50:15.310685   57518 main.go:141] libmachine: (old-k8s-version-800769) </domain>
	I0416 00:50:15.310699   57518 main.go:141] libmachine: (old-k8s-version-800769) 
	I0416 00:50:15.314754   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a0:db:07 in network default
	I0416 00:50:15.315363   57518 main.go:141] libmachine: (old-k8s-version-800769) Ensuring networks are active...
	I0416 00:50:15.315390   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:50:15.316041   57518 main.go:141] libmachine: (old-k8s-version-800769) Ensuring network default is active
	I0416 00:50:15.316433   57518 main.go:141] libmachine: (old-k8s-version-800769) Ensuring network mk-old-k8s-version-800769 is active
	I0416 00:50:15.317088   57518 main.go:141] libmachine: (old-k8s-version-800769) Getting domain xml...
	I0416 00:50:15.317857   57518 main.go:141] libmachine: (old-k8s-version-800769) Creating domain...
	I0416 00:50:16.617402   57518 main.go:141] libmachine: (old-k8s-version-800769) Waiting to get IP...
	I0416 00:50:16.618172   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:50:16.618730   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:50:16.618788   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:50:16.618717   57541 retry.go:31] will retry after 245.38194ms: waiting for machine to come up
	I0416 00:50:16.866167   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:50:16.866699   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:50:16.866726   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:50:16.866659   57541 retry.go:31] will retry after 276.679462ms: waiting for machine to come up
	I0416 00:50:17.145220   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:50:17.145785   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:50:17.145817   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:50:17.145732   57541 retry.go:31] will retry after 462.312932ms: waiting for machine to come up
	I0416 00:50:17.609346   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:50:17.609922   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:50:17.609951   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:50:17.609880   57541 retry.go:31] will retry after 444.438479ms: waiting for machine to come up
	I0416 00:50:18.055580   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:50:18.056026   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:50:18.056054   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:50:18.056000   57541 retry.go:31] will retry after 651.351774ms: waiting for machine to come up
	I0416 00:50:18.708903   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:50:18.709388   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:50:18.709421   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:50:18.709351   57541 retry.go:31] will retry after 639.944351ms: waiting for machine to come up
	I0416 00:50:19.350987   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:50:19.351503   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:50:19.351529   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:50:19.351457   57541 retry.go:31] will retry after 715.915795ms: waiting for machine to come up
	I0416 00:50:20.069287   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:50:20.069873   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:50:20.069901   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:50:20.069854   57541 retry.go:31] will retry after 1.140237119s: waiting for machine to come up
	I0416 00:50:21.211964   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:50:21.212558   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:50:21.212586   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:50:21.212505   57541 retry.go:31] will retry after 1.276242671s: waiting for machine to come up
	I0416 00:50:22.490513   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:50:22.491047   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:50:22.491076   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:50:22.491003   57541 retry.go:31] will retry after 1.525621794s: waiting for machine to come up
	I0416 00:50:24.018790   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:50:24.019320   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:50:24.019352   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:50:24.019275   57541 retry.go:31] will retry after 2.192801459s: waiting for machine to come up
	I0416 00:50:26.214035   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:50:26.214705   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:50:26.214737   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:50:26.214663   57541 retry.go:31] will retry after 3.606824053s: waiting for machine to come up
	I0416 00:50:29.825017   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:50:29.825506   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:50:29.825539   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:50:29.825461   57541 retry.go:31] will retry after 3.080343871s: waiting for machine to come up
	I0416 00:50:32.909816   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:50:32.910470   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:50:32.910494   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:50:32.910430   57541 retry.go:31] will retry after 3.609189483s: waiting for machine to come up
	I0416 00:50:36.521773   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:50:36.522355   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has current primary IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:50:36.522392   57518 main.go:141] libmachine: (old-k8s-version-800769) Found IP for machine: 192.168.83.98
	I0416 00:50:36.522406   57518 main.go:141] libmachine: (old-k8s-version-800769) Reserving static IP address...
	I0416 00:50:36.522808   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-800769", mac: "52:54:00:a1:ad:da", ip: "192.168.83.98"} in network mk-old-k8s-version-800769
	I0416 00:50:36.600263   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | Getting to WaitForSSH function...
	I0416 00:50:36.600296   57518 main.go:141] libmachine: (old-k8s-version-800769) Reserved static IP address: 192.168.83.98
	I0416 00:50:36.600311   57518 main.go:141] libmachine: (old-k8s-version-800769) Waiting for SSH to be available...
	I0416 00:50:36.603616   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:50:36.604147   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:50:30 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a1:ad:da}
	I0416 00:50:36.604174   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:50:36.604366   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | Using SSH client type: external
	I0416 00:50:36.604392   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | Using SSH private key: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769/id_rsa (-rw-------)
	I0416 00:50:36.604420   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.98 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0416 00:50:36.604447   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | About to run SSH command:
	I0416 00:50:36.604475   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | exit 0
	I0416 00:50:36.733728   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | SSH cmd err, output: <nil>: 
	I0416 00:50:36.733979   57518 main.go:141] libmachine: (old-k8s-version-800769) KVM machine creation complete!
	I0416 00:50:36.734318   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetConfigRaw
	I0416 00:50:36.734927   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	I0416 00:50:36.735159   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	I0416 00:50:36.735323   57518 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0416 00:50:36.735337   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetState
	I0416 00:50:36.736816   57518 main.go:141] libmachine: Detecting operating system of created instance...
	I0416 00:50:36.736833   57518 main.go:141] libmachine: Waiting for SSH to be available...
	I0416 00:50:36.736841   57518 main.go:141] libmachine: Getting to WaitForSSH function...
	I0416 00:50:36.736848   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:50:36.739586   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:50:36.739924   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:50:30 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:50:36.739960   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:50:36.740072   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:50:36.740252   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:50:36.740422   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:50:36.740589   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:50:36.740761   57518 main.go:141] libmachine: Using SSH client type: native
	I0416 00:50:36.740976   57518 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.83.98 22 <nil> <nil>}
	I0416 00:50:36.740988   57518 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0416 00:50:36.857473   57518 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 00:50:36.857510   57518 main.go:141] libmachine: Detecting the provisioner...
	I0416 00:50:36.857521   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:50:36.863977   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:50:36.864338   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:50:30 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:50:36.864434   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:50:36.864508   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:50:36.864706   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:50:36.864877   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:50:36.865022   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:50:36.865210   57518 main.go:141] libmachine: Using SSH client type: native
	I0416 00:50:36.865392   57518 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.83.98 22 <nil> <nil>}
	I0416 00:50:36.865404   57518 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0416 00:50:36.986081   57518 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0416 00:50:36.986137   57518 main.go:141] libmachine: found compatible host: buildroot
	I0416 00:50:36.986144   57518 main.go:141] libmachine: Provisioning with buildroot...
	I0416 00:50:36.986151   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetMachineName
	I0416 00:50:36.986386   57518 buildroot.go:166] provisioning hostname "old-k8s-version-800769"
	I0416 00:50:36.986417   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetMachineName
	I0416 00:50:36.986613   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:50:36.989366   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:50:36.989859   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:50:30 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:50:36.989894   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:50:36.989984   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:50:36.990191   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:50:36.990383   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:50:36.990588   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:50:36.990803   57518 main.go:141] libmachine: Using SSH client type: native
	I0416 00:50:36.991032   57518 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.83.98 22 <nil> <nil>}
	I0416 00:50:36.991050   57518 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-800769 && echo "old-k8s-version-800769" | sudo tee /etc/hostname
	I0416 00:50:37.124891   57518 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-800769
	
	I0416 00:50:37.124944   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:50:37.533517   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:50:37.533935   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:50:30 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:50:37.533965   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:50:37.534246   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:50:37.534491   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:50:37.534695   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:50:37.534872   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:50:37.535049   57518 main.go:141] libmachine: Using SSH client type: native
	I0416 00:50:37.535222   57518 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.83.98 22 <nil> <nil>}
	I0416 00:50:37.535238   57518 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-800769' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-800769/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-800769' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 00:50:37.668083   57518 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 00:50:37.668119   57518 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18647-7542/.minikube CaCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18647-7542/.minikube}
	I0416 00:50:37.668144   57518 buildroot.go:174] setting up certificates
	I0416 00:50:37.668158   57518 provision.go:84] configureAuth start
	I0416 00:50:37.668169   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetMachineName
	I0416 00:50:37.668416   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetIP
	I0416 00:50:37.671146   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:50:37.671488   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:50:30 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:50:37.671513   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:50:37.671695   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:50:37.674056   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:50:37.674360   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:50:30 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:50:37.674390   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:50:37.674552   57518 provision.go:143] copyHostCerts
	I0416 00:50:37.674618   57518 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem, removing ...
	I0416 00:50:37.674639   57518 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem
	I0416 00:50:37.674714   57518 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem (1675 bytes)
	I0416 00:50:37.674818   57518 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem, removing ...
	I0416 00:50:37.674827   57518 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem
	I0416 00:50:37.674855   57518 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem (1082 bytes)
	I0416 00:50:37.674916   57518 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem, removing ...
	I0416 00:50:37.674923   57518 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem
	I0416 00:50:37.674944   57518 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem (1123 bytes)
	I0416 00:50:37.675006   57518 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-800769 san=[127.0.0.1 192.168.83.98 localhost minikube old-k8s-version-800769]
	I0416 00:50:37.761500   57518 provision.go:177] copyRemoteCerts
	I0416 00:50:37.761549   57518 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 00:50:37.761570   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:50:37.764418   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:50:37.764815   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:50:30 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:50:37.764863   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:50:37.765244   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:50:37.765441   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:50:37.765652   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:50:37.765820   57518 sshutil.go:53] new ssh client: &{IP:192.168.83.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769/id_rsa Username:docker}
	I0416 00:50:37.859684   57518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0416 00:50:37.885008   57518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0416 00:50:37.912639   57518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0416 00:50:37.939479   57518 provision.go:87] duration metric: took 271.310199ms to configureAuth
	I0416 00:50:37.939505   57518 buildroot.go:189] setting minikube options for container-runtime
	I0416 00:50:37.939694   57518 config.go:182] Loaded profile config "old-k8s-version-800769": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0416 00:50:37.939776   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:50:37.942573   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:50:37.942962   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:50:30 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:50:37.942998   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:50:37.943180   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:50:37.943378   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:50:37.943565   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:50:37.943694   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:50:37.943860   57518 main.go:141] libmachine: Using SSH client type: native
	I0416 00:50:37.944016   57518 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.83.98 22 <nil> <nil>}
	I0416 00:50:37.944032   57518 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0416 00:50:38.248545   57518 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0416 00:50:38.248570   57518 main.go:141] libmachine: Checking connection to Docker...
	I0416 00:50:38.248580   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetURL
	I0416 00:50:38.249965   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | Using libvirt version 6000000
	I0416 00:50:38.252812   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:50:38.253188   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:50:30 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:50:38.253216   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:50:38.253397   57518 main.go:141] libmachine: Docker is up and running!
	I0416 00:50:38.253412   57518 main.go:141] libmachine: Reticulating splines...
	I0416 00:50:38.253420   57518 client.go:171] duration metric: took 23.418664074s to LocalClient.Create
	I0416 00:50:38.253441   57518 start.go:167] duration metric: took 23.418722474s to libmachine.API.Create "old-k8s-version-800769"
	I0416 00:50:38.253453   57518 start.go:293] postStartSetup for "old-k8s-version-800769" (driver="kvm2")
	I0416 00:50:38.253465   57518 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 00:50:38.253485   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	I0416 00:50:38.253688   57518 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 00:50:38.253710   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:50:38.255706   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:50:38.256006   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:50:30 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:50:38.256025   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:50:38.256160   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:50:38.256336   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:50:38.256496   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:50:38.256644   57518 sshutil.go:53] new ssh client: &{IP:192.168.83.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769/id_rsa Username:docker}
	I0416 00:50:38.349581   57518 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 00:50:38.354233   57518 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 00:50:38.354253   57518 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/addons for local assets ...
	I0416 00:50:38.354324   57518 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/files for local assets ...
	I0416 00:50:38.354442   57518 filesync.go:149] local asset: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem -> 148972.pem in /etc/ssl/certs
	I0416 00:50:38.354558   57518 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 00:50:38.365958   57518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /etc/ssl/certs/148972.pem (1708 bytes)
	I0416 00:50:38.391649   57518 start.go:296] duration metric: took 138.183727ms for postStartSetup
	I0416 00:50:38.391700   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetConfigRaw
	I0416 00:50:38.392352   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetIP
	I0416 00:50:38.395003   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:50:38.395352   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:50:30 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:50:38.395390   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:50:38.395651   57518 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/config.json ...
	I0416 00:50:38.395871   57518 start.go:128] duration metric: took 23.579899593s to createHost
	I0416 00:50:38.395899   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:50:38.398267   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:50:38.398729   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:50:30 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:50:38.398750   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:50:38.398915   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:50:38.399112   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:50:38.399307   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:50:38.399449   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:50:38.399625   57518 main.go:141] libmachine: Using SSH client type: native
	I0416 00:50:38.399812   57518 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.83.98 22 <nil> <nil>}
	I0416 00:50:38.399824   57518 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0416 00:50:38.514068   57518 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713228638.475939923
	
	I0416 00:50:38.514091   57518 fix.go:216] guest clock: 1713228638.475939923
	I0416 00:50:38.514100   57518 fix.go:229] Guest: 2024-04-16 00:50:38.475939923 +0000 UTC Remote: 2024-04-16 00:50:38.395885368 +0000 UTC m=+23.704181515 (delta=80.054555ms)
	I0416 00:50:38.514122   57518 fix.go:200] guest clock delta is within tolerance: 80.054555ms
	I0416 00:50:38.514128   57518 start.go:83] releasing machines lock for "old-k8s-version-800769", held for 23.698264667s
	I0416 00:50:38.514158   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	I0416 00:50:38.514465   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetIP
	I0416 00:50:38.517151   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:50:38.517538   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:50:30 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:50:38.517561   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:50:38.517759   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	I0416 00:50:38.518184   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	I0416 00:50:38.518366   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	I0416 00:50:38.518460   57518 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 00:50:38.518496   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:50:38.518578   57518 ssh_runner.go:195] Run: cat /version.json
	I0416 00:50:38.518595   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:50:38.521398   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:50:38.521629   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:50:38.521740   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:50:30 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:50:38.521763   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:50:38.521938   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:50:38.522039   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:50:30 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:50:38.522064   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:50:38.522092   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:50:38.522211   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:50:38.522262   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:50:38.522382   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:50:38.522426   57518 sshutil.go:53] new ssh client: &{IP:192.168.83.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769/id_rsa Username:docker}
	I0416 00:50:38.522521   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:50:38.522637   57518 sshutil.go:53] new ssh client: &{IP:192.168.83.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769/id_rsa Username:docker}
	I0416 00:50:38.644730   57518 ssh_runner.go:195] Run: systemctl --version
	I0416 00:50:38.653791   57518 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0416 00:50:38.813040   57518 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 00:50:38.820539   57518 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 00:50:38.820596   57518 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 00:50:38.837858   57518 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 00:50:38.837878   57518 start.go:494] detecting cgroup driver to use...
	I0416 00:50:38.837942   57518 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 00:50:38.857063   57518 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 00:50:38.871590   57518 docker.go:217] disabling cri-docker service (if available) ...
	I0416 00:50:38.871651   57518 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 00:50:38.885523   57518 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 00:50:38.899251   57518 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 00:50:39.038525   57518 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 00:50:39.188520   57518 docker.go:233] disabling docker service ...
	I0416 00:50:39.188599   57518 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 00:50:39.204403   57518 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 00:50:39.218651   57518 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 00:50:39.360941   57518 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 00:50:39.484295   57518 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0416 00:50:39.499266   57518 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 00:50:39.518776   57518 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0416 00:50:39.518855   57518 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:50:39.530420   57518 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 00:50:39.530490   57518 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:50:39.542307   57518 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:50:39.556149   57518 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:50:39.567354   57518 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 00:50:39.580456   57518 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 00:50:39.591930   57518 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0416 00:50:39.591978   57518 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0416 00:50:39.608453   57518 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 00:50:39.619913   57518 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 00:50:39.762707   57518 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0416 00:50:39.932449   57518 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0416 00:50:39.932537   57518 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0416 00:50:39.937614   57518 start.go:562] Will wait 60s for crictl version
	I0416 00:50:39.937672   57518 ssh_runner.go:195] Run: which crictl
	I0416 00:50:39.941721   57518 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 00:50:39.979406   57518 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0416 00:50:39.979505   57518 ssh_runner.go:195] Run: crio --version
	I0416 00:50:40.016207   57518 ssh_runner.go:195] Run: crio --version
	I0416 00:50:40.049242   57518 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0416 00:50:40.050503   57518 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetIP
	I0416 00:50:40.053317   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:50:40.053665   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:50:30 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:50:40.053693   57518 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:50:40.053885   57518 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0416 00:50:40.059572   57518 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 00:50:40.077817   57518 kubeadm.go:877] updating cluster {Name:old-k8s-version-800769 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-800769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.98 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 00:50:40.077946   57518 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0416 00:50:40.077993   57518 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 00:50:40.112737   57518 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0416 00:50:40.112852   57518 ssh_runner.go:195] Run: which lz4
	I0416 00:50:40.119292   57518 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0416 00:50:40.126098   57518 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0416 00:50:40.126134   57518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0416 00:50:41.951076   57518 crio.go:462] duration metric: took 1.831812475s to copy over tarball
	I0416 00:50:41.951156   57518 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0416 00:50:44.628845   57518 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.677607548s)
	I0416 00:50:44.628891   57518 crio.go:469] duration metric: took 2.67778437s to extract the tarball
	I0416 00:50:44.628902   57518 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0416 00:50:44.670960   57518 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 00:50:44.719857   57518 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0416 00:50:44.719880   57518 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0416 00:50:44.719934   57518 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 00:50:44.719961   57518 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0416 00:50:44.719971   57518 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0416 00:50:44.719990   57518 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0416 00:50:44.720016   57518 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0416 00:50:44.719974   57518 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0416 00:50:44.720109   57518 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0416 00:50:44.720231   57518 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0416 00:50:44.721516   57518 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0416 00:50:44.721611   57518 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0416 00:50:44.721723   57518 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0416 00:50:44.721825   57518 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0416 00:50:44.721948   57518 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0416 00:50:44.722088   57518 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0416 00:50:44.722624   57518 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0416 00:50:44.722793   57518 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 00:50:44.915941   57518 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0416 00:50:44.920591   57518 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0416 00:50:44.925481   57518 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0416 00:50:44.933916   57518 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0416 00:50:44.956127   57518 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0416 00:50:44.956648   57518 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0416 00:50:44.988848   57518 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0416 00:50:44.998379   57518 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0416 00:50:44.998422   57518 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0416 00:50:44.998468   57518 ssh_runner.go:195] Run: which crictl
	I0416 00:50:45.077356   57518 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0416 00:50:45.077399   57518 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0416 00:50:45.077438   57518 ssh_runner.go:195] Run: which crictl
	I0416 00:50:45.104861   57518 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0416 00:50:45.104907   57518 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0416 00:50:45.104958   57518 ssh_runner.go:195] Run: which crictl
	I0416 00:50:45.108866   57518 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0416 00:50:45.108910   57518 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0416 00:50:45.108961   57518 ssh_runner.go:195] Run: which crictl
	I0416 00:50:45.126401   57518 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0416 00:50:45.126439   57518 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0416 00:50:45.126452   57518 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0416 00:50:45.126470   57518 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0416 00:50:45.126502   57518 ssh_runner.go:195] Run: which crictl
	I0416 00:50:45.126508   57518 ssh_runner.go:195] Run: which crictl
	I0416 00:50:45.138458   57518 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0416 00:50:45.138499   57518 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0416 00:50:45.138530   57518 ssh_runner.go:195] Run: which crictl
	I0416 00:50:45.138538   57518 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0416 00:50:45.138612   57518 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0416 00:50:45.138653   57518 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0416 00:50:45.138739   57518 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0416 00:50:45.138820   57518 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0416 00:50:45.138856   57518 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0416 00:50:45.229445   57518 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0416 00:50:45.229834   57518 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0416 00:50:45.294896   57518 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0416 00:50:45.294919   57518 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0416 00:50:45.294987   57518 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0416 00:50:45.295024   57518 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0416 00:50:45.295101   57518 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0416 00:50:45.308884   57518 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0416 00:50:45.539557   57518 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 00:50:45.689006   57518 cache_images.go:92] duration metric: took 969.109018ms to LoadCachedImages
	W0416 00:50:45.689094   57518 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0416 00:50:45.689111   57518 kubeadm.go:928] updating node { 192.168.83.98 8443 v1.20.0 crio true true} ...
	I0416 00:50:45.689260   57518 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-800769 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.98
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-800769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 00:50:45.689353   57518 ssh_runner.go:195] Run: crio config
	I0416 00:50:45.742458   57518 cni.go:84] Creating CNI manager for ""
	I0416 00:50:45.742481   57518 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 00:50:45.742489   57518 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 00:50:45.742506   57518 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.98 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-800769 NodeName:old-k8s-version-800769 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.98"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.98 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0416 00:50:45.742647   57518 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.98
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-800769"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.98
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.98"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
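	This generated kubeadm config is what gets written to /var/tmp/minikube/kubeadm.yaml a few lines further down. As a sketch only (not something the test runs), the same file can be fed back to the bundled kubeadm binary to list or pre-fetch the images the init step will need, matching the 'kubeadm config images pull' hint printed later in the preflight output:

	# inspect / pre-pull the images implied by the generated config (paths taken from the log)
	sudo /var/lib/minikube/binaries/v1.20.0/kubeadm config images list --config /var/tmp/minikube/kubeadm.yaml
	sudo /var/lib/minikube/binaries/v1.20.0/kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml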
	
	I0416 00:50:45.742717   57518 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0416 00:50:45.754416   57518 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 00:50:45.754539   57518 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0416 00:50:45.766037   57518 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0416 00:50:45.785952   57518 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 00:50:45.805087   57518 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0416 00:50:45.824517   57518 ssh_runner.go:195] Run: grep 192.168.83.98	control-plane.minikube.internal$ /etc/hosts
	I0416 00:50:45.828864   57518 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.98	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 00:50:45.843314   57518 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 00:50:45.966292   57518 ssh_runner.go:195] Run: sudo systemctl start kubelet
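	At this point the 10-kubeadm.conf drop-in and kubelet.service unit have been copied over, systemd has been reloaded, and kubelet has been started. A quick node-side verification, shown here only as a sketch and not taken from the test run, is to ask systemd for the merged unit and its current state:

	# confirm systemd picked up the kubelet drop-in and report the service state
	sudo systemctl cat kubelet
	sudo systemctl status kubelet --no-pager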
	I0416 00:50:45.989009   57518 certs.go:68] Setting up /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769 for IP: 192.168.83.98
	I0416 00:50:45.989033   57518 certs.go:194] generating shared ca certs ...
	I0416 00:50:45.989053   57518 certs.go:226] acquiring lock for ca certs: {Name:mkcfa1570e683d94647c63485e1bbb8cf0788316 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 00:50:45.989227   57518 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key
	I0416 00:50:45.989293   57518 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key
	I0416 00:50:45.989315   57518 certs.go:256] generating profile certs ...
	I0416 00:50:45.989406   57518 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/client.key
	I0416 00:50:45.989425   57518 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/client.crt with IP's: []
	I0416 00:50:46.408585   57518 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/client.crt ...
	I0416 00:50:46.408617   57518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/client.crt: {Name:mkbf676d8c3ddc7b08887a91226d9a5b8803a15b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 00:50:46.408774   57518 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/client.key ...
	I0416 00:50:46.408788   57518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/client.key: {Name:mk9591e9503c1363e321a05f9763382597a7b2a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 00:50:46.408858   57518 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/apiserver.key.efc35655
	I0416 00:50:46.408875   57518 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/apiserver.crt.efc35655 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.83.98]
	I0416 00:50:46.476105   57518 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/apiserver.crt.efc35655 ...
	I0416 00:50:46.476137   57518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/apiserver.crt.efc35655: {Name:mk51f94e119f1853fcd97c9c998d1ac92cf56c3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 00:50:46.476296   57518 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/apiserver.key.efc35655 ...
	I0416 00:50:46.476313   57518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/apiserver.key.efc35655: {Name:mk16afa9da68a71ff2bc6215134e749647d05aa6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 00:50:46.476390   57518 certs.go:381] copying /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/apiserver.crt.efc35655 -> /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/apiserver.crt
	I0416 00:50:46.476504   57518 certs.go:385] copying /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/apiserver.key.efc35655 -> /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/apiserver.key
	I0416 00:50:46.476561   57518 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/proxy-client.key
	I0416 00:50:46.476577   57518 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/proxy-client.crt with IP's: []
	I0416 00:50:46.579628   57518 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/proxy-client.crt ...
	I0416 00:50:46.579659   57518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/proxy-client.crt: {Name:mk36ff5ae542f3f52e9086043c7eb20368769717 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 00:50:46.579839   57518 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/proxy-client.key ...
	I0416 00:50:46.579858   57518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/proxy-client.key: {Name:mk35fd4ce01a10979b5964145582219381db20b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 00:50:46.580015   57518 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem (1338 bytes)
	W0416 00:50:46.580050   57518 certs.go:480] ignoring /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897_empty.pem, impossibly tiny 0 bytes
	I0416 00:50:46.580058   57518 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem (1679 bytes)
	I0416 00:50:46.580081   57518 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem (1082 bytes)
	I0416 00:50:46.580103   57518 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem (1123 bytes)
	I0416 00:50:46.580127   57518 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem (1675 bytes)
	I0416 00:50:46.580166   57518 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem (1708 bytes)
	I0416 00:50:46.580851   57518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 00:50:46.609425   57518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 00:50:46.635400   57518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 00:50:46.661824   57518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0416 00:50:46.689615   57518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0416 00:50:46.716663   57518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0416 00:50:46.744772   57518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 00:50:46.770804   57518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0416 00:50:46.802804   57518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem --> /usr/share/ca-certificates/14897.pem (1338 bytes)
	I0416 00:50:46.839736   57518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /usr/share/ca-certificates/148972.pem (1708 bytes)
	I0416 00:50:46.874757   57518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 00:50:46.910367   57518 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 00:50:46.928703   57518 ssh_runner.go:195] Run: openssl version
	I0416 00:50:46.935429   57518 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14897.pem && ln -fs /usr/share/ca-certificates/14897.pem /etc/ssl/certs/14897.pem"
	I0416 00:50:46.948464   57518 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14897.pem
	I0416 00:50:46.954456   57518 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 23:49 /usr/share/ca-certificates/14897.pem
	I0416 00:50:46.954522   57518 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14897.pem
	I0416 00:50:46.960795   57518 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14897.pem /etc/ssl/certs/51391683.0"
	I0416 00:50:46.972975   57518 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148972.pem && ln -fs /usr/share/ca-certificates/148972.pem /etc/ssl/certs/148972.pem"
	I0416 00:50:46.985038   57518 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148972.pem
	I0416 00:50:46.990576   57518 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 23:49 /usr/share/ca-certificates/148972.pem
	I0416 00:50:46.990645   57518 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148972.pem
	I0416 00:50:46.997729   57518 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148972.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 00:50:47.010405   57518 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 00:50:47.022045   57518 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 00:50:47.027031   57518 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0416 00:50:47.027089   57518 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 00:50:47.033173   57518 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
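	The three test -L / ln -fs commands above create the OpenSSL subject-hash symlinks (51391683.0, 3ec20f2e.0, b5213941.0) that the system trust store looks up. The hash names come from 'openssl x509 -hash'; a minimal sketch of performing the same step by hand for the minikubeCA certificate copied earlier:

	# derive the subject hash and create the matching /etc/ssl/certs symlink
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"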
	I0416 00:50:47.044402   57518 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 00:50:47.048979   57518 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0416 00:50:47.049041   57518 kubeadm.go:391] StartCluster: {Name:old-k8s-version-800769 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-800769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.98 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 00:50:47.049105   57518 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0416 00:50:47.049145   57518 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 00:50:47.090049   57518 cri.go:89] found id: ""
	I0416 00:50:47.090109   57518 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0416 00:50:47.100864   57518 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 00:50:47.111047   57518 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 00:50:47.121411   57518 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 00:50:47.121437   57518 kubeadm.go:156] found existing configuration files:
	
	I0416 00:50:47.121508   57518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 00:50:47.131033   57518 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 00:50:47.131097   57518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 00:50:47.141545   57518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 00:50:47.151231   57518 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 00:50:47.151293   57518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 00:50:47.161764   57518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 00:50:47.172124   57518 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 00:50:47.172190   57518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 00:50:47.182283   57518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 00:50:47.192217   57518 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 00:50:47.192282   57518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
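	The preceding grep/rm pairs are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is deleted before kubeadm init runs. Written out as a plain shell loop (a sketch, equivalent in effect to the commands logged above):

	# drop kubeconfigs that do not point at the expected control-plane endpoint
	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
	    || sudo rm -f "/etc/kubernetes/${f}.conf"
	done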
	I0416 00:50:47.202015   57518 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 00:50:47.317969   57518 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0416 00:50:47.318060   57518 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 00:50:47.475619   57518 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 00:50:47.475746   57518 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 00:50:47.475894   57518 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 00:50:47.670930   57518 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 00:50:47.673739   57518 out.go:204]   - Generating certificates and keys ...
	I0416 00:50:47.673880   57518 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 00:50:47.673996   57518 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 00:50:47.747292   57518 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0416 00:50:47.931730   57518 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0416 00:50:48.040395   57518 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0416 00:50:48.372638   57518 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0416 00:50:48.515744   57518 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0416 00:50:48.516005   57518 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-800769] and IPs [192.168.83.98 127.0.0.1 ::1]
	I0416 00:50:48.584821   57518 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0416 00:50:48.585196   57518 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-800769] and IPs [192.168.83.98 127.0.0.1 ::1]
	I0416 00:50:48.713197   57518 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0416 00:50:48.981933   57518 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0416 00:50:49.179574   57518 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0416 00:50:49.179835   57518 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 00:50:49.615009   57518 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 00:50:49.788794   57518 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 00:50:49.912864   57518 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 00:50:50.137771   57518 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 00:50:50.156931   57518 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 00:50:50.158435   57518 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 00:50:50.158497   57518 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 00:50:50.302670   57518 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 00:50:50.304630   57518 out.go:204]   - Booting up control plane ...
	I0416 00:50:50.304770   57518 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 00:50:50.316404   57518 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 00:50:50.317255   57518 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 00:50:50.318261   57518 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 00:50:50.324142   57518 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 00:51:30.310285   57518 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0416 00:51:30.310384   57518 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 00:51:30.310573   57518 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 00:51:35.310173   57518 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 00:51:35.310362   57518 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 00:51:45.309654   57518 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 00:51:45.309909   57518 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 00:52:05.309822   57518 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 00:52:05.310111   57518 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 00:52:45.311106   57518 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 00:52:45.311704   57518 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 00:52:45.311730   57518 kubeadm.go:309] 
	I0416 00:52:45.311846   57518 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0416 00:52:45.311944   57518 kubeadm.go:309] 		timed out waiting for the condition
	I0416 00:52:45.311954   57518 kubeadm.go:309] 
	I0416 00:52:45.312032   57518 kubeadm.go:309] 	This error is likely caused by:
	I0416 00:52:45.312127   57518 kubeadm.go:309] 		- The kubelet is not running
	I0416 00:52:45.312360   57518 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0416 00:52:45.312371   57518 kubeadm.go:309] 
	I0416 00:52:45.312614   57518 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0416 00:52:45.312696   57518 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0416 00:52:45.312770   57518 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0416 00:52:45.312779   57518 kubeadm.go:309] 
	I0416 00:52:45.313048   57518 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0416 00:52:45.313291   57518 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0416 00:52:45.313315   57518 kubeadm.go:309] 
	I0416 00:52:45.313512   57518 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0416 00:52:45.313662   57518 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0416 00:52:45.313856   57518 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0416 00:52:45.314033   57518 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0416 00:52:45.314051   57518 kubeadm.go:309] 
	I0416 00:52:45.314244   57518 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 00:52:45.314580   57518 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0416 00:52:45.314722   57518 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0416 00:52:45.315122   57518 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-800769] and IPs [192.168.83.98 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-800769] and IPs [192.168.83.98 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-800769] and IPs [192.168.83.98 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-800769] and IPs [192.168.83.98 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
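	Every attempt above fails the same way: the kubelet never answers the healthz probe on 127.0.0.1:10248, so kubeadm times out waiting for the static control-plane pods. The node-side checks kubeadm itself suggests boil down to the following (a sketch assembled from the hints in the log, using the CRI-O socket path shown there):

	# kubelet health and logs, then any control-plane containers CRI-O managed to start
	sudo systemctl status kubelet --no-pager
	sudo journalctl -xeu kubelet | tail -n 50
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause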
	
	I0416 00:52:45.315189   57518 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0416 00:52:46.474891   57518 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.159681728s)
	I0416 00:52:46.474951   57518 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 00:52:46.489446   57518 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 00:52:46.500558   57518 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 00:52:46.500577   57518 kubeadm.go:156] found existing configuration files:
	
	I0416 00:52:46.500625   57518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 00:52:46.510206   57518 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 00:52:46.510250   57518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 00:52:46.519942   57518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 00:52:46.529541   57518 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 00:52:46.529592   57518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 00:52:46.539459   57518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 00:52:46.548955   57518 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 00:52:46.549008   57518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 00:52:46.558758   57518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 00:52:46.568041   57518 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 00:52:46.568093   57518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 00:52:46.577904   57518 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 00:52:46.797114   57518 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 00:54:43.007831   57518 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0416 00:54:43.007920   57518 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0416 00:54:43.009467   57518 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0416 00:54:43.009534   57518 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 00:54:43.009613   57518 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 00:54:43.009724   57518 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 00:54:43.009814   57518 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 00:54:43.009891   57518 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 00:54:43.012089   57518 out.go:204]   - Generating certificates and keys ...
	I0416 00:54:43.012149   57518 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 00:54:43.012204   57518 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 00:54:43.012291   57518 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0416 00:54:43.012382   57518 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0416 00:54:43.012540   57518 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0416 00:54:43.012619   57518 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0416 00:54:43.012705   57518 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0416 00:54:43.012804   57518 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0416 00:54:43.012879   57518 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0416 00:54:43.012970   57518 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0416 00:54:43.013009   57518 kubeadm.go:309] [certs] Using the existing "sa" key
	I0416 00:54:43.013090   57518 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 00:54:43.013152   57518 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 00:54:43.013246   57518 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 00:54:43.013342   57518 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 00:54:43.013428   57518 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 00:54:43.013532   57518 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 00:54:43.013605   57518 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 00:54:43.013652   57518 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 00:54:43.013738   57518 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 00:54:43.015440   57518 out.go:204]   - Booting up control plane ...
	I0416 00:54:43.015545   57518 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 00:54:43.015613   57518 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 00:54:43.015699   57518 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 00:54:43.015802   57518 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 00:54:43.016005   57518 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 00:54:43.016055   57518 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0416 00:54:43.016116   57518 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 00:54:43.016277   57518 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 00:54:43.016336   57518 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 00:54:43.016511   57518 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 00:54:43.016570   57518 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 00:54:43.016727   57518 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 00:54:43.016788   57518 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 00:54:43.016988   57518 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 00:54:43.017053   57518 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 00:54:43.017247   57518 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 00:54:43.017258   57518 kubeadm.go:309] 
	I0416 00:54:43.017295   57518 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0416 00:54:43.017341   57518 kubeadm.go:309] 		timed out waiting for the condition
	I0416 00:54:43.017348   57518 kubeadm.go:309] 
	I0416 00:54:43.017376   57518 kubeadm.go:309] 	This error is likely caused by:
	I0416 00:54:43.017405   57518 kubeadm.go:309] 		- The kubelet is not running
	I0416 00:54:43.017495   57518 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0416 00:54:43.017504   57518 kubeadm.go:309] 
	I0416 00:54:43.017622   57518 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0416 00:54:43.017674   57518 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0416 00:54:43.017716   57518 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0416 00:54:43.017725   57518 kubeadm.go:309] 
	I0416 00:54:43.017873   57518 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0416 00:54:43.017991   57518 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0416 00:54:43.018001   57518 kubeadm.go:309] 
	I0416 00:54:43.018098   57518 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0416 00:54:43.018192   57518 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0416 00:54:43.018294   57518 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0416 00:54:43.018390   57518 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0416 00:54:43.018413   57518 kubeadm.go:309] 
	I0416 00:54:43.018456   57518 kubeadm.go:393] duration metric: took 3m55.969423615s to StartCluster
	I0416 00:54:43.018494   57518 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 00:54:43.018559   57518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 00:54:43.061584   57518 cri.go:89] found id: ""
	I0416 00:54:43.061613   57518 logs.go:276] 0 containers: []
	W0416 00:54:43.061624   57518 logs.go:278] No container was found matching "kube-apiserver"
	I0416 00:54:43.061631   57518 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 00:54:43.061693   57518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 00:54:43.097755   57518 cri.go:89] found id: ""
	I0416 00:54:43.097776   57518 logs.go:276] 0 containers: []
	W0416 00:54:43.097783   57518 logs.go:278] No container was found matching "etcd"
	I0416 00:54:43.097789   57518 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 00:54:43.097842   57518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 00:54:43.133910   57518 cri.go:89] found id: ""
	I0416 00:54:43.133931   57518 logs.go:276] 0 containers: []
	W0416 00:54:43.133938   57518 logs.go:278] No container was found matching "coredns"
	I0416 00:54:43.133947   57518 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 00:54:43.134005   57518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 00:54:43.168844   57518 cri.go:89] found id: ""
	I0416 00:54:43.168868   57518 logs.go:276] 0 containers: []
	W0416 00:54:43.168875   57518 logs.go:278] No container was found matching "kube-scheduler"
	I0416 00:54:43.168881   57518 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 00:54:43.168924   57518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 00:54:43.202906   57518 cri.go:89] found id: ""
	I0416 00:54:43.202940   57518 logs.go:276] 0 containers: []
	W0416 00:54:43.202948   57518 logs.go:278] No container was found matching "kube-proxy"
	I0416 00:54:43.202954   57518 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 00:54:43.203003   57518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 00:54:43.237722   57518 cri.go:89] found id: ""
	I0416 00:54:43.237746   57518 logs.go:276] 0 containers: []
	W0416 00:54:43.237754   57518 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 00:54:43.237759   57518 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 00:54:43.237814   57518 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 00:54:43.272696   57518 cri.go:89] found id: ""
	I0416 00:54:43.272725   57518 logs.go:276] 0 containers: []
	W0416 00:54:43.272735   57518 logs.go:278] No container was found matching "kindnet"
	I0416 00:54:43.272746   57518 logs.go:123] Gathering logs for container status ...
	I0416 00:54:43.272759   57518 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 00:54:43.310592   57518 logs.go:123] Gathering logs for kubelet ...
	I0416 00:54:43.310620   57518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 00:54:43.360454   57518 logs.go:123] Gathering logs for dmesg ...
	I0416 00:54:43.360488   57518 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 00:54:43.374688   57518 logs.go:123] Gathering logs for describe nodes ...
	I0416 00:54:43.374716   57518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 00:54:43.490351   57518 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 00:54:43.490373   57518 logs.go:123] Gathering logs for CRI-O ...
	I0416 00:54:43.490384   57518 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0416 00:54:43.599016   57518 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0416 00:54:43.599081   57518 out.go:239] * 
	W0416 00:54:43.599148   57518 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0416 00:54:43.599180   57518 out.go:239] * 
	W0416 00:54:43.599980   57518 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0416 00:54:43.603216   57518 out.go:177] 
	W0416 00:54:43.604651   57518 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0416 00:54:43.604705   57518 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0416 00:54:43.604734   57518 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0416 00:54:43.606438   57518 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-800769 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-800769 -n old-k8s-version-800769
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-800769 -n old-k8s-version-800769: exit status 6 (241.427011ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0416 00:54:43.893672   61319 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-800769" does not appear in /home/jenkins/minikube-integration/18647-7542/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-800769" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (269.22s)
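The start failure above bottoms out in minikube's own hint: the kubelet on the v1.20.0 node never answered on :10248, and the suggested workaround is --extra-config=kubelet.cgroup-driver=systemd. A minimal manual triage sketch, assuming SSH access to the profile VM and that the cgroup-driver mismatch hinted at in the log is in fact the cause (these commands are illustrative and not part of the test harness):

	# Check whether the kubelet is running at all, and why it exited if not.
	out/minikube-linux-amd64 ssh -p old-k8s-version-800769 "sudo systemctl status kubelet --no-pager"
	out/minikube-linux-amd64 ssh -p old-k8s-version-800769 "sudo journalctl -xeu kubelet --no-pager | tail -n 100"

	# Compare the cgroup driver configured for CRI-O with the one in the kubelet config
	# that kubeadm wrote to /var/lib/kubelet/config.yaml; a mismatch is the classic cause
	# of the K8S_KUBELET_NOT_RUNNING exit seen above.
	out/minikube-linux-amd64 ssh -p old-k8s-version-800769 "grep -ri cgroup_manager /etc/crio/"
	out/minikube-linux-amd64 ssh -p old-k8s-version-800769 "grep -i cgroupDriver /var/lib/kubelet/config.yaml"

	# If they disagree, retry the start with the driver pinned, exactly as the log suggests.
	out/minikube-linux-amd64 start -p old-k8s-version-800769 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd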

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.41s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-653942 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-653942 --alsologtostderr -v=3: exit status 82 (2m0.945822039s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-653942"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0416 00:52:10.619765   59209 out.go:291] Setting OutFile to fd 1 ...
	I0416 00:52:10.619884   59209 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:52:10.619893   59209 out.go:304] Setting ErrFile to fd 2...
	I0416 00:52:10.619897   59209 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:52:10.620111   59209 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-7542/.minikube/bin
	I0416 00:52:10.620347   59209 out.go:298] Setting JSON to false
	I0416 00:52:10.620442   59209 mustload.go:65] Loading cluster: default-k8s-diff-port-653942
	I0416 00:52:10.620832   59209 config.go:182] Loaded profile config "default-k8s-diff-port-653942": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 00:52:10.620983   59209 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/config.json ...
	I0416 00:52:10.621181   59209 mustload.go:65] Loading cluster: default-k8s-diff-port-653942
	I0416 00:52:10.621299   59209 config.go:182] Loaded profile config "default-k8s-diff-port-653942": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 00:52:10.621337   59209 stop.go:39] StopHost: default-k8s-diff-port-653942
	I0416 00:52:10.621733   59209 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:52:10.621787   59209 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:52:10.637440   59209 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35955
	I0416 00:52:10.637937   59209 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:52:10.638570   59209 main.go:141] libmachine: Using API Version  1
	I0416 00:52:10.638595   59209 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:52:10.638924   59209 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:52:10.641431   59209 out.go:177] * Stopping node "default-k8s-diff-port-653942"  ...
	I0416 00:52:10.642894   59209 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0416 00:52:10.642932   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 00:52:10.643146   59209 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0416 00:52:10.643206   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 00:52:10.645937   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 00:52:10.646347   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 01:51:17 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 00:52:10.646374   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 00:52:10.646474   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 00:52:10.646618   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 00:52:10.646835   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 00:52:10.646990   59209 sshutil.go:53] new ssh client: &{IP:192.168.50.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/default-k8s-diff-port-653942/id_rsa Username:docker}
	I0416 00:52:10.749723   59209 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0416 00:52:10.807128   59209 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0416 00:52:10.867197   59209 main.go:141] libmachine: Stopping "default-k8s-diff-port-653942"...
	I0416 00:52:10.867234   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetState
	I0416 00:52:10.868798   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .Stop
	I0416 00:52:10.872318   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 0/120
	I0416 00:52:11.874239   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 1/120
	I0416 00:52:12.875458   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 2/120
	I0416 00:52:13.877379   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 3/120
	I0416 00:52:14.878620   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 4/120
	I0416 00:52:15.880830   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 5/120
	I0416 00:52:16.882437   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 6/120
	I0416 00:52:17.883962   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 7/120
	I0416 00:52:18.885731   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 8/120
	I0416 00:52:20.315139   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 9/120
	I0416 00:52:21.317587   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 10/120
	I0416 00:52:22.319318   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 11/120
	I0416 00:52:23.320726   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 12/120
	I0416 00:52:24.322003   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 13/120
	I0416 00:52:25.324183   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 14/120
	I0416 00:52:26.326499   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 15/120
	I0416 00:52:27.327786   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 16/120
	I0416 00:52:28.329358   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 17/120
	I0416 00:52:29.332136   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 18/120
	I0416 00:52:30.333811   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 19/120
	I0416 00:52:31.336344   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 20/120
	I0416 00:52:32.338909   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 21/120
	I0416 00:52:33.340481   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 22/120
	I0416 00:52:34.341756   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 23/120
	I0416 00:52:35.343064   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 24/120
	I0416 00:52:36.344374   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 25/120
	I0416 00:52:37.345721   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 26/120
	I0416 00:52:38.347021   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 27/120
	I0416 00:52:39.348334   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 28/120
	I0416 00:52:40.349716   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 29/120
	I0416 00:52:41.351663   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 30/120
	I0416 00:52:42.353128   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 31/120
	I0416 00:52:43.354643   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 32/120
	I0416 00:52:44.356108   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 33/120
	I0416 00:52:45.358224   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 34/120
	I0416 00:52:46.360132   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 35/120
	I0416 00:52:47.361359   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 36/120
	I0416 00:52:48.363724   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 37/120
	I0416 00:52:49.364821   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 38/120
	I0416 00:52:50.366363   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 39/120
	I0416 00:52:51.368347   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 40/120
	I0416 00:52:52.369789   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 41/120
	I0416 00:52:53.371769   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 42/120
	I0416 00:52:54.374125   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 43/120
	I0416 00:52:55.375800   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 44/120
	I0416 00:52:56.377712   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 45/120
	I0416 00:52:57.378979   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 46/120
	I0416 00:52:58.380390   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 47/120
	I0416 00:52:59.381953   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 48/120
	I0416 00:53:00.383735   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 49/120
	I0416 00:53:01.385984   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 50/120
	I0416 00:53:02.387404   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 51/120
	I0416 00:53:03.388628   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 52/120
	I0416 00:53:04.390037   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 53/120
	I0416 00:53:05.391838   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 54/120
	I0416 00:53:06.393202   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 55/120
	I0416 00:53:07.394514   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 56/120
	I0416 00:53:08.396335   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 57/120
	I0416 00:53:09.398020   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 58/120
	I0416 00:53:10.399436   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 59/120
	I0416 00:53:11.401439   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 60/120
	I0416 00:53:12.403787   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 61/120
	I0416 00:53:13.405938   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 62/120
	I0416 00:53:14.407318   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 63/120
	I0416 00:53:15.408944   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 64/120
	I0416 00:53:16.410712   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 65/120
	I0416 00:53:17.412141   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 66/120
	I0416 00:53:18.413653   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 67/120
	I0416 00:53:19.414882   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 68/120
	I0416 00:53:20.416262   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 69/120
	I0416 00:53:21.418845   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 70/120
	I0416 00:53:22.420560   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 71/120
	I0416 00:53:23.421915   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 72/120
	I0416 00:53:24.423241   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 73/120
	I0416 00:53:25.424471   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 74/120
	I0416 00:53:26.426262   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 75/120
	I0416 00:53:27.427582   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 76/120
	I0416 00:53:28.428834   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 77/120
	I0416 00:53:29.430236   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 78/120
	I0416 00:53:30.431664   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 79/120
	I0416 00:53:31.433519   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 80/120
	I0416 00:53:32.435598   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 81/120
	I0416 00:53:33.436931   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 82/120
	I0416 00:53:34.438698   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 83/120
	I0416 00:53:35.440194   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 84/120
	I0416 00:53:36.441933   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 85/120
	I0416 00:53:37.443279   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 86/120
	I0416 00:53:38.444841   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 87/120
	I0416 00:53:39.446351   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 88/120
	I0416 00:53:40.447845   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 89/120
	I0416 00:53:41.450561   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 90/120
	I0416 00:53:42.452773   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 91/120
	I0416 00:53:43.454134   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 92/120
	I0416 00:53:44.456293   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 93/120
	I0416 00:53:45.457786   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 94/120
	I0416 00:53:46.459163   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 95/120
	I0416 00:53:47.460700   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 96/120
	I0416 00:53:48.461977   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 97/120
	I0416 00:53:49.463441   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 98/120
	I0416 00:53:50.464629   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 99/120
	I0416 00:53:51.466892   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 100/120
	I0416 00:53:52.468151   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 101/120
	I0416 00:53:53.469575   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 102/120
	I0416 00:53:54.471669   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 103/120
	I0416 00:53:55.473337   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 104/120
	I0416 00:53:56.475976   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 105/120
	I0416 00:53:57.477326   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 106/120
	I0416 00:53:58.479794   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 107/120
	I0416 00:53:59.481364   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 108/120
	I0416 00:54:00.483576   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 109/120
	I0416 00:54:01.485673   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 110/120
	I0416 00:54:02.487011   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 111/120
	I0416 00:54:03.488284   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 112/120
	I0416 00:54:04.489724   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 113/120
	I0416 00:54:05.491127   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 114/120
	I0416 00:54:06.493313   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 115/120
	I0416 00:54:07.494584   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 116/120
	I0416 00:54:08.495948   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 117/120
	I0416 00:54:09.497055   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 118/120
	I0416 00:54:10.498595   59209 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for machine to stop 119/120
	I0416 00:54:11.499713   59209 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0416 00:54:11.499784   59209 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0416 00:54:11.501797   59209 out.go:177] 
	W0416 00:54:11.503157   59209 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0416 00:54:11.503182   59209 out.go:239] * 
	W0416 00:54:11.506945   59209 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0416 00:54:11.508283   59209 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-653942 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-653942 -n default-k8s-diff-port-653942
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-653942 -n default-k8s-diff-port-653942: exit status 3 (18.466233248s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0416 00:54:29.977521   60390 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.216:22: connect: no route to host
	E0416 00:54:29.977546   60390 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.216:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-653942" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.41s)
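This stop failure is a GUEST_STOP_TIMEOUT: the kvm2 driver polled the domain for the full 120 iterations ("Waiting for machine to stop 0/120" through "119/120") while the guest stayed in the Running state, and by the time the post-mortem status ran, SSH to 192.168.50.216 had no route to host. When this happens outside the test harness, the domain can be inspected and forced down directly through libvirt; a hedged sketch, assuming the qemu:///system connection the test passes via --kvm-qemu-uri and that the libvirt domain carries the profile name (as the DBG lines above show):

	# See what state libvirt thinks the guest is in.
	sudo virsh -c qemu:///system dominfo default-k8s-diff-port-653942

	# Retry a graceful ACPI shutdown, then force the domain off if it keeps ignoring it.
	sudo virsh -c qemu:///system shutdown default-k8s-diff-port-653942
	sudo virsh -c qemu:///system destroy default-k8s-diff-port-653942

	# minikube also wrote a stop trace worth attaching to any issue filed for this failure.
	cat /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log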

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.04s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-572602 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-572602 --alsologtostderr -v=3: exit status 82 (2m0.561418345s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-572602"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0416 00:52:16.119296   59293 out.go:291] Setting OutFile to fd 1 ...
	I0416 00:52:16.119430   59293 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:52:16.119454   59293 out.go:304] Setting ErrFile to fd 2...
	I0416 00:52:16.119468   59293 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:52:16.119653   59293 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-7542/.minikube/bin
	I0416 00:52:16.119882   59293 out.go:298] Setting JSON to false
	I0416 00:52:16.119955   59293 mustload.go:65] Loading cluster: no-preload-572602
	I0416 00:52:16.120281   59293 config.go:182] Loaded profile config "no-preload-572602": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0416 00:52:16.120344   59293 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602/config.json ...
	I0416 00:52:16.120505   59293 mustload.go:65] Loading cluster: no-preload-572602
	I0416 00:52:16.120602   59293 config.go:182] Loaded profile config "no-preload-572602": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0416 00:52:16.120631   59293 stop.go:39] StopHost: no-preload-572602
	I0416 00:52:16.121013   59293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:52:16.121062   59293 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:52:16.135311   59293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40273
	I0416 00:52:16.135854   59293 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:52:16.136367   59293 main.go:141] libmachine: Using API Version  1
	I0416 00:52:16.136386   59293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:52:16.136740   59293 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:52:16.139276   59293 out.go:177] * Stopping node "no-preload-572602"  ...
	I0416 00:52:16.140477   59293 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0416 00:52:16.140507   59293 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 00:52:16.140793   59293 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0416 00:52:16.140824   59293 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:52:16.143837   59293 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:52:16.144296   59293 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:50:54 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:52:16.144323   59293 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:52:16.144573   59293 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 00:52:16.144739   59293 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:52:16.144912   59293 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 00:52:16.145090   59293 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/no-preload-572602/id_rsa Username:docker}
	I0416 00:52:16.261518   59293 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0416 00:52:16.332602   59293 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0416 00:52:16.398275   59293 main.go:141] libmachine: Stopping "no-preload-572602"...
	I0416 00:52:16.398310   59293 main.go:141] libmachine: (no-preload-572602) Calling .GetState
	I0416 00:52:16.400244   59293 main.go:141] libmachine: (no-preload-572602) Calling .Stop
	I0416 00:52:16.404188   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 0/120
	I0416 00:52:17.405544   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 1/120
	I0416 00:52:18.406811   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 2/120
	I0416 00:52:19.408593   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 3/120
	I0416 00:52:20.409936   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 4/120
	I0416 00:52:21.412296   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 5/120
	I0416 00:52:22.413913   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 6/120
	I0416 00:52:23.415636   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 7/120
	I0416 00:52:24.416838   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 8/120
	I0416 00:52:25.418309   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 9/120
	I0416 00:52:26.420652   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 10/120
	I0416 00:52:27.421957   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 11/120
	I0416 00:52:28.423125   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 12/120
	I0416 00:52:29.424335   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 13/120
	I0416 00:52:30.425735   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 14/120
	I0416 00:52:31.427780   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 15/120
	I0416 00:52:32.429335   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 16/120
	I0416 00:52:33.430517   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 17/120
	I0416 00:52:34.431845   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 18/120
	I0416 00:52:35.433055   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 19/120
	I0416 00:52:36.435286   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 20/120
	I0416 00:52:37.436551   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 21/120
	I0416 00:52:38.438144   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 22/120
	I0416 00:52:39.439425   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 23/120
	I0416 00:52:40.440811   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 24/120
	I0416 00:52:41.442938   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 25/120
	I0416 00:52:42.444402   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 26/120
	I0416 00:52:43.446530   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 27/120
	I0416 00:52:44.448087   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 28/120
	I0416 00:52:45.449504   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 29/120
	I0416 00:52:46.451746   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 30/120
	I0416 00:52:47.453081   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 31/120
	I0416 00:52:48.454507   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 32/120
	I0416 00:52:49.456278   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 33/120
	I0416 00:52:50.457889   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 34/120
	I0416 00:52:51.459825   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 35/120
	I0416 00:52:52.461754   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 36/120
	I0416 00:52:53.463830   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 37/120
	I0416 00:52:54.466048   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 38/120
	I0416 00:52:55.467386   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 39/120
	I0416 00:52:56.469419   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 40/120
	I0416 00:52:57.470806   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 41/120
	I0416 00:52:58.472148   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 42/120
	I0416 00:52:59.473599   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 43/120
	I0416 00:53:00.474868   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 44/120
	I0416 00:53:01.476652   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 45/120
	I0416 00:53:02.478583   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 46/120
	I0416 00:53:03.479826   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 47/120
	I0416 00:53:04.482228   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 48/120
	I0416 00:53:05.483603   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 49/120
	I0416 00:53:06.485802   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 50/120
	I0416 00:53:07.487630   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 51/120
	I0416 00:53:08.489531   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 52/120
	I0416 00:53:09.490912   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 53/120
	I0416 00:53:10.492225   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 54/120
	I0416 00:53:11.494123   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 55/120
	I0416 00:53:12.495881   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 56/120
	I0416 00:53:13.497290   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 57/120
	I0416 00:53:14.498776   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 58/120
	I0416 00:53:15.500063   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 59/120
	I0416 00:53:16.502074   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 60/120
	I0416 00:53:17.503403   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 61/120
	I0416 00:53:18.505060   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 62/120
	I0416 00:53:19.506757   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 63/120
	I0416 00:53:20.508215   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 64/120
	I0416 00:53:21.510545   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 65/120
	I0416 00:53:22.511800   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 66/120
	I0416 00:53:23.513147   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 67/120
	I0416 00:53:24.514549   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 68/120
	I0416 00:53:25.515861   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 69/120
	I0416 00:53:26.517235   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 70/120
	I0416 00:53:27.518529   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 71/120
	I0416 00:53:28.519899   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 72/120
	I0416 00:53:29.521240   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 73/120
	I0416 00:53:30.522715   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 74/120
	I0416 00:53:31.524452   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 75/120
	I0416 00:53:32.525985   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 76/120
	I0416 00:53:33.527968   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 77/120
	I0416 00:53:34.529543   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 78/120
	I0416 00:53:35.531704   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 79/120
	I0416 00:53:36.533969   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 80/120
	I0416 00:53:37.535526   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 81/120
	I0416 00:53:38.537929   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 82/120
	I0416 00:53:39.539394   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 83/120
	I0416 00:53:40.540782   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 84/120
	I0416 00:53:41.543132   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 85/120
	I0416 00:53:42.544539   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 86/120
	I0416 00:53:43.546172   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 87/120
	I0416 00:53:44.547841   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 88/120
	I0416 00:53:45.549215   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 89/120
	I0416 00:53:46.551479   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 90/120
	I0416 00:53:47.552949   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 91/120
	I0416 00:53:48.554196   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 92/120
	I0416 00:53:49.555593   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 93/120
	I0416 00:53:50.556927   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 94/120
	I0416 00:53:51.558800   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 95/120
	I0416 00:53:52.560002   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 96/120
	I0416 00:53:53.561772   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 97/120
	I0416 00:53:54.564074   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 98/120
	I0416 00:53:55.565392   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 99/120
	I0416 00:53:56.567395   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 100/120
	I0416 00:53:57.568603   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 101/120
	I0416 00:53:58.570099   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 102/120
	I0416 00:53:59.571342   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 103/120
	I0416 00:54:00.572654   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 104/120
	I0416 00:54:01.574575   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 105/120
	I0416 00:54:02.575809   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 106/120
	I0416 00:54:03.577197   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 107/120
	I0416 00:54:04.578850   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 108/120
	I0416 00:54:05.580179   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 109/120
	I0416 00:54:06.582517   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 110/120
	I0416 00:54:07.583957   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 111/120
	I0416 00:54:08.585224   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 112/120
	I0416 00:54:09.586509   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 113/120
	I0416 00:54:10.588831   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 114/120
	I0416 00:54:11.590661   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 115/120
	I0416 00:54:12.592140   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 116/120
	I0416 00:54:13.594236   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 117/120
	I0416 00:54:14.595627   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 118/120
	I0416 00:54:15.597011   59293 main.go:141] libmachine: (no-preload-572602) Waiting for machine to stop 119/120
	I0416 00:54:16.612924   59293 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0416 00:54:16.612975   59293 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0416 00:54:16.614847   59293 out.go:177] 
	W0416 00:54:16.616235   59293 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0416 00:54:16.616255   59293 out.go:239] * 
	* 
	W0416 00:54:16.619190   59293 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0416 00:54:16.621830   59293 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-572602 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-572602 -n no-preload-572602
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-572602 -n no-preload-572602: exit status 3 (18.473945745s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0416 00:54:35.097543   60749 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.121:22: connect: no route to host
	E0416 00:54:35.097567   60749 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.121:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-572602" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.04s)
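Both Stop failures above share one shape: after backing up /etc/cni and /etc/kubernetes, the driver asks the VM to stop and then polls its state once per second for 120 attempts ("Waiting for machine to stop 0/120" through "119/120"); if the machine still reports "Running", the stop returns an error that minikube surfaces as GUEST_STOP_TIMEOUT and exit status 82. A minimal sketch of that bounded polling loop, using stand-in `stopVM`/`vmState` functions rather than the real libmachine plugin calls:

package main

import (
	"errors"
	"fmt"
	"time"
)

// vmState stands in for libmachine's .GetState call; here it never changes,
// which is exactly the behaviour captured in the log.
func vmState() string { return "Running" }

// stopVM stands in for the driver's .Stop call (a no-op in this sketch).
func stopVM() {}

// waitForStop mirrors the "Waiting for machine to stop N/120" loop: poll once
// per second and give up after maxAttempts.
func waitForStop(maxAttempts int) error {
	stopVM()
	for i := 0; i < maxAttempts; i++ {
		if vmState() != "Running" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// The real budget is 120 attempts (about 2 minutes); 3 keeps the demo short.
	if err := waitForStop(3); err != nil {
		fmt.Println("stop err:", err) // surfaced as GUEST_STOP_TIMEOUT / exit status 82 in minikube
	}
}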

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-653942 -n default-k8s-diff-port-653942
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-653942 -n default-k8s-diff-port-653942: exit status 3 (3.167917923s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0416 00:54:33.145506   61073 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.216:22: connect: no route to host
	E0416 00:54:33.145525   61073 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.216:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-653942 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-653942 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153290003s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.216:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-653942 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-653942 -n default-k8s-diff-port-653942
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-653942 -n default-k8s-diff-port-653942: exit status 3 (3.062961897s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0416 00:54:42.361548   61209 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.216:22: connect: no route to host
	E0416 00:54:42.361576   61209 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.216:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-653942" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.39s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-572602 -n no-preload-572602
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-572602 -n no-preload-572602: exit status 3 (3.167361643s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0416 00:54:38.265493   61145 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.121:22: connect: no route to host
	E0416 00:54:38.265510   61145 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.121:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-572602 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-572602 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153306908s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.121:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-572602 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-572602 -n no-preload-572602
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-572602 -n no-preload-572602: exit status 3 (3.066230384s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0416 00:54:47.485413   61418 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.121:22: connect: no route to host
	E0416 00:54:47.485431   61418 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.121:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-572602" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.39s)
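The two EnableAddonAfterStop failures are direct consequences of the stop timeouts: the test reads the host field from `minikube status --format={{.Host}}` and expects "Stopped", but with the VM still running and its SSH port unreachable the command exits 3 and prints "Error", after which `addons enable dashboard` exits 11. A rough sketch of that post-stop assertion, using the same command line as the log; this is not the actual test helper, and the binary path and profile name are simply copied from above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostStatus runs "minikube status --format={{.Host}} -p <profile> -n <profile>"
// and returns the trimmed stdout ("Stopped", "Running", "Error", ...).
func hostStatus(minikube, profile string) (string, error) {
	out, err := exec.Command(minikube, "status",
		"--format={{.Host}}", "-p", profile, "-n", profile).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	got, err := hostStatus("out/minikube-linux-amd64", "no-preload-572602")
	if err != nil {
		fmt.Println("status exited non-zero (exit status 3 in the log):", err)
	}
	if got != "Stopped" {
		// This is the condition that fails at start_stop_delete_test.go:241.
		fmt.Printf("expected post-stop host status %q but got %q\n", "Stopped", got)
	}
}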

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.52s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-800769 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-800769 create -f testdata/busybox.yaml: exit status 1 (43.169348ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-800769" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-800769 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-800769 -n old-k8s-version-800769
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-800769 -n old-k8s-version-800769: exit status 6 (239.312526ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0416 00:54:44.177976   61359 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-800769" does not appear in /home/jenkins/minikube-integration/18647-7542/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-800769" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-800769 -n old-k8s-version-800769
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-800769 -n old-k8s-version-800769: exit status 6 (237.460616ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0416 00:54:44.413558   61388 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-800769" does not appear in /home/jenkins/minikube-integration/18647-7542/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-800769" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.52s)
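The DeployApp failure (and the EnableAddonWhileActive failure that follows) happens one layer earlier than the others: the "old-k8s-version-800769" context is missing from the kubeconfig, so every `kubectl --context old-k8s-version-800769 ...` call exits 1 before it ever reaches the cluster, and the status output suggests `minikube update-context`. A small sketch that checks for the context the same way before deploying; it assumes kubectl is on PATH and reuses the profile name from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const profile = "old-k8s-version-800769"

	// "kubectl config get-contexts -o name" prints one context name per line.
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		fmt.Println("could not list contexts:", err)
		return
	}
	for _, name := range strings.Fields(string(out)) {
		if name == profile {
			fmt.Println("context exists; `kubectl create -f testdata/busybox.yaml` would reach the cluster")
			return
		}
	}
	// This is the state captured in the log: the context is gone, so the test's
	// create call fails with `context "old-k8s-version-800769" does not exist`.
	fmt.Printf("context %q not found; run `minikube update-context` (as the status warning suggests) or restart the profile\n", profile)
}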

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (94.87s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-800769 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-800769 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m34.594249175s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-800769 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-800769 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-800769 describe deploy/metrics-server -n kube-system: exit status 1 (42.550942ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-800769" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-800769 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-800769 -n old-k8s-version-800769
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-800769 -n old-k8s-version-800769: exit status 6 (232.158634ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0416 00:56:19.284761   62022 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-800769" does not appear in /home/jenkins/minikube-integration/18647-7542/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-800769" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (94.87s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-617092 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-617092 --alsologtostderr -v=3: exit status 82 (2m0.505428158s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-617092"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0416 00:56:11.357348   61960 out.go:291] Setting OutFile to fd 1 ...
	I0416 00:56:11.357448   61960 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:56:11.357456   61960 out.go:304] Setting ErrFile to fd 2...
	I0416 00:56:11.357461   61960 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:56:11.357652   61960 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-7542/.minikube/bin
	I0416 00:56:11.357875   61960 out.go:298] Setting JSON to false
	I0416 00:56:11.357952   61960 mustload.go:65] Loading cluster: embed-certs-617092
	I0416 00:56:11.358242   61960 config.go:182] Loaded profile config "embed-certs-617092": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 00:56:11.358310   61960 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092/config.json ...
	I0416 00:56:11.358464   61960 mustload.go:65] Loading cluster: embed-certs-617092
	I0416 00:56:11.358563   61960 config.go:182] Loaded profile config "embed-certs-617092": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 00:56:11.358598   61960 stop.go:39] StopHost: embed-certs-617092
	I0416 00:56:11.358955   61960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:56:11.358993   61960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:56:11.374174   61960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32877
	I0416 00:56:11.374582   61960 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:56:11.375090   61960 main.go:141] libmachine: Using API Version  1
	I0416 00:56:11.375110   61960 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:56:11.375466   61960 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:56:11.377889   61960 out.go:177] * Stopping node "embed-certs-617092"  ...
	I0416 00:56:11.379318   61960 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0416 00:56:11.379355   61960 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 00:56:11.379577   61960 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0416 00:56:11.379604   61960 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 00:56:11.382159   61960 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 00:56:11.382565   61960 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 00:56:11.382601   61960 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 00:56:11.382767   61960 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 00:56:11.382948   61960 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 00:56:11.383123   61960 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 00:56:11.383285   61960 sshutil.go:53] new ssh client: &{IP:192.168.61.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/embed-certs-617092/id_rsa Username:docker}
	I0416 00:56:11.479086   61960 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0416 00:56:11.549491   61960 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0416 00:56:11.612699   61960 main.go:141] libmachine: Stopping "embed-certs-617092"...
	I0416 00:56:11.612739   61960 main.go:141] libmachine: (embed-certs-617092) Calling .GetState
	I0416 00:56:11.614341   61960 main.go:141] libmachine: (embed-certs-617092) Calling .Stop
	I0416 00:56:11.617714   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 0/120
	I0416 00:56:12.619107   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 1/120
	I0416 00:56:13.620540   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 2/120
	I0416 00:56:14.621777   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 3/120
	I0416 00:56:15.623395   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 4/120
	I0416 00:56:16.625414   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 5/120
	I0416 00:56:17.627577   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 6/120
	I0416 00:56:18.629126   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 7/120
	I0416 00:56:19.630550   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 8/120
	I0416 00:56:20.631773   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 9/120
	I0416 00:56:21.634012   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 10/120
	I0416 00:56:22.635373   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 11/120
	I0416 00:56:23.636849   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 12/120
	I0416 00:56:24.638350   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 13/120
	I0416 00:56:25.639887   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 14/120
	I0416 00:56:26.641985   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 15/120
	I0416 00:56:27.643331   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 16/120
	I0416 00:56:28.644834   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 17/120
	I0416 00:56:29.646128   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 18/120
	I0416 00:56:30.647498   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 19/120
	I0416 00:56:31.649055   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 20/120
	I0416 00:56:32.650521   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 21/120
	I0416 00:56:33.651999   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 22/120
	I0416 00:56:34.653575   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 23/120
	I0416 00:56:35.654882   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 24/120
	I0416 00:56:36.656865   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 25/120
	I0416 00:56:37.658120   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 26/120
	I0416 00:56:38.659388   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 27/120
	I0416 00:56:39.660731   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 28/120
	I0416 00:56:40.662158   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 29/120
	I0416 00:56:41.664181   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 30/120
	I0416 00:56:42.665802   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 31/120
	I0416 00:56:43.667200   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 32/120
	I0416 00:56:44.668599   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 33/120
	I0416 00:56:45.669883   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 34/120
	I0416 00:56:46.671907   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 35/120
	I0416 00:56:47.673152   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 36/120
	I0416 00:56:48.674478   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 37/120
	I0416 00:56:49.675837   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 38/120
	I0416 00:56:50.677231   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 39/120
	I0416 00:56:51.679324   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 40/120
	I0416 00:56:52.680586   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 41/120
	I0416 00:56:53.682044   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 42/120
	I0416 00:56:54.683409   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 43/120
	I0416 00:56:55.684861   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 44/120
	I0416 00:56:56.686877   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 45/120
	I0416 00:56:57.688350   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 46/120
	I0416 00:56:58.689654   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 47/120
	I0416 00:56:59.691077   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 48/120
	I0416 00:57:00.692526   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 49/120
	I0416 00:57:01.694813   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 50/120
	I0416 00:57:02.696226   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 51/120
	I0416 00:57:03.697678   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 52/120
	I0416 00:57:04.699228   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 53/120
	I0416 00:57:05.700576   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 54/120
	I0416 00:57:06.702813   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 55/120
	I0416 00:57:07.704262   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 56/120
	I0416 00:57:08.705983   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 57/120
	I0416 00:57:09.707376   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 58/120
	I0416 00:57:10.708890   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 59/120
	I0416 00:57:11.711185   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 60/120
	I0416 00:57:12.712768   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 61/120
	I0416 00:57:13.714154   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 62/120
	I0416 00:57:14.715641   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 63/120
	I0416 00:57:15.716933   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 64/120
	I0416 00:57:16.719133   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 65/120
	I0416 00:57:17.720494   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 66/120
	I0416 00:57:18.721968   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 67/120
	I0416 00:57:19.723364   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 68/120
	I0416 00:57:20.724800   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 69/120
	I0416 00:57:21.726089   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 70/120
	I0416 00:57:22.727359   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 71/120
	I0416 00:57:23.728824   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 72/120
	I0416 00:57:24.730087   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 73/120
	I0416 00:57:25.731573   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 74/120
	I0416 00:57:26.733828   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 75/120
	I0416 00:57:27.735111   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 76/120
	I0416 00:57:28.736538   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 77/120
	I0416 00:57:29.737961   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 78/120
	I0416 00:57:30.739347   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 79/120
	I0416 00:57:31.741482   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 80/120
	I0416 00:57:32.742790   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 81/120
	I0416 00:57:33.744114   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 82/120
	I0416 00:57:34.745713   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 83/120
	I0416 00:57:35.746993   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 84/120
	I0416 00:57:36.748988   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 85/120
	I0416 00:57:37.750405   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 86/120
	I0416 00:57:38.751746   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 87/120
	I0416 00:57:39.752999   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 88/120
	I0416 00:57:40.754560   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 89/120
	I0416 00:57:41.756754   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 90/120
	I0416 00:57:42.758287   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 91/120
	I0416 00:57:43.759688   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 92/120
	I0416 00:57:44.760975   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 93/120
	I0416 00:57:45.762353   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 94/120
	I0416 00:57:46.764153   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 95/120
	I0416 00:57:47.765364   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 96/120
	I0416 00:57:48.766795   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 97/120
	I0416 00:57:49.768126   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 98/120
	I0416 00:57:50.769527   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 99/120
	I0416 00:57:51.770770   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 100/120
	I0416 00:57:52.772247   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 101/120
	I0416 00:57:53.773747   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 102/120
	I0416 00:57:54.775431   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 103/120
	I0416 00:57:55.776950   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 104/120
	I0416 00:57:56.778985   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 105/120
	I0416 00:57:57.780293   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 106/120
	I0416 00:57:58.781792   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 107/120
	I0416 00:57:59.783113   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 108/120
	I0416 00:58:00.784306   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 109/120
	I0416 00:58:01.786629   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 110/120
	I0416 00:58:02.788074   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 111/120
	I0416 00:58:03.789548   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 112/120
	I0416 00:58:04.791511   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 113/120
	I0416 00:58:05.792927   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 114/120
	I0416 00:58:06.794860   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 115/120
	I0416 00:58:07.796190   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 116/120
	I0416 00:58:08.797823   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 117/120
	I0416 00:58:09.799073   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 118/120
	I0416 00:58:10.800444   61960 main.go:141] libmachine: (embed-certs-617092) Waiting for machine to stop 119/120
	I0416 00:58:11.801222   61960 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0416 00:58:11.801283   61960 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0416 00:58:11.803690   61960 out.go:177] 
	W0416 00:58:11.805082   61960 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0416 00:58:11.805100   61960 out.go:239] * 
	* 
	W0416 00:58:11.807702   61960 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0416 00:58:11.808923   61960 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop. args "out/minikube-linux-amd64 stop -p embed-certs-617092 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-617092 -n embed-certs-617092
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-617092 -n embed-certs-617092: exit status 3 (18.550604177s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0416 00:58:30.361524   62540 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.225:22: connect: no route to host
	E0416 00:58:30.361545   62540 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.225:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-617092" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.06s)
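
The failure above is a bounded stop-then-poll timeout: the driver issues a stop request, logs "Waiting for machine to stop N/120" roughly once per second, and after 120 attempts gives up with GUEST_STOP_TIMEOUT because the guest is still "Running". The following is a minimal Go sketch of that pattern only; it is not minikube's or libmachine's actual code, and the vm interface, stopWithTimeout, and stubVM names are hypothetical illustration names.

	package main

	import (
		"errors"
		"fmt"
		"log"
		"time"
	)

	// vm is a hypothetical, minimal view of a libmachine-style driver:
	// just enough surface to show the stop-then-poll pattern.
	type vm interface {
		Stop() error            // ask the hypervisor to shut the guest down
		State() (string, error) // e.g. "Running", "Stopped"
	}

	// stopWithTimeout requests a stop, then polls once per second up to
	// maxAttempts times before giving up, mirroring the log above.
	func stopWithTimeout(m vm, maxAttempts int) error {
		if err := m.Stop(); err != nil {
			return fmt.Errorf("stop request failed: %w", err)
		}
		for i := 0; i < maxAttempts; i++ {
			state, err := m.State()
			if err != nil {
				return fmt.Errorf("query state: %w", err)
			}
			if state != "Running" {
				return nil // guest is no longer running
			}
			log.Printf("Waiting for machine to stop %d/%d", i, maxAttempts)
			time.Sleep(time.Second)
		}
		// Corresponds to the GUEST_STOP_TIMEOUT outcome: the stop request was
		// accepted but the guest never left the "Running" state in time.
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	// stubVM never leaves "Running", so it reproduces the timeout path.
	type stubVM struct{}

	func (stubVM) Stop() error            { return nil }
	func (stubVM) State() (string, error) { return "Running", nil }

	func main() {
		// 3 attempts instead of 120 to keep the demo short.
		if err := stopWithTimeout(stubVM{}, 3); err != nil {
			log.Printf("stop host returned error: %v", err)
		}
	}

With a fixed per-attempt sleep the total stop budget is predictable (here about 120 s of polling plus the follow-up status probe), which is why this test fails at roughly the same duration on each run.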

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (714.33s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-800769 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0416 00:57:20.169050   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-800769 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (11m50.981330938s)

                                                
                                                
-- stdout --
	* [old-k8s-version-800769] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18647
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18647-7542/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18647-7542/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-800769" primary control-plane node in "old-k8s-version-800769" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-800769" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0416 00:56:22.099384   62139 out.go:291] Setting OutFile to fd 1 ...
	I0416 00:56:22.099490   62139 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:56:22.099515   62139 out.go:304] Setting ErrFile to fd 2...
	I0416 00:56:22.099521   62139 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:56:22.099752   62139 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-7542/.minikube/bin
	I0416 00:56:22.100329   62139 out.go:298] Setting JSON to false
	I0416 00:56:22.101300   62139 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5926,"bootTime":1713223056,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0416 00:56:22.101367   62139 start.go:139] virtualization: kvm guest
	I0416 00:56:22.103491   62139 out.go:177] * [old-k8s-version-800769] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0416 00:56:22.104829   62139 notify.go:220] Checking for updates...
	I0416 00:56:22.104838   62139 out.go:177]   - MINIKUBE_LOCATION=18647
	I0416 00:56:22.106152   62139 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 00:56:22.107511   62139 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0416 00:56:22.108794   62139 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18647-7542/.minikube
	I0416 00:56:22.110128   62139 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0416 00:56:22.111407   62139 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 00:56:22.113110   62139 config.go:182] Loaded profile config "old-k8s-version-800769": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0416 00:56:22.113548   62139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:56:22.113600   62139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:56:22.127916   62139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45009
	I0416 00:56:22.128333   62139 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:56:22.128798   62139 main.go:141] libmachine: Using API Version  1
	I0416 00:56:22.128817   62139 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:56:22.129184   62139 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:56:22.129321   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	I0416 00:56:22.131203   62139 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0416 00:56:22.132602   62139 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 00:56:22.132864   62139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:56:22.132893   62139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:56:22.147177   62139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46701
	I0416 00:56:22.147552   62139 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:56:22.147940   62139 main.go:141] libmachine: Using API Version  1
	I0416 00:56:22.147960   62139 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:56:22.148309   62139 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:56:22.148507   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	I0416 00:56:22.182012   62139 out.go:177] * Using the kvm2 driver based on existing profile
	I0416 00:56:22.183486   62139 start.go:297] selected driver: kvm2
	I0416 00:56:22.183512   62139 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-800769 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-800769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.98 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 00:56:22.183622   62139 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 00:56:22.184289   62139 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 00:56:22.184376   62139 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18647-7542/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0416 00:56:22.198659   62139 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0416 00:56:22.199112   62139 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 00:56:22.199193   62139 cni.go:84] Creating CNI manager for ""
	I0416 00:56:22.199212   62139 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 00:56:22.199271   62139 start.go:340] cluster config:
	{Name:old-k8s-version-800769 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-800769 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.98 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 00:56:22.199425   62139 iso.go:125] acquiring lock: {Name:mk848ef90fbc2a1876645fc8fc16af382c3bcaa9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 00:56:22.201418   62139 out.go:177] * Starting "old-k8s-version-800769" primary control-plane node in "old-k8s-version-800769" cluster
	I0416 00:56:22.202781   62139 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0416 00:56:22.202823   62139 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0416 00:56:22.202833   62139 cache.go:56] Caching tarball of preloaded images
	I0416 00:56:22.202897   62139 preload.go:173] Found /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0416 00:56:22.202910   62139 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0416 00:56:22.203099   62139 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/config.json ...
	I0416 00:56:22.203344   62139 start.go:360] acquireMachinesLock for old-k8s-version-800769: {Name:mk92bff49461487f8cebf2747ccf61ccb9c772a2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 00:59:40.669954   62139 start.go:364] duration metric: took 3m18.466569456s to acquireMachinesLock for "old-k8s-version-800769"
	I0416 00:59:40.670015   62139 start.go:96] Skipping create...Using existing machine configuration
	I0416 00:59:40.670038   62139 fix.go:54] fixHost starting: 
	I0416 00:59:40.670411   62139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:59:40.670448   62139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:59:40.686269   62139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39043
	I0416 00:59:40.686633   62139 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:59:40.687125   62139 main.go:141] libmachine: Using API Version  1
	I0416 00:59:40.687162   62139 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:59:40.687481   62139 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:59:40.687672   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	I0416 00:59:40.687838   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetState
	I0416 00:59:40.689108   62139 fix.go:112] recreateIfNeeded on old-k8s-version-800769: state=Stopped err=<nil>
	I0416 00:59:40.689132   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	W0416 00:59:40.689286   62139 fix.go:138] unexpected machine state, will restart: <nil>
	I0416 00:59:40.691869   62139 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-800769" ...
	I0416 00:59:40.693292   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .Start
	I0416 00:59:40.693450   62139 main.go:141] libmachine: (old-k8s-version-800769) Ensuring networks are active...
	I0416 00:59:40.694152   62139 main.go:141] libmachine: (old-k8s-version-800769) Ensuring network default is active
	I0416 00:59:40.694457   62139 main.go:141] libmachine: (old-k8s-version-800769) Ensuring network mk-old-k8s-version-800769 is active
	I0416 00:59:40.694883   62139 main.go:141] libmachine: (old-k8s-version-800769) Getting domain xml...
	I0416 00:59:40.695720   62139 main.go:141] libmachine: (old-k8s-version-800769) Creating domain...
	I0416 00:59:41.913001   62139 main.go:141] libmachine: (old-k8s-version-800769) Waiting to get IP...
	I0416 00:59:41.913874   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:41.914260   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:41.914318   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:41.914237   63071 retry.go:31] will retry after 261.032707ms: waiting for machine to come up
	I0416 00:59:42.176660   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:42.177053   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:42.177084   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:42.177031   63071 retry.go:31] will retry after 268.951362ms: waiting for machine to come up
	I0416 00:59:42.447724   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:42.448132   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:42.448159   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:42.448097   63071 retry.go:31] will retry after 293.793417ms: waiting for machine to come up
	I0416 00:59:42.743375   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:42.743845   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:42.743874   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:42.743801   63071 retry.go:31] will retry after 494.163372ms: waiting for machine to come up
	I0416 00:59:43.239314   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:43.239761   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:43.239790   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:43.239708   63071 retry.go:31] will retry after 698.851999ms: waiting for machine to come up
	I0416 00:59:43.939998   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:43.940577   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:43.940607   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:43.940535   63071 retry.go:31] will retry after 764.693004ms: waiting for machine to come up
	I0416 00:59:44.706335   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:44.706673   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:44.706724   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:44.706626   63071 retry.go:31] will retry after 874.082115ms: waiting for machine to come up
	I0416 00:59:45.581896   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:45.582331   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:45.582361   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:45.582280   63071 retry.go:31] will retry after 966.259345ms: waiting for machine to come up
	I0416 00:59:46.550671   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:46.551111   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:46.551140   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:46.551062   63071 retry.go:31] will retry after 1.191034468s: waiting for machine to come up
	I0416 00:59:47.744187   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:47.744683   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:47.744712   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:47.744637   63071 retry.go:31] will retry after 2.263605663s: waiting for machine to come up
	I0416 00:59:50.011136   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:50.011605   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:50.011632   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:50.011566   63071 retry.go:31] will retry after 2.648982849s: waiting for machine to come up
	I0416 00:59:52.662443   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:52.662852   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:52.662883   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:52.662815   63071 retry.go:31] will retry after 2.183508059s: waiting for machine to come up
	I0416 00:59:54.849225   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:54.849701   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:54.849734   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:54.849649   63071 retry.go:31] will retry after 3.201585234s: waiting for machine to come up
	I0416 00:59:58.052613   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.053048   62139 main.go:141] libmachine: (old-k8s-version-800769) Found IP for machine: 192.168.83.98
	I0416 00:59:58.053073   62139 main.go:141] libmachine: (old-k8s-version-800769) Reserving static IP address...
	I0416 00:59:58.053089   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has current primary IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.053517   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "old-k8s-version-800769", mac: "52:54:00:a1:ad:da", ip: "192.168.83.98"} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.053549   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | skip adding static IP to network mk-old-k8s-version-800769 - found existing host DHCP lease matching {name: "old-k8s-version-800769", mac: "52:54:00:a1:ad:da", ip: "192.168.83.98"}
	I0416 00:59:58.053569   62139 main.go:141] libmachine: (old-k8s-version-800769) Reserved static IP address: 192.168.83.98
	I0416 00:59:58.053587   62139 main.go:141] libmachine: (old-k8s-version-800769) Waiting for SSH to be available...
	I0416 00:59:58.053602   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | Getting to WaitForSSH function...
	I0416 00:59:58.055598   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.055907   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.055941   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.056038   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | Using SSH client type: external
	I0416 00:59:58.056088   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | Using SSH private key: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769/id_rsa (-rw-------)
	I0416 00:59:58.056132   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.98 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0416 00:59:58.056149   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | About to run SSH command:
	I0416 00:59:58.056162   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | exit 0
	I0416 00:59:58.185675   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | SSH cmd err, output: <nil>: 
	I0416 00:59:58.186055   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetConfigRaw
	I0416 00:59:58.186802   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetIP
	I0416 00:59:58.189772   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.190219   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.190257   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.190448   62139 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/config.json ...
	I0416 00:59:58.190666   62139 machine.go:94] provisionDockerMachine start ...
	I0416 00:59:58.190685   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	I0416 00:59:58.190902   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:58.193570   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.193954   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.193982   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.194139   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:58.194337   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.194492   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.194636   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:58.194786   62139 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:58.195041   62139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.83.98 22 <nil> <nil>}
	I0416 00:59:58.195056   62139 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 00:59:58.321824   62139 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 00:59:58.321857   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetMachineName
	I0416 00:59:58.322146   62139 buildroot.go:166] provisioning hostname "old-k8s-version-800769"
	I0416 00:59:58.322175   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetMachineName
	I0416 00:59:58.322381   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:58.324941   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.325288   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.325316   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.325423   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:58.325613   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.325776   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.325936   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:58.326109   62139 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:58.326322   62139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.83.98 22 <nil> <nil>}
	I0416 00:59:58.326339   62139 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-800769 && echo "old-k8s-version-800769" | sudo tee /etc/hostname
	I0416 00:59:58.455194   62139 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-800769
	
	I0416 00:59:58.455236   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:58.458021   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.458423   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.458458   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.458662   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:58.458848   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.459013   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.459162   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:58.459353   62139 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:58.459507   62139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.83.98 22 <nil> <nil>}
	I0416 00:59:58.459524   62139 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-800769' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-800769/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-800769' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 00:59:58.587318   62139 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 00:59:58.587351   62139 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18647-7542/.minikube CaCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18647-7542/.minikube}
	I0416 00:59:58.587391   62139 buildroot.go:174] setting up certificates
	I0416 00:59:58.587400   62139 provision.go:84] configureAuth start
	I0416 00:59:58.587413   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetMachineName
	I0416 00:59:58.587686   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetIP
	I0416 00:59:58.590415   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.590739   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.590778   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.590880   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:58.593282   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.593728   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.593759   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.593931   62139 provision.go:143] copyHostCerts
	I0416 00:59:58.593988   62139 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem, removing ...
	I0416 00:59:58.594007   62139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem
	I0416 00:59:58.594079   62139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem (1082 bytes)
	I0416 00:59:58.594213   62139 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem, removing ...
	I0416 00:59:58.594222   62139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem
	I0416 00:59:58.594263   62139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem (1123 bytes)
	I0416 00:59:58.594372   62139 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem, removing ...
	I0416 00:59:58.594383   62139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem
	I0416 00:59:58.594408   62139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem (1675 bytes)
	I0416 00:59:58.594470   62139 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-800769 san=[127.0.0.1 192.168.83.98 localhost minikube old-k8s-version-800769]
	I0416 00:59:58.692127   62139 provision.go:177] copyRemoteCerts
	I0416 00:59:58.692197   62139 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 00:59:58.692232   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:58.694858   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.695231   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.695278   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.695507   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:58.695693   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.695852   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:58.695994   62139 sshutil.go:53] new ssh client: &{IP:192.168.83.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769/id_rsa Username:docker}
	I0416 00:59:58.783458   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0416 00:59:58.811124   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0416 00:59:58.836495   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0416 00:59:58.862044   62139 provision.go:87] duration metric: took 274.632117ms to configureAuth
	I0416 00:59:58.862068   62139 buildroot.go:189] setting minikube options for container-runtime
	I0416 00:59:58.862278   62139 config.go:182] Loaded profile config "old-k8s-version-800769": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0416 00:59:58.862361   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:58.865352   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.865795   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.865829   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.866043   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:58.866228   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.866435   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.866625   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:58.866805   62139 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:58.867008   62139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.83.98 22 <nil> <nil>}
	I0416 00:59:58.867026   62139 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0416 00:59:59.143874   62139 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0416 00:59:59.143900   62139 machine.go:97] duration metric: took 953.218972ms to provisionDockerMachine
	I0416 00:59:59.143914   62139 start.go:293] postStartSetup for "old-k8s-version-800769" (driver="kvm2")
	I0416 00:59:59.143927   62139 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 00:59:59.143972   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	I0416 00:59:59.144277   62139 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 00:59:59.144302   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:59.147021   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.147355   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:59.147385   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.147649   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:59.147871   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:59.148036   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:59.148174   62139 sshutil.go:53] new ssh client: &{IP:192.168.83.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769/id_rsa Username:docker}
	I0416 00:59:59.236981   62139 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 00:59:59.241388   62139 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 00:59:59.241411   62139 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/addons for local assets ...
	I0416 00:59:59.241469   62139 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/files for local assets ...
	I0416 00:59:59.241534   62139 filesync.go:149] local asset: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem -> 148972.pem in /etc/ssl/certs
	I0416 00:59:59.241619   62139 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 00:59:59.251688   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /etc/ssl/certs/148972.pem (1708 bytes)
	I0416 00:59:59.275189   62139 start.go:296] duration metric: took 131.262042ms for postStartSetup
	I0416 00:59:59.275227   62139 fix.go:56] duration metric: took 18.605201288s for fixHost
	I0416 00:59:59.275250   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:59.277804   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.278153   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:59.278186   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.278341   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:59.278581   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:59.278741   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:59.278908   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:59.279068   62139 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:59.279233   62139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.83.98 22 <nil> <nil>}
	I0416 00:59:59.279243   62139 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0416 00:59:59.394108   62139 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713229199.360202150
	
	I0416 00:59:59.394141   62139 fix.go:216] guest clock: 1713229199.360202150
	I0416 00:59:59.394152   62139 fix.go:229] Guest: 2024-04-16 00:59:59.36020215 +0000 UTC Remote: 2024-04-16 00:59:59.27523174 +0000 UTC m=+217.222314955 (delta=84.97041ms)
	I0416 00:59:59.394211   62139 fix.go:200] guest clock delta is within tolerance: 84.97041ms
	I0416 00:59:59.394218   62139 start.go:83] releasing machines lock for "old-k8s-version-800769", held for 18.724230851s
	I0416 00:59:59.394252   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	I0416 00:59:59.394554   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetIP
	I0416 00:59:59.397241   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.397670   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:59.397703   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.397897   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	I0416 00:59:59.398460   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	I0416 00:59:59.398650   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	I0416 00:59:59.398740   62139 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 00:59:59.398782   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:59.399049   62139 ssh_runner.go:195] Run: cat /version.json
	I0416 00:59:59.399072   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:59.401397   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.401656   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.401802   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:59.401825   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.401964   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:59.402017   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.402089   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:59.402173   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:59.402248   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:59.402320   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:59.402376   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:59.402430   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:59.402577   62139 sshutil.go:53] new ssh client: &{IP:192.168.83.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769/id_rsa Username:docker}
	I0416 00:59:59.402638   62139 sshutil.go:53] new ssh client: &{IP:192.168.83.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769/id_rsa Username:docker}
	I0416 00:59:59.481834   62139 ssh_runner.go:195] Run: systemctl --version
	I0416 00:59:59.516372   62139 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0416 00:59:59.666722   62139 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 00:59:59.674165   62139 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 00:59:59.674226   62139 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 00:59:59.695545   62139 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 00:59:59.695573   62139 start.go:494] detecting cgroup driver to use...
	I0416 00:59:59.695646   62139 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 00:59:59.715091   62139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 00:59:59.732004   62139 docker.go:217] disabling cri-docker service (if available) ...
	I0416 00:59:59.732060   62139 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 00:59:59.753217   62139 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 00:59:59.768513   62139 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 00:59:59.898693   62139 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 01:00:00.066535   62139 docker.go:233] disabling docker service ...
	I0416 01:00:00.066607   62139 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 01:00:00.084512   62139 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 01:00:00.097714   62139 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 01:00:00.232901   62139 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 01:00:00.378379   62139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0416 01:00:00.395191   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 01:00:00.416631   62139 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0416 01:00:00.416695   62139 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:00.428712   62139 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 01:00:00.428774   62139 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:00.442687   62139 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:00.454631   62139 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:00.466151   62139 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 01:00:00.478459   62139 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 01:00:00.489957   62139 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0416 01:00:00.490035   62139 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0416 01:00:00.506087   62139 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 01:00:00.518100   62139 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 01:00:00.676317   62139 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0416 01:00:00.869766   62139 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0416 01:00:00.869855   62139 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0416 01:00:00.875363   62139 start.go:562] Will wait 60s for crictl version
	I0416 01:00:00.875424   62139 ssh_runner.go:195] Run: which crictl
	I0416 01:00:00.880947   62139 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 01:00:00.924780   62139 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0416 01:00:00.924852   62139 ssh_runner.go:195] Run: crio --version
	I0416 01:00:00.958390   62139 ssh_runner.go:195] Run: crio --version
	I0416 01:00:00.993114   62139 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0416 01:00:00.994513   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetIP
	I0416 01:00:00.997571   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 01:00:00.998032   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 01:00:00.998065   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 01:00:00.998273   62139 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0416 01:00:01.002750   62139 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 01:00:01.015709   62139 kubeadm.go:877] updating cluster {Name:old-k8s-version-800769 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-800769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.98 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 01:00:01.015810   62139 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0416 01:00:01.015853   62139 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 01:00:01.063257   62139 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0416 01:00:01.063331   62139 ssh_runner.go:195] Run: which lz4
	I0416 01:00:01.067973   62139 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0416 01:00:01.072369   62139 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0416 01:00:01.072400   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0416 01:00:02.891638   62139 crio.go:462] duration metric: took 1.823700483s to copy over tarball
	I0416 01:00:02.891723   62139 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0416 01:00:06.137253   62139 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.245498092s)
	I0416 01:00:06.137283   62139 crio.go:469] duration metric: took 3.245614896s to extract the tarball
	I0416 01:00:06.137292   62139 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0416 01:00:06.181260   62139 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 01:00:06.224646   62139 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0416 01:00:06.224682   62139 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0416 01:00:06.224762   62139 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 01:00:06.224815   62139 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0416 01:00:06.224851   62139 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0416 01:00:06.224821   62139 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0416 01:00:06.224768   62139 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0416 01:00:06.224797   62139 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0416 01:00:06.225121   62139 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0416 01:00:06.224797   62139 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0416 01:00:06.226485   62139 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0416 01:00:06.226505   62139 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0416 01:00:06.226516   62139 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0416 01:00:06.226580   62139 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0416 01:00:06.226729   62139 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0416 01:00:06.227296   62139 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 01:00:06.227311   62139 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0416 01:00:06.227315   62139 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0416 01:00:06.397101   62139 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0416 01:00:06.431142   62139 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0416 01:00:06.433152   62139 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0416 01:00:06.433876   62139 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0416 01:00:06.434844   62139 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0416 01:00:06.441478   62139 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0416 01:00:06.441524   62139 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0416 01:00:06.441558   62139 ssh_runner.go:195] Run: which crictl
	I0416 01:00:06.450391   62139 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0416 01:00:06.506375   62139 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0416 01:00:06.540080   62139 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0416 01:00:06.540250   62139 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0416 01:00:06.540121   62139 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0416 01:00:06.540299   62139 ssh_runner.go:195] Run: which crictl
	I0416 01:00:06.540305   62139 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0416 01:00:06.540343   62139 ssh_runner.go:195] Run: which crictl
	I0416 01:00:06.613287   62139 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0416 01:00:06.613305   62139 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0416 01:00:06.613334   62139 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0416 01:00:06.613339   62139 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0416 01:00:06.613381   62139 ssh_runner.go:195] Run: which crictl
	I0416 01:00:06.613381   62139 ssh_runner.go:195] Run: which crictl
	I0416 01:00:06.613490   62139 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0416 01:00:06.613522   62139 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0416 01:00:06.613569   62139 ssh_runner.go:195] Run: which crictl
	I0416 01:00:06.613384   62139 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0416 01:00:06.613620   62139 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0416 01:00:06.613657   62139 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0416 01:00:06.613716   62139 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0416 01:00:06.613722   62139 ssh_runner.go:195] Run: which crictl
	I0416 01:00:06.613665   62139 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0416 01:00:06.619153   62139 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0416 01:00:06.638065   62139 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0416 01:00:06.734018   62139 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0416 01:00:06.734134   62139 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0416 01:00:06.749273   62139 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0416 01:00:06.750536   62139 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0416 01:00:06.750576   62139 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0416 01:00:06.750655   62139 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0416 01:00:06.750594   62139 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0416 01:00:06.790321   62139 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0416 01:00:06.803564   62139 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0416 01:00:07.060494   62139 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 01:00:07.207951   62139 cache_images.go:92] duration metric: took 983.249797ms to LoadCachedImages
	W0416 01:00:07.286619   62139 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0416 01:00:07.286654   62139 kubeadm.go:928] updating node { 192.168.83.98 8443 v1.20.0 crio true true} ...
	I0416 01:00:07.286815   62139 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-800769 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.98
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-800769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 01:00:07.286916   62139 ssh_runner.go:195] Run: crio config
	I0416 01:00:07.338016   62139 cni.go:84] Creating CNI manager for ""
	I0416 01:00:07.338038   62139 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 01:00:07.338049   62139 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 01:00:07.338072   62139 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.98 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-800769 NodeName:old-k8s-version-800769 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.98"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.98 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0416 01:00:07.338207   62139 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.98
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-800769"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.98
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.98"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0416 01:00:07.338273   62139 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0416 01:00:07.349347   62139 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 01:00:07.349432   62139 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0416 01:00:07.361389   62139 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0416 01:00:07.379714   62139 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 01:00:07.397953   62139 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0416 01:00:07.416901   62139 ssh_runner.go:195] Run: grep 192.168.83.98	control-plane.minikube.internal$ /etc/hosts
	I0416 01:00:07.420904   62139 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.98	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 01:00:07.436685   62139 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 01:00:07.567945   62139 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 01:00:07.587829   62139 certs.go:68] Setting up /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769 for IP: 192.168.83.98
	I0416 01:00:07.587858   62139 certs.go:194] generating shared ca certs ...
	I0416 01:00:07.587880   62139 certs.go:226] acquiring lock for ca certs: {Name:mkcfa1570e683d94647c63485e1bbb8cf0788316 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:00:07.588087   62139 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key
	I0416 01:00:07.588155   62139 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key
	I0416 01:00:07.588171   62139 certs.go:256] generating profile certs ...
	I0416 01:00:07.606683   62139 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/client.key
	I0416 01:00:07.606823   62139 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/apiserver.key.efc35655
	I0416 01:00:07.606872   62139 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/proxy-client.key
	I0416 01:00:07.607040   62139 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem (1338 bytes)
	W0416 01:00:07.607087   62139 certs.go:480] ignoring /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897_empty.pem, impossibly tiny 0 bytes
	I0416 01:00:07.607114   62139 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem (1679 bytes)
	I0416 01:00:07.607172   62139 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem (1082 bytes)
	I0416 01:00:07.607204   62139 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem (1123 bytes)
	I0416 01:00:07.607234   62139 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem (1675 bytes)
	I0416 01:00:07.607283   62139 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem (1708 bytes)
	I0416 01:00:07.608127   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 01:00:07.658868   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 01:00:07.703378   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 01:00:07.743203   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0416 01:00:07.787335   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0416 01:00:07.823630   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0416 01:00:07.854198   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 01:00:07.881813   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0416 01:00:07.909698   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 01:00:07.935341   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem --> /usr/share/ca-certificates/14897.pem (1338 bytes)
	I0416 01:00:07.963102   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /usr/share/ca-certificates/148972.pem (1708 bytes)
	I0416 01:00:07.989657   62139 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 01:00:08.009203   62139 ssh_runner.go:195] Run: openssl version
	I0416 01:00:08.015677   62139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 01:00:08.027077   62139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:00:08.032096   62139 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:00:08.032179   62139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:00:08.038672   62139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 01:00:08.054256   62139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14897.pem && ln -fs /usr/share/ca-certificates/14897.pem /etc/ssl/certs/14897.pem"
	I0416 01:00:08.065287   62139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14897.pem
	I0416 01:00:08.069846   62139 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 23:49 /usr/share/ca-certificates/14897.pem
	I0416 01:00:08.069907   62139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14897.pem
	I0416 01:00:08.075899   62139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14897.pem /etc/ssl/certs/51391683.0"
	I0416 01:00:08.087272   62139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148972.pem && ln -fs /usr/share/ca-certificates/148972.pem /etc/ssl/certs/148972.pem"
	I0416 01:00:08.098494   62139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148972.pem
	I0416 01:00:08.103168   62139 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 23:49 /usr/share/ca-certificates/148972.pem
	I0416 01:00:08.103246   62139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148972.pem
	I0416 01:00:08.109202   62139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148972.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 01:00:08.120143   62139 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 01:00:08.125027   62139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0416 01:00:08.131716   62139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0416 01:00:08.138024   62139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0416 01:00:08.144291   62139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0416 01:00:08.150741   62139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0416 01:00:08.156931   62139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0416 01:00:08.163147   62139 kubeadm.go:391] StartCluster: {Name:old-k8s-version-800769 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-800769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.98 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 01:00:08.163254   62139 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0416 01:00:08.163298   62139 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 01:00:08.201923   62139 cri.go:89] found id: ""
	I0416 01:00:08.202000   62139 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0416 01:00:08.212441   62139 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0416 01:00:08.212462   62139 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0416 01:00:08.212467   62139 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0416 01:00:08.212514   62139 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0416 01:00:08.222702   62139 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0416 01:00:08.223670   62139 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-800769" does not appear in /home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0416 01:00:08.224332   62139 kubeconfig.go:62] /home/jenkins/minikube-integration/18647-7542/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-800769" cluster setting kubeconfig missing "old-k8s-version-800769" context setting]
	I0416 01:00:08.225340   62139 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/kubeconfig: {Name:mkbb3b028de7d57df8335e83f6dfa1b0eacb2fb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:00:08.343775   62139 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0416 01:00:08.355942   62139 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.83.98
	I0416 01:00:08.355986   62139 kubeadm.go:1154] stopping kube-system containers ...
	I0416 01:00:08.356007   62139 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0416 01:00:08.356081   62139 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 01:00:08.398894   62139 cri.go:89] found id: ""
	I0416 01:00:08.398976   62139 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0416 01:00:08.416343   62139 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 01:00:08.426901   62139 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 01:00:08.426926   62139 kubeadm.go:156] found existing configuration files:
	
	I0416 01:00:08.426981   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 01:00:08.437870   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 01:00:08.437942   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 01:00:08.452256   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 01:00:08.466375   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 01:00:08.466447   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 01:00:08.477246   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 01:00:08.487547   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 01:00:08.487615   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 01:00:08.504171   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 01:00:08.515265   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 01:00:08.515332   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 01:00:08.525186   62139 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 01:00:08.535381   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:08.657456   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:09.504421   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:09.781478   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:09.950913   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:10.044772   62139 api_server.go:52] waiting for apiserver process to appear ...
	I0416 01:00:10.044871   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:10.545002   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:11.045664   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:11.545083   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:12.045593   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:12.545696   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:13.045935   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:13.545810   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:14.045682   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:14.545524   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:15.045110   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:15.545792   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:16.045843   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:16.545684   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:17.045401   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:17.544937   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:18.045282   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:18.545707   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:19.045821   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:19.545868   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:20.045069   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:20.545134   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:21.045607   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:21.545366   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:22.044998   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:22.545403   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:23.045303   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:23.544984   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:24.045882   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:24.545194   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:25.045010   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:25.545278   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:26.045702   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:26.545233   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:27.045814   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:27.545025   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:28.045752   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:28.545833   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:29.045264   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:29.545316   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:30.045594   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:30.545046   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:31.045139   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:31.545251   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:32.045710   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:32.545963   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:33.045020   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:33.545657   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:34.045706   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:34.544972   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:35.045252   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:35.545087   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:36.045080   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:36.545787   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:37.045046   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:37.545192   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:38.045346   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:38.545599   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:39.045109   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:39.545360   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:40.045058   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:40.545745   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:41.045943   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:41.545900   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:42.045807   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:42.545278   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:43.045894   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:43.545886   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:44.044964   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:44.544997   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:45.045340   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:45.545257   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:46.045108   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:46.544994   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:47.045987   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:47.545567   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:48.045898   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:48.545631   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:49.045678   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:49.545274   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:50.045281   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:50.545926   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:51.045076   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:51.545303   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:52.045271   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:52.545407   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:53.044961   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:53.545290   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:54.044994   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:54.545292   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:55.045285   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:55.545909   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:56.045029   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:56.545343   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:57.044988   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:57.545333   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:58.045305   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:58.545871   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:59.045432   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:59.545000   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:00.045001   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:00.545855   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:01.045812   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:01.545477   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:02.045635   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:02.545690   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:03.045754   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:03.544965   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:04.045062   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:04.545196   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:05.045986   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:05.545246   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:06.045853   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:06.545863   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:07.045209   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:07.544952   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:08.045290   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:08.545296   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:09.045795   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:09.545932   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:10.045124   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:10.045209   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:10.087200   62139 cri.go:89] found id: ""
	I0416 01:01:10.087229   62139 logs.go:276] 0 containers: []
	W0416 01:01:10.087237   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:10.087243   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:10.087300   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:10.126194   62139 cri.go:89] found id: ""
	I0416 01:01:10.126218   62139 logs.go:276] 0 containers: []
	W0416 01:01:10.126225   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:10.126230   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:10.126275   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:10.165238   62139 cri.go:89] found id: ""
	I0416 01:01:10.165271   62139 logs.go:276] 0 containers: []
	W0416 01:01:10.165282   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:10.165290   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:10.165357   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:10.202896   62139 cri.go:89] found id: ""
	I0416 01:01:10.202934   62139 logs.go:276] 0 containers: []
	W0416 01:01:10.202945   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:10.202952   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:10.203015   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:10.243576   62139 cri.go:89] found id: ""
	I0416 01:01:10.243605   62139 logs.go:276] 0 containers: []
	W0416 01:01:10.243613   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:10.243619   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:10.243667   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:10.278637   62139 cri.go:89] found id: ""
	I0416 01:01:10.278661   62139 logs.go:276] 0 containers: []
	W0416 01:01:10.278669   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:10.278674   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:10.278726   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:10.316811   62139 cri.go:89] found id: ""
	I0416 01:01:10.316844   62139 logs.go:276] 0 containers: []
	W0416 01:01:10.316852   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:10.316857   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:10.316914   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:10.359934   62139 cri.go:89] found id: ""
	I0416 01:01:10.359960   62139 logs.go:276] 0 containers: []
	W0416 01:01:10.359967   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:10.359975   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:10.359987   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:10.413082   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:10.413119   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:10.428605   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:10.428632   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:10.552536   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:10.552561   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:10.552578   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:10.615054   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:10.615091   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:13.160749   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:13.178449   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:13.178505   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:13.224192   62139 cri.go:89] found id: ""
	I0416 01:01:13.224215   62139 logs.go:276] 0 containers: []
	W0416 01:01:13.224222   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:13.224228   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:13.224287   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:13.261441   62139 cri.go:89] found id: ""
	I0416 01:01:13.261469   62139 logs.go:276] 0 containers: []
	W0416 01:01:13.261476   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:13.261481   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:13.261545   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:13.296602   62139 cri.go:89] found id: ""
	I0416 01:01:13.296636   62139 logs.go:276] 0 containers: []
	W0416 01:01:13.296647   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:13.296654   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:13.296720   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:13.333944   62139 cri.go:89] found id: ""
	I0416 01:01:13.333968   62139 logs.go:276] 0 containers: []
	W0416 01:01:13.333977   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:13.333984   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:13.334049   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:13.372919   62139 cri.go:89] found id: ""
	I0416 01:01:13.372944   62139 logs.go:276] 0 containers: []
	W0416 01:01:13.372957   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:13.372965   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:13.373022   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:13.413257   62139 cri.go:89] found id: ""
	I0416 01:01:13.413287   62139 logs.go:276] 0 containers: []
	W0416 01:01:13.413299   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:13.413306   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:13.413373   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:13.451705   62139 cri.go:89] found id: ""
	I0416 01:01:13.451737   62139 logs.go:276] 0 containers: []
	W0416 01:01:13.451748   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:13.451755   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:13.451836   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:13.492549   62139 cri.go:89] found id: ""
	I0416 01:01:13.492576   62139 logs.go:276] 0 containers: []
	W0416 01:01:13.492586   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:13.492597   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:13.492613   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:13.547267   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:13.547303   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:13.568975   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:13.569002   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:13.674444   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:13.674469   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:13.674482   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:13.745111   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:13.745145   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:16.286955   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:16.301151   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:16.301257   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:16.337516   62139 cri.go:89] found id: ""
	I0416 01:01:16.337544   62139 logs.go:276] 0 containers: []
	W0416 01:01:16.337554   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:16.337561   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:16.337623   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:16.372674   62139 cri.go:89] found id: ""
	I0416 01:01:16.372702   62139 logs.go:276] 0 containers: []
	W0416 01:01:16.372712   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:16.372720   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:16.372783   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:16.411181   62139 cri.go:89] found id: ""
	I0416 01:01:16.411208   62139 logs.go:276] 0 containers: []
	W0416 01:01:16.411224   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:16.411230   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:16.411283   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:16.449063   62139 cri.go:89] found id: ""
	I0416 01:01:16.449102   62139 logs.go:276] 0 containers: []
	W0416 01:01:16.449109   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:16.449114   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:16.449183   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:16.491877   62139 cri.go:89] found id: ""
	I0416 01:01:16.491909   62139 logs.go:276] 0 containers: []
	W0416 01:01:16.491918   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:16.491924   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:16.491981   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:16.532522   62139 cri.go:89] found id: ""
	I0416 01:01:16.532553   62139 logs.go:276] 0 containers: []
	W0416 01:01:16.532564   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:16.532572   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:16.532633   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:16.572194   62139 cri.go:89] found id: ""
	I0416 01:01:16.572222   62139 logs.go:276] 0 containers: []
	W0416 01:01:16.572233   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:16.572240   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:16.572302   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:16.614671   62139 cri.go:89] found id: ""
	I0416 01:01:16.614697   62139 logs.go:276] 0 containers: []
	W0416 01:01:16.614704   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:16.614712   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:16.614726   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:16.632146   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:16.632179   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:16.707597   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:16.707621   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:16.707633   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:16.783604   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:16.783640   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:16.828937   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:16.828977   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:19.385008   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:19.400949   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:19.401035   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:19.463792   62139 cri.go:89] found id: ""
	I0416 01:01:19.463825   62139 logs.go:276] 0 containers: []
	W0416 01:01:19.463836   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:19.463843   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:19.463910   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:19.523289   62139 cri.go:89] found id: ""
	I0416 01:01:19.523322   62139 logs.go:276] 0 containers: []
	W0416 01:01:19.523332   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:19.523340   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:19.523392   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:19.558891   62139 cri.go:89] found id: ""
	I0416 01:01:19.558928   62139 logs.go:276] 0 containers: []
	W0416 01:01:19.558939   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:19.558946   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:19.559009   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:19.597876   62139 cri.go:89] found id: ""
	I0416 01:01:19.597905   62139 logs.go:276] 0 containers: []
	W0416 01:01:19.597917   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:19.597925   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:19.597980   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:19.637536   62139 cri.go:89] found id: ""
	I0416 01:01:19.637563   62139 logs.go:276] 0 containers: []
	W0416 01:01:19.637571   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:19.637576   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:19.637623   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:19.674414   62139 cri.go:89] found id: ""
	I0416 01:01:19.674447   62139 logs.go:276] 0 containers: []
	W0416 01:01:19.674458   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:19.674465   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:19.674525   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:19.709717   62139 cri.go:89] found id: ""
	I0416 01:01:19.709751   62139 logs.go:276] 0 containers: []
	W0416 01:01:19.709761   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:19.709769   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:19.709837   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:19.747458   62139 cri.go:89] found id: ""
	I0416 01:01:19.747482   62139 logs.go:276] 0 containers: []
	W0416 01:01:19.747489   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:19.747505   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:19.747523   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:19.834811   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:19.834846   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:19.876398   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:19.876428   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:19.931596   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:19.931632   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:19.947074   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:19.947103   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:20.023434   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:22.524036   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:22.539399   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:22.539488   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:22.574696   62139 cri.go:89] found id: ""
	I0416 01:01:22.574723   62139 logs.go:276] 0 containers: []
	W0416 01:01:22.574733   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:22.574741   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:22.574805   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:22.617474   62139 cri.go:89] found id: ""
	I0416 01:01:22.617503   62139 logs.go:276] 0 containers: []
	W0416 01:01:22.617514   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:22.617521   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:22.617579   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:22.657744   62139 cri.go:89] found id: ""
	I0416 01:01:22.657773   62139 logs.go:276] 0 containers: []
	W0416 01:01:22.657781   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:22.657786   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:22.657842   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:22.695513   62139 cri.go:89] found id: ""
	I0416 01:01:22.695544   62139 logs.go:276] 0 containers: []
	W0416 01:01:22.695552   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:22.695557   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:22.695606   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:22.732943   62139 cri.go:89] found id: ""
	I0416 01:01:22.732973   62139 logs.go:276] 0 containers: []
	W0416 01:01:22.732983   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:22.732990   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:22.733051   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:22.768735   62139 cri.go:89] found id: ""
	I0416 01:01:22.768767   62139 logs.go:276] 0 containers: []
	W0416 01:01:22.768775   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:22.768782   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:22.768842   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:22.804330   62139 cri.go:89] found id: ""
	I0416 01:01:22.804352   62139 logs.go:276] 0 containers: []
	W0416 01:01:22.804361   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:22.804367   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:22.804425   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:22.842165   62139 cri.go:89] found id: ""
	I0416 01:01:22.842192   62139 logs.go:276] 0 containers: []
	W0416 01:01:22.842199   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:22.842207   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:22.842219   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:22.921859   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:22.921880   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:22.921893   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:23.003432   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:23.003468   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:23.045446   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:23.045476   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:23.097327   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:23.097358   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:25.612297   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:25.627489   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:25.627565   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:25.664040   62139 cri.go:89] found id: ""
	I0416 01:01:25.664072   62139 logs.go:276] 0 containers: []
	W0416 01:01:25.664083   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:25.664091   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:25.664149   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:25.701004   62139 cri.go:89] found id: ""
	I0416 01:01:25.701029   62139 logs.go:276] 0 containers: []
	W0416 01:01:25.701036   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:25.701042   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:25.701087   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:25.740108   62139 cri.go:89] found id: ""
	I0416 01:01:25.740136   62139 logs.go:276] 0 containers: []
	W0416 01:01:25.740144   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:25.740150   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:25.740194   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:25.778413   62139 cri.go:89] found id: ""
	I0416 01:01:25.778447   62139 logs.go:276] 0 containers: []
	W0416 01:01:25.778458   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:25.778465   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:25.778530   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:25.815188   62139 cri.go:89] found id: ""
	I0416 01:01:25.815215   62139 logs.go:276] 0 containers: []
	W0416 01:01:25.815223   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:25.815230   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:25.815277   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:25.856370   62139 cri.go:89] found id: ""
	I0416 01:01:25.856402   62139 logs.go:276] 0 containers: []
	W0416 01:01:25.856410   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:25.856416   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:25.856476   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:25.895363   62139 cri.go:89] found id: ""
	I0416 01:01:25.895388   62139 logs.go:276] 0 containers: []
	W0416 01:01:25.895396   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:25.895402   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:25.895455   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:25.931854   62139 cri.go:89] found id: ""
	I0416 01:01:25.931881   62139 logs.go:276] 0 containers: []
	W0416 01:01:25.931889   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:25.931897   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:25.931923   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:26.008395   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:26.008419   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:26.008436   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:26.087946   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:26.087983   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:26.134693   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:26.134725   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:26.189618   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:26.189652   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:28.705010   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:28.719575   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:28.719644   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:28.759011   62139 cri.go:89] found id: ""
	I0416 01:01:28.759037   62139 logs.go:276] 0 containers: []
	W0416 01:01:28.759044   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:28.759050   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:28.759112   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:28.794640   62139 cri.go:89] found id: ""
	I0416 01:01:28.794675   62139 logs.go:276] 0 containers: []
	W0416 01:01:28.794687   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:28.794695   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:28.794807   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:28.835634   62139 cri.go:89] found id: ""
	I0416 01:01:28.835663   62139 logs.go:276] 0 containers: []
	W0416 01:01:28.835674   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:28.835681   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:28.835747   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:28.875384   62139 cri.go:89] found id: ""
	I0416 01:01:28.875408   62139 logs.go:276] 0 containers: []
	W0416 01:01:28.875426   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:28.875433   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:28.875484   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:28.921202   62139 cri.go:89] found id: ""
	I0416 01:01:28.921234   62139 logs.go:276] 0 containers: []
	W0416 01:01:28.921244   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:28.921252   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:28.921314   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:28.958791   62139 cri.go:89] found id: ""
	I0416 01:01:28.958820   62139 logs.go:276] 0 containers: []
	W0416 01:01:28.958828   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:28.958834   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:28.958923   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:28.996136   62139 cri.go:89] found id: ""
	I0416 01:01:28.996168   62139 logs.go:276] 0 containers: []
	W0416 01:01:28.996179   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:28.996185   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:28.996259   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:29.033912   62139 cri.go:89] found id: ""
	I0416 01:01:29.033939   62139 logs.go:276] 0 containers: []
	W0416 01:01:29.033946   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:29.033954   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:29.033969   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:29.114162   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:29.114209   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:29.153934   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:29.153965   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:29.207548   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:29.207584   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:29.222158   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:29.222184   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:29.297414   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:31.798026   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:31.812740   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:31.812815   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:31.855058   62139 cri.go:89] found id: ""
	I0416 01:01:31.855087   62139 logs.go:276] 0 containers: []
	W0416 01:01:31.855098   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:31.855105   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:31.855172   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:31.897128   62139 cri.go:89] found id: ""
	I0416 01:01:31.897170   62139 logs.go:276] 0 containers: []
	W0416 01:01:31.897192   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:31.897200   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:31.897259   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:31.934497   62139 cri.go:89] found id: ""
	I0416 01:01:31.934520   62139 logs.go:276] 0 containers: []
	W0416 01:01:31.934532   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:31.934541   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:31.934588   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:31.974020   62139 cri.go:89] found id: ""
	I0416 01:01:31.974051   62139 logs.go:276] 0 containers: []
	W0416 01:01:31.974062   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:31.974093   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:31.974163   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:32.015433   62139 cri.go:89] found id: ""
	I0416 01:01:32.015460   62139 logs.go:276] 0 containers: []
	W0416 01:01:32.015471   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:32.015477   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:32.015540   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:32.058286   62139 cri.go:89] found id: ""
	I0416 01:01:32.058336   62139 logs.go:276] 0 containers: []
	W0416 01:01:32.058345   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:32.058351   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:32.058408   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:32.100331   62139 cri.go:89] found id: ""
	I0416 01:01:32.102041   62139 logs.go:276] 0 containers: []
	W0416 01:01:32.102054   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:32.102061   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:32.102115   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:32.141420   62139 cri.go:89] found id: ""
	I0416 01:01:32.141446   62139 logs.go:276] 0 containers: []
	W0416 01:01:32.141454   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:32.141462   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:32.141473   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:32.195323   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:32.195364   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:32.210180   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:32.210206   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:32.282548   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:32.282570   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:32.282585   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:32.360627   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:32.360663   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:34.901239   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:34.917097   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:34.917205   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:34.959297   62139 cri.go:89] found id: ""
	I0416 01:01:34.959327   62139 logs.go:276] 0 containers: []
	W0416 01:01:34.959337   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:34.959344   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:34.959422   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:35.000927   62139 cri.go:89] found id: ""
	I0416 01:01:35.000974   62139 logs.go:276] 0 containers: []
	W0416 01:01:35.000984   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:35.001000   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:35.001064   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:35.038049   62139 cri.go:89] found id: ""
	I0416 01:01:35.038073   62139 logs.go:276] 0 containers: []
	W0416 01:01:35.038082   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:35.038090   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:35.038143   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:35.075396   62139 cri.go:89] found id: ""
	I0416 01:01:35.075467   62139 logs.go:276] 0 containers: []
	W0416 01:01:35.075481   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:35.075490   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:35.075591   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:35.114297   62139 cri.go:89] found id: ""
	I0416 01:01:35.114325   62139 logs.go:276] 0 containers: []
	W0416 01:01:35.114335   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:35.114343   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:35.114405   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:35.152075   62139 cri.go:89] found id: ""
	I0416 01:01:35.152099   62139 logs.go:276] 0 containers: []
	W0416 01:01:35.152106   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:35.152112   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:35.152161   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:35.187945   62139 cri.go:89] found id: ""
	I0416 01:01:35.187974   62139 logs.go:276] 0 containers: []
	W0416 01:01:35.187984   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:35.187991   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:35.188057   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:35.225225   62139 cri.go:89] found id: ""
	I0416 01:01:35.225253   62139 logs.go:276] 0 containers: []
	W0416 01:01:35.225262   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:35.225272   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:35.225287   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:35.279584   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:35.279628   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:35.293416   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:35.293456   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:35.370122   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:35.370147   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:35.370159   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:35.451482   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:35.451517   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:37.994358   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:38.008209   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:38.008277   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:38.047905   62139 cri.go:89] found id: ""
	I0416 01:01:38.047943   62139 logs.go:276] 0 containers: []
	W0416 01:01:38.047955   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:38.047962   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:38.048016   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:38.085749   62139 cri.go:89] found id: ""
	I0416 01:01:38.085780   62139 logs.go:276] 0 containers: []
	W0416 01:01:38.085790   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:38.085797   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:38.085864   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:38.122396   62139 cri.go:89] found id: ""
	I0416 01:01:38.122419   62139 logs.go:276] 0 containers: []
	W0416 01:01:38.122427   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:38.122432   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:38.122479   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:38.159284   62139 cri.go:89] found id: ""
	I0416 01:01:38.159313   62139 logs.go:276] 0 containers: []
	W0416 01:01:38.159322   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:38.159329   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:38.159390   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:38.193245   62139 cri.go:89] found id: ""
	I0416 01:01:38.193280   62139 logs.go:276] 0 containers: []
	W0416 01:01:38.193291   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:38.193298   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:38.193362   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:38.229147   62139 cri.go:89] found id: ""
	I0416 01:01:38.229179   62139 logs.go:276] 0 containers: []
	W0416 01:01:38.229188   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:38.229194   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:38.229251   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:38.267285   62139 cri.go:89] found id: ""
	I0416 01:01:38.267309   62139 logs.go:276] 0 containers: []
	W0416 01:01:38.267317   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:38.267321   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:38.267389   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:38.305181   62139 cri.go:89] found id: ""
	I0416 01:01:38.305207   62139 logs.go:276] 0 containers: []
	W0416 01:01:38.305215   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:38.305222   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:38.305237   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:38.321714   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:38.321742   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:38.398352   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:38.398372   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:38.398382   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:38.474095   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:38.474129   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:38.520540   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:38.520581   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:41.072083   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:41.086767   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:41.086860   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:41.125119   62139 cri.go:89] found id: ""
	I0416 01:01:41.125149   62139 logs.go:276] 0 containers: []
	W0416 01:01:41.125175   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:41.125182   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:41.125253   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:41.159885   62139 cri.go:89] found id: ""
	I0416 01:01:41.159915   62139 logs.go:276] 0 containers: []
	W0416 01:01:41.159925   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:41.159931   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:41.160012   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:41.196334   62139 cri.go:89] found id: ""
	I0416 01:01:41.196366   62139 logs.go:276] 0 containers: []
	W0416 01:01:41.196377   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:41.196385   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:41.196447   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:41.234254   62139 cri.go:89] found id: ""
	I0416 01:01:41.234282   62139 logs.go:276] 0 containers: []
	W0416 01:01:41.234300   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:41.234319   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:41.234413   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:41.271499   62139 cri.go:89] found id: ""
	I0416 01:01:41.271523   62139 logs.go:276] 0 containers: []
	W0416 01:01:41.271531   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:41.271536   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:41.271604   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:41.311064   62139 cri.go:89] found id: ""
	I0416 01:01:41.311096   62139 logs.go:276] 0 containers: []
	W0416 01:01:41.311107   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:41.311114   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:41.311179   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:41.349012   62139 cri.go:89] found id: ""
	I0416 01:01:41.349043   62139 logs.go:276] 0 containers: []
	W0416 01:01:41.349053   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:41.349060   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:41.349117   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:41.385258   62139 cri.go:89] found id: ""
	I0416 01:01:41.385298   62139 logs.go:276] 0 containers: []
	W0416 01:01:41.385305   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:41.385315   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:41.385330   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:41.470086   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:41.470130   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:41.513835   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:41.513870   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:41.565980   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:41.566013   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:41.582647   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:41.582678   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:41.658928   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:44.159107   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:44.173015   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:44.173088   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:44.214310   62139 cri.go:89] found id: ""
	I0416 01:01:44.214345   62139 logs.go:276] 0 containers: []
	W0416 01:01:44.214363   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:44.214374   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:44.214462   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:44.256476   62139 cri.go:89] found id: ""
	I0416 01:01:44.256503   62139 logs.go:276] 0 containers: []
	W0416 01:01:44.256511   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:44.256516   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:44.256577   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:44.298047   62139 cri.go:89] found id: ""
	I0416 01:01:44.298079   62139 logs.go:276] 0 containers: []
	W0416 01:01:44.298089   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:44.298097   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:44.298158   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:44.339165   62139 cri.go:89] found id: ""
	I0416 01:01:44.339196   62139 logs.go:276] 0 containers: []
	W0416 01:01:44.339206   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:44.339213   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:44.339280   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:44.378078   62139 cri.go:89] found id: ""
	I0416 01:01:44.378108   62139 logs.go:276] 0 containers: []
	W0416 01:01:44.378116   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:44.378122   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:44.378170   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:44.421494   62139 cri.go:89] found id: ""
	I0416 01:01:44.421525   62139 logs.go:276] 0 containers: []
	W0416 01:01:44.421536   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:44.421543   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:44.421609   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:44.459919   62139 cri.go:89] found id: ""
	I0416 01:01:44.459948   62139 logs.go:276] 0 containers: []
	W0416 01:01:44.459958   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:44.459965   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:44.460025   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:44.499448   62139 cri.go:89] found id: ""
	I0416 01:01:44.499479   62139 logs.go:276] 0 containers: []
	W0416 01:01:44.499489   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:44.499500   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:44.499516   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:44.555122   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:44.555159   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:44.572048   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:44.572075   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:44.646252   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:44.646283   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:44.646299   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:44.730593   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:44.730620   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:47.276658   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:47.291354   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:47.291431   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:47.334998   62139 cri.go:89] found id: ""
	I0416 01:01:47.335036   62139 logs.go:276] 0 containers: []
	W0416 01:01:47.335055   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:47.335062   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:47.335121   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:47.376546   62139 cri.go:89] found id: ""
	I0416 01:01:47.376575   62139 logs.go:276] 0 containers: []
	W0416 01:01:47.376582   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:47.376587   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:47.376647   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:47.418609   62139 cri.go:89] found id: ""
	I0416 01:01:47.418642   62139 logs.go:276] 0 containers: []
	W0416 01:01:47.418654   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:47.418661   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:47.418721   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:47.459432   62139 cri.go:89] found id: ""
	I0416 01:01:47.459458   62139 logs.go:276] 0 containers: []
	W0416 01:01:47.459465   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:47.459470   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:47.459518   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:47.497776   62139 cri.go:89] found id: ""
	I0416 01:01:47.497800   62139 logs.go:276] 0 containers: []
	W0416 01:01:47.497808   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:47.497813   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:47.497866   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:47.536803   62139 cri.go:89] found id: ""
	I0416 01:01:47.536835   62139 logs.go:276] 0 containers: []
	W0416 01:01:47.536842   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:47.536849   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:47.536916   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:47.575883   62139 cri.go:89] found id: ""
	I0416 01:01:47.575916   62139 logs.go:276] 0 containers: []
	W0416 01:01:47.575923   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:47.575931   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:47.575976   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:47.627676   62139 cri.go:89] found id: ""
	I0416 01:01:47.627697   62139 logs.go:276] 0 containers: []
	W0416 01:01:47.627703   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:47.627711   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:47.627725   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:47.669714   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:47.669745   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:47.721349   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:47.721389   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:47.735833   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:47.735859   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:47.806890   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:47.806913   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:47.806925   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:50.386960   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:50.400832   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:50.400901   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:50.443042   62139 cri.go:89] found id: ""
	I0416 01:01:50.443076   62139 logs.go:276] 0 containers: []
	W0416 01:01:50.443086   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:50.443094   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:50.443157   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:50.480495   62139 cri.go:89] found id: ""
	I0416 01:01:50.480526   62139 logs.go:276] 0 containers: []
	W0416 01:01:50.480536   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:50.480544   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:50.480602   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:50.516578   62139 cri.go:89] found id: ""
	I0416 01:01:50.516605   62139 logs.go:276] 0 containers: []
	W0416 01:01:50.516613   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:50.516618   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:50.516676   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:50.555302   62139 cri.go:89] found id: ""
	I0416 01:01:50.555330   62139 logs.go:276] 0 containers: []
	W0416 01:01:50.555337   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:50.555344   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:50.555388   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:50.594647   62139 cri.go:89] found id: ""
	I0416 01:01:50.594674   62139 logs.go:276] 0 containers: []
	W0416 01:01:50.594682   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:50.594688   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:50.594737   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:50.633401   62139 cri.go:89] found id: ""
	I0416 01:01:50.633428   62139 logs.go:276] 0 containers: []
	W0416 01:01:50.633436   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:50.633442   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:50.633501   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:50.673714   62139 cri.go:89] found id: ""
	I0416 01:01:50.673744   62139 logs.go:276] 0 containers: []
	W0416 01:01:50.673755   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:50.673763   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:50.673811   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:50.710103   62139 cri.go:89] found id: ""
	I0416 01:01:50.710127   62139 logs.go:276] 0 containers: []
	W0416 01:01:50.710134   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:50.710142   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:50.710153   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:50.765121   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:50.765168   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:50.780407   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:50.780436   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:50.855602   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:50.855635   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:50.855663   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:50.937249   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:50.937283   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:53.481261   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:53.495872   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:53.495931   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:53.532710   62139 cri.go:89] found id: ""
	I0416 01:01:53.532738   62139 logs.go:276] 0 containers: []
	W0416 01:01:53.532748   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:53.532756   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:53.532815   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:53.568734   62139 cri.go:89] found id: ""
	I0416 01:01:53.568763   62139 logs.go:276] 0 containers: []
	W0416 01:01:53.568770   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:53.568776   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:53.568841   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:53.608937   62139 cri.go:89] found id: ""
	I0416 01:01:53.608965   62139 logs.go:276] 0 containers: []
	W0416 01:01:53.608976   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:53.608984   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:53.609042   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:53.646538   62139 cri.go:89] found id: ""
	I0416 01:01:53.646573   62139 logs.go:276] 0 containers: []
	W0416 01:01:53.646585   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:53.646592   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:53.646657   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:53.687761   62139 cri.go:89] found id: ""
	I0416 01:01:53.687792   62139 logs.go:276] 0 containers: []
	W0416 01:01:53.687801   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:53.687809   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:53.687872   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:53.726126   62139 cri.go:89] found id: ""
	I0416 01:01:53.726161   62139 logs.go:276] 0 containers: []
	W0416 01:01:53.726169   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:53.726174   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:53.726224   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:53.762583   62139 cri.go:89] found id: ""
	I0416 01:01:53.762609   62139 logs.go:276] 0 containers: []
	W0416 01:01:53.762618   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:53.762625   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:53.762695   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:53.803685   62139 cri.go:89] found id: ""
	I0416 01:01:53.803715   62139 logs.go:276] 0 containers: []
	W0416 01:01:53.803726   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:53.803737   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:53.803751   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:53.862215   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:53.862255   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:53.877713   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:53.877743   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:53.953394   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:53.953422   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:53.953438   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:54.044657   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:54.044698   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:56.602100   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:56.616548   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:56.616632   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:56.653765   62139 cri.go:89] found id: ""
	I0416 01:01:56.653794   62139 logs.go:276] 0 containers: []
	W0416 01:01:56.653810   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:56.653817   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:56.653879   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:56.691394   62139 cri.go:89] found id: ""
	I0416 01:01:56.691416   62139 logs.go:276] 0 containers: []
	W0416 01:01:56.691422   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:56.691428   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:56.691475   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:56.728995   62139 cri.go:89] found id: ""
	I0416 01:01:56.729017   62139 logs.go:276] 0 containers: []
	W0416 01:01:56.729024   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:56.729029   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:56.729078   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:56.769119   62139 cri.go:89] found id: ""
	I0416 01:01:56.769184   62139 logs.go:276] 0 containers: []
	W0416 01:01:56.769196   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:56.769204   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:56.769270   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:56.810562   62139 cri.go:89] found id: ""
	I0416 01:01:56.810589   62139 logs.go:276] 0 containers: []
	W0416 01:01:56.810597   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:56.810608   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:56.810669   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:56.849367   62139 cri.go:89] found id: ""
	I0416 01:01:56.849392   62139 logs.go:276] 0 containers: []
	W0416 01:01:56.849399   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:56.849405   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:56.849464   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:56.887330   62139 cri.go:89] found id: ""
	I0416 01:01:56.887359   62139 logs.go:276] 0 containers: []
	W0416 01:01:56.887370   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:56.887378   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:56.887461   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:56.926636   62139 cri.go:89] found id: ""
	I0416 01:01:56.926664   62139 logs.go:276] 0 containers: []
	W0416 01:01:56.926672   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:56.926682   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:56.926697   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:56.981836   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:56.981875   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:56.996385   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:56.996411   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:57.071026   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:57.071054   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:57.071070   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:57.155430   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:57.155466   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:59.701547   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:59.714465   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:59.714526   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:59.759791   62139 cri.go:89] found id: ""
	I0416 01:01:59.759830   62139 logs.go:276] 0 containers: []
	W0416 01:01:59.759841   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:59.759849   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:59.759914   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:59.813303   62139 cri.go:89] found id: ""
	I0416 01:01:59.813334   62139 logs.go:276] 0 containers: []
	W0416 01:01:59.813343   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:59.813353   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:59.813406   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:59.872291   62139 cri.go:89] found id: ""
	I0416 01:01:59.872328   62139 logs.go:276] 0 containers: []
	W0416 01:01:59.872338   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:59.872347   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:59.872423   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:59.910397   62139 cri.go:89] found id: ""
	I0416 01:01:59.910425   62139 logs.go:276] 0 containers: []
	W0416 01:01:59.910437   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:59.910444   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:59.910512   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:59.953656   62139 cri.go:89] found id: ""
	I0416 01:01:59.953685   62139 logs.go:276] 0 containers: []
	W0416 01:01:59.953695   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:59.953703   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:59.953779   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:59.993193   62139 cri.go:89] found id: ""
	I0416 01:01:59.993220   62139 logs.go:276] 0 containers: []
	W0416 01:01:59.993229   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:59.993239   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:59.993298   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:00.030205   62139 cri.go:89] found id: ""
	I0416 01:02:00.030229   62139 logs.go:276] 0 containers: []
	W0416 01:02:00.030237   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:00.030242   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:00.030302   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:00.068160   62139 cri.go:89] found id: ""
	I0416 01:02:00.068189   62139 logs.go:276] 0 containers: []
	W0416 01:02:00.068199   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:00.068211   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:00.068226   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:00.149383   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:00.149416   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:00.188000   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:00.188025   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:00.240522   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:00.240550   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:00.254189   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:00.254215   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:00.331483   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:02.832656   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:02.846826   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:02.846907   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:02.883397   62139 cri.go:89] found id: ""
	I0416 01:02:02.883428   62139 logs.go:276] 0 containers: []
	W0416 01:02:02.883439   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:02.883446   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:02.883499   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:02.923686   62139 cri.go:89] found id: ""
	I0416 01:02:02.923708   62139 logs.go:276] 0 containers: []
	W0416 01:02:02.923715   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:02.923719   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:02.923770   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:02.964155   62139 cri.go:89] found id: ""
	I0416 01:02:02.964180   62139 logs.go:276] 0 containers: []
	W0416 01:02:02.964188   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:02.964193   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:02.964247   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:03.005357   62139 cri.go:89] found id: ""
	I0416 01:02:03.005386   62139 logs.go:276] 0 containers: []
	W0416 01:02:03.005396   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:03.005403   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:03.005464   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:03.047221   62139 cri.go:89] found id: ""
	I0416 01:02:03.047246   62139 logs.go:276] 0 containers: []
	W0416 01:02:03.047257   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:03.047264   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:03.047326   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:03.088737   62139 cri.go:89] found id: ""
	I0416 01:02:03.088767   62139 logs.go:276] 0 containers: []
	W0416 01:02:03.088776   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:03.088784   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:03.088846   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:03.129756   62139 cri.go:89] found id: ""
	I0416 01:02:03.129778   62139 logs.go:276] 0 containers: []
	W0416 01:02:03.129785   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:03.129790   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:03.129837   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:03.169422   62139 cri.go:89] found id: ""
	I0416 01:02:03.169447   62139 logs.go:276] 0 containers: []
	W0416 01:02:03.169459   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:03.169468   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:03.169478   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:03.246485   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:03.246503   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:03.246514   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:03.326498   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:03.326533   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:03.372788   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:03.372817   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:03.428561   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:03.428603   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:05.944274   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:05.957744   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:05.957813   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:05.993348   62139 cri.go:89] found id: ""
	I0416 01:02:05.993400   62139 logs.go:276] 0 containers: []
	W0416 01:02:05.993411   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:05.993430   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:05.993497   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:06.034811   62139 cri.go:89] found id: ""
	I0416 01:02:06.034848   62139 logs.go:276] 0 containers: []
	W0416 01:02:06.034859   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:06.034866   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:06.034953   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:06.079047   62139 cri.go:89] found id: ""
	I0416 01:02:06.079070   62139 logs.go:276] 0 containers: []
	W0416 01:02:06.079078   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:06.079082   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:06.079127   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:06.122494   62139 cri.go:89] found id: ""
	I0416 01:02:06.122513   62139 logs.go:276] 0 containers: []
	W0416 01:02:06.122520   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:06.122525   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:06.122589   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:06.163436   62139 cri.go:89] found id: ""
	I0416 01:02:06.163461   62139 logs.go:276] 0 containers: []
	W0416 01:02:06.163468   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:06.163473   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:06.163534   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:06.205036   62139 cri.go:89] found id: ""
	I0416 01:02:06.205064   62139 logs.go:276] 0 containers: []
	W0416 01:02:06.205072   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:06.205077   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:06.205134   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:06.242056   62139 cri.go:89] found id: ""
	I0416 01:02:06.242084   62139 logs.go:276] 0 containers: []
	W0416 01:02:06.242094   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:06.242107   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:06.242166   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:06.278604   62139 cri.go:89] found id: ""
	I0416 01:02:06.278636   62139 logs.go:276] 0 containers: []
	W0416 01:02:06.278646   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:06.278656   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:06.278671   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:06.334631   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:06.334658   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:06.348199   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:06.348227   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:06.424774   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:06.424793   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:06.424804   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:06.503509   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:06.503542   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:09.046665   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:09.061072   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:09.061173   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:09.097482   62139 cri.go:89] found id: ""
	I0416 01:02:09.097514   62139 logs.go:276] 0 containers: []
	W0416 01:02:09.097524   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:09.097543   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:09.097613   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:09.135124   62139 cri.go:89] found id: ""
	I0416 01:02:09.135157   62139 logs.go:276] 0 containers: []
	W0416 01:02:09.135168   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:09.135175   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:09.135236   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:09.173887   62139 cri.go:89] found id: ""
	I0416 01:02:09.173912   62139 logs.go:276] 0 containers: []
	W0416 01:02:09.173920   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:09.173925   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:09.173983   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:09.209658   62139 cri.go:89] found id: ""
	I0416 01:02:09.209683   62139 logs.go:276] 0 containers: []
	W0416 01:02:09.209691   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:09.209702   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:09.209763   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:09.249149   62139 cri.go:89] found id: ""
	I0416 01:02:09.249200   62139 logs.go:276] 0 containers: []
	W0416 01:02:09.249209   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:09.249214   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:09.249292   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:09.291447   62139 cri.go:89] found id: ""
	I0416 01:02:09.291477   62139 logs.go:276] 0 containers: []
	W0416 01:02:09.291487   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:09.291494   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:09.291553   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:09.329248   62139 cri.go:89] found id: ""
	I0416 01:02:09.329271   62139 logs.go:276] 0 containers: []
	W0416 01:02:09.329281   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:09.329288   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:09.329345   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:09.365585   62139 cri.go:89] found id: ""
	I0416 01:02:09.365613   62139 logs.go:276] 0 containers: []
	W0416 01:02:09.365622   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:09.365632   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:09.365645   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:09.418998   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:09.419031   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:09.433531   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:09.433558   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:09.508543   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:09.508573   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:09.508588   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:09.593889   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:09.593930   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:12.139020   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:12.154268   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:12.154349   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:12.192717   62139 cri.go:89] found id: ""
	I0416 01:02:12.192746   62139 logs.go:276] 0 containers: []
	W0416 01:02:12.192758   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:12.192765   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:12.192832   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:12.230633   62139 cri.go:89] found id: ""
	I0416 01:02:12.230662   62139 logs.go:276] 0 containers: []
	W0416 01:02:12.230674   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:12.230681   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:12.230729   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:12.271108   62139 cri.go:89] found id: ""
	I0416 01:02:12.271150   62139 logs.go:276] 0 containers: []
	W0416 01:02:12.271161   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:12.271168   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:12.271233   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:12.310161   62139 cri.go:89] found id: ""
	I0416 01:02:12.310186   62139 logs.go:276] 0 containers: []
	W0416 01:02:12.310194   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:12.310201   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:12.310272   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:12.349638   62139 cri.go:89] found id: ""
	I0416 01:02:12.349668   62139 logs.go:276] 0 containers: []
	W0416 01:02:12.349678   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:12.349686   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:12.349766   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:12.391565   62139 cri.go:89] found id: ""
	I0416 01:02:12.391597   62139 logs.go:276] 0 containers: []
	W0416 01:02:12.391607   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:12.391620   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:12.391681   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:12.429142   62139 cri.go:89] found id: ""
	I0416 01:02:12.429186   62139 logs.go:276] 0 containers: []
	W0416 01:02:12.429195   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:12.429200   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:12.429249   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:12.466209   62139 cri.go:89] found id: ""
	I0416 01:02:12.466238   62139 logs.go:276] 0 containers: []
	W0416 01:02:12.466249   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:12.466260   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:12.466277   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:12.551333   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:12.551355   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:12.551367   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:12.634465   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:12.634496   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:12.675198   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:12.675231   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:12.728933   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:12.728962   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:15.243521   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:15.258589   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:15.258657   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:15.301901   62139 cri.go:89] found id: ""
	I0416 01:02:15.301931   62139 logs.go:276] 0 containers: []
	W0416 01:02:15.301943   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:15.301951   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:15.302006   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:15.345932   62139 cri.go:89] found id: ""
	I0416 01:02:15.346011   62139 logs.go:276] 0 containers: []
	W0416 01:02:15.346032   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:15.346043   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:15.346113   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:15.387957   62139 cri.go:89] found id: ""
	I0416 01:02:15.387983   62139 logs.go:276] 0 containers: []
	W0416 01:02:15.387991   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:15.387996   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:15.388044   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:15.424887   62139 cri.go:89] found id: ""
	I0416 01:02:15.424916   62139 logs.go:276] 0 containers: []
	W0416 01:02:15.424927   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:15.424934   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:15.424996   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:15.460088   62139 cri.go:89] found id: ""
	I0416 01:02:15.460113   62139 logs.go:276] 0 containers: []
	W0416 01:02:15.460120   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:15.460125   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:15.460172   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:15.495567   62139 cri.go:89] found id: ""
	I0416 01:02:15.495597   62139 logs.go:276] 0 containers: []
	W0416 01:02:15.495607   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:15.495615   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:15.495692   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:15.533901   62139 cri.go:89] found id: ""
	I0416 01:02:15.533931   62139 logs.go:276] 0 containers: []
	W0416 01:02:15.533940   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:15.533946   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:15.533996   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:15.576665   62139 cri.go:89] found id: ""
	I0416 01:02:15.576692   62139 logs.go:276] 0 containers: []
	W0416 01:02:15.576702   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:15.576712   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:15.576728   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:15.626933   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:15.626961   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:15.681627   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:15.681656   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:15.695572   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:15.695608   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:15.768910   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:15.768934   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:15.768945   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:18.349776   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:18.363499   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:18.363568   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:18.404210   62139 cri.go:89] found id: ""
	I0416 01:02:18.404234   62139 logs.go:276] 0 containers: []
	W0416 01:02:18.404241   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:18.404246   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:18.404304   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:18.444610   62139 cri.go:89] found id: ""
	I0416 01:02:18.444641   62139 logs.go:276] 0 containers: []
	W0416 01:02:18.444651   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:18.444658   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:18.444722   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:18.483134   62139 cri.go:89] found id: ""
	I0416 01:02:18.483160   62139 logs.go:276] 0 containers: []
	W0416 01:02:18.483168   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:18.483173   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:18.483220   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:18.522120   62139 cri.go:89] found id: ""
	I0416 01:02:18.522144   62139 logs.go:276] 0 containers: []
	W0416 01:02:18.522156   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:18.522161   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:18.522205   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:18.566293   62139 cri.go:89] found id: ""
	I0416 01:02:18.566319   62139 logs.go:276] 0 containers: []
	W0416 01:02:18.566327   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:18.566332   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:18.566391   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:18.604000   62139 cri.go:89] found id: ""
	I0416 01:02:18.604028   62139 logs.go:276] 0 containers: []
	W0416 01:02:18.604036   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:18.604042   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:18.604089   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:18.641967   62139 cri.go:89] found id: ""
	I0416 01:02:18.641999   62139 logs.go:276] 0 containers: []
	W0416 01:02:18.642009   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:18.642016   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:18.642080   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:18.683494   62139 cri.go:89] found id: ""
	I0416 01:02:18.683533   62139 logs.go:276] 0 containers: []
	W0416 01:02:18.683544   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:18.683555   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:18.683570   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:18.761674   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:18.761699   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:18.761714   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:18.849959   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:18.849995   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:18.895534   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:18.895570   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:18.949287   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:18.949320   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:21.464393   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:21.479019   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:21.479087   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:21.516262   62139 cri.go:89] found id: ""
	I0416 01:02:21.516303   62139 logs.go:276] 0 containers: []
	W0416 01:02:21.516313   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:21.516323   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:21.516385   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:21.554279   62139 cri.go:89] found id: ""
	I0416 01:02:21.554315   62139 logs.go:276] 0 containers: []
	W0416 01:02:21.554327   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:21.554334   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:21.554393   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:21.590889   62139 cri.go:89] found id: ""
	I0416 01:02:21.590918   62139 logs.go:276] 0 containers: []
	W0416 01:02:21.590928   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:21.590935   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:21.590996   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:21.629925   62139 cri.go:89] found id: ""
	I0416 01:02:21.629955   62139 logs.go:276] 0 containers: []
	W0416 01:02:21.629965   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:21.629972   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:21.630032   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:21.667947   62139 cri.go:89] found id: ""
	I0416 01:02:21.667975   62139 logs.go:276] 0 containers: []
	W0416 01:02:21.667983   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:21.667988   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:21.668045   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:21.706275   62139 cri.go:89] found id: ""
	I0416 01:02:21.706308   62139 logs.go:276] 0 containers: []
	W0416 01:02:21.706318   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:21.706326   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:21.706392   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:21.748077   62139 cri.go:89] found id: ""
	I0416 01:02:21.748106   62139 logs.go:276] 0 containers: []
	W0416 01:02:21.748117   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:21.748123   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:21.748170   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:21.785441   62139 cri.go:89] found id: ""
	I0416 01:02:21.785467   62139 logs.go:276] 0 containers: []
	W0416 01:02:21.785477   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:21.785488   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:21.785510   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:21.824702   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:21.824735   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:21.882780   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:21.882810   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:21.897211   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:21.897236   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:21.971882   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:21.971903   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:21.971915   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:24.550749   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:24.564951   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:24.565024   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:24.605025   62139 cri.go:89] found id: ""
	I0416 01:02:24.605055   62139 logs.go:276] 0 containers: []
	W0416 01:02:24.605063   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:24.605068   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:24.605142   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:24.640727   62139 cri.go:89] found id: ""
	I0416 01:02:24.640757   62139 logs.go:276] 0 containers: []
	W0416 01:02:24.640764   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:24.640769   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:24.640822   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:24.678031   62139 cri.go:89] found id: ""
	I0416 01:02:24.678060   62139 logs.go:276] 0 containers: []
	W0416 01:02:24.678068   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:24.678074   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:24.678125   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:24.714854   62139 cri.go:89] found id: ""
	I0416 01:02:24.714896   62139 logs.go:276] 0 containers: []
	W0416 01:02:24.714907   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:24.714914   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:24.714981   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:24.752129   62139 cri.go:89] found id: ""
	I0416 01:02:24.752158   62139 logs.go:276] 0 containers: []
	W0416 01:02:24.752168   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:24.752177   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:24.752243   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:24.788507   62139 cri.go:89] found id: ""
	I0416 01:02:24.788541   62139 logs.go:276] 0 containers: []
	W0416 01:02:24.788551   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:24.788557   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:24.788617   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:24.828379   62139 cri.go:89] found id: ""
	I0416 01:02:24.828409   62139 logs.go:276] 0 containers: []
	W0416 01:02:24.828419   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:24.828427   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:24.828486   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:24.865676   62139 cri.go:89] found id: ""
	I0416 01:02:24.865707   62139 logs.go:276] 0 containers: []
	W0416 01:02:24.865717   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:24.865725   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:24.865736   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:24.941057   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:24.941079   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:24.941091   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:25.025937   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:25.025979   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:25.065828   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:25.065871   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:25.128004   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:25.128039   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:27.643201   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:27.658601   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:27.658660   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:27.700627   62139 cri.go:89] found id: ""
	I0416 01:02:27.700650   62139 logs.go:276] 0 containers: []
	W0416 01:02:27.700657   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:27.700662   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:27.700718   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:27.734929   62139 cri.go:89] found id: ""
	I0416 01:02:27.734957   62139 logs.go:276] 0 containers: []
	W0416 01:02:27.734966   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:27.734975   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:27.735046   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:27.772412   62139 cri.go:89] found id: ""
	I0416 01:02:27.772440   62139 logs.go:276] 0 containers: []
	W0416 01:02:27.772448   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:27.772454   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:27.772514   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:27.809436   62139 cri.go:89] found id: ""
	I0416 01:02:27.809459   62139 logs.go:276] 0 containers: []
	W0416 01:02:27.809466   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:27.809471   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:27.809518   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:27.845717   62139 cri.go:89] found id: ""
	I0416 01:02:27.845746   62139 logs.go:276] 0 containers: []
	W0416 01:02:27.845756   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:27.845764   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:27.845825   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:27.887224   62139 cri.go:89] found id: ""
	I0416 01:02:27.887250   62139 logs.go:276] 0 containers: []
	W0416 01:02:27.887260   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:27.887267   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:27.887334   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:27.920945   62139 cri.go:89] found id: ""
	I0416 01:02:27.920974   62139 logs.go:276] 0 containers: []
	W0416 01:02:27.920984   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:27.920992   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:27.921066   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:27.960933   62139 cri.go:89] found id: ""
	I0416 01:02:27.960959   62139 logs.go:276] 0 containers: []
	W0416 01:02:27.960966   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:27.960974   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:27.960985   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:28.013003   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:28.013033   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:28.026599   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:28.026626   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:28.117200   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:28.117226   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:28.117240   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:28.198003   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:28.198036   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:30.741379   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:30.757102   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:30.757199   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:30.798038   62139 cri.go:89] found id: ""
	I0416 01:02:30.798068   62139 logs.go:276] 0 containers: []
	W0416 01:02:30.798075   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:30.798080   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:30.798137   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:30.844840   62139 cri.go:89] found id: ""
	I0416 01:02:30.844862   62139 logs.go:276] 0 containers: []
	W0416 01:02:30.844871   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:30.844877   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:30.844944   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:30.883816   62139 cri.go:89] found id: ""
	I0416 01:02:30.883841   62139 logs.go:276] 0 containers: []
	W0416 01:02:30.883849   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:30.883855   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:30.883903   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:30.919353   62139 cri.go:89] found id: ""
	I0416 01:02:30.919380   62139 logs.go:276] 0 containers: []
	W0416 01:02:30.919389   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:30.919396   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:30.919457   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:30.957036   62139 cri.go:89] found id: ""
	I0416 01:02:30.957061   62139 logs.go:276] 0 containers: []
	W0416 01:02:30.957069   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:30.957084   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:30.957143   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:30.993179   62139 cri.go:89] found id: ""
	I0416 01:02:30.993211   62139 logs.go:276] 0 containers: []
	W0416 01:02:30.993220   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:30.993228   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:30.993315   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:31.032634   62139 cri.go:89] found id: ""
	I0416 01:02:31.032661   62139 logs.go:276] 0 containers: []
	W0416 01:02:31.032670   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:31.032684   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:31.032753   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:31.069345   62139 cri.go:89] found id: ""
	I0416 01:02:31.069373   62139 logs.go:276] 0 containers: []
	W0416 01:02:31.069382   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:31.069392   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:31.069408   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:31.123989   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:31.124017   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:31.140998   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:31.141032   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:31.217496   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:31.218063   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:31.218098   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:31.296811   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:31.296858   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:33.842516   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:33.872440   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:33.872518   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:33.909287   62139 cri.go:89] found id: ""
	I0416 01:02:33.909314   62139 logs.go:276] 0 containers: []
	W0416 01:02:33.909324   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:33.909329   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:33.909388   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:33.947531   62139 cri.go:89] found id: ""
	I0416 01:02:33.947566   62139 logs.go:276] 0 containers: []
	W0416 01:02:33.947576   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:33.947584   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:33.947642   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:33.990084   62139 cri.go:89] found id: ""
	I0416 01:02:33.990118   62139 logs.go:276] 0 containers: []
	W0416 01:02:33.990129   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:33.990136   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:33.990200   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:34.024121   62139 cri.go:89] found id: ""
	I0416 01:02:34.024151   62139 logs.go:276] 0 containers: []
	W0416 01:02:34.024159   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:34.024165   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:34.024218   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:34.061075   62139 cri.go:89] found id: ""
	I0416 01:02:34.061104   62139 logs.go:276] 0 containers: []
	W0416 01:02:34.061111   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:34.061116   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:34.061179   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:34.097887   62139 cri.go:89] found id: ""
	I0416 01:02:34.097928   62139 logs.go:276] 0 containers: []
	W0416 01:02:34.097938   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:34.097946   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:34.098007   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:34.135541   62139 cri.go:89] found id: ""
	I0416 01:02:34.135567   62139 logs.go:276] 0 containers: []
	W0416 01:02:34.135577   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:34.135585   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:34.135637   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:34.170884   62139 cri.go:89] found id: ""
	I0416 01:02:34.170910   62139 logs.go:276] 0 containers: []
	W0416 01:02:34.170920   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:34.170931   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:34.170946   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:34.223465   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:34.223494   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:34.238898   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:34.238929   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:34.316916   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:34.316946   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:34.316962   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:34.401564   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:34.401600   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:36.945789   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:36.959707   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:36.959774   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:36.994463   62139 cri.go:89] found id: ""
	I0416 01:02:36.994497   62139 logs.go:276] 0 containers: []
	W0416 01:02:36.994508   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:36.994515   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:36.994579   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:37.028847   62139 cri.go:89] found id: ""
	I0416 01:02:37.028877   62139 logs.go:276] 0 containers: []
	W0416 01:02:37.028887   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:37.028893   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:37.028954   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:37.061841   62139 cri.go:89] found id: ""
	I0416 01:02:37.061872   62139 logs.go:276] 0 containers: []
	W0416 01:02:37.061882   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:37.061889   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:37.061954   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:37.098460   62139 cri.go:89] found id: ""
	I0416 01:02:37.098485   62139 logs.go:276] 0 containers: []
	W0416 01:02:37.098495   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:37.098502   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:37.098569   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:37.133016   62139 cri.go:89] found id: ""
	I0416 01:02:37.133044   62139 logs.go:276] 0 containers: []
	W0416 01:02:37.133053   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:37.133059   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:37.133122   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:37.170252   62139 cri.go:89] found id: ""
	I0416 01:02:37.170276   62139 logs.go:276] 0 containers: []
	W0416 01:02:37.170286   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:37.170293   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:37.170354   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:37.206114   62139 cri.go:89] found id: ""
	I0416 01:02:37.206141   62139 logs.go:276] 0 containers: []
	W0416 01:02:37.206148   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:37.206153   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:37.206208   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:37.241353   62139 cri.go:89] found id: ""
	I0416 01:02:37.241383   62139 logs.go:276] 0 containers: []
	W0416 01:02:37.241395   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:37.241405   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:37.241429   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:37.293452   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:37.293483   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:37.309885   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:37.309926   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:37.385455   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:37.385481   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:37.385496   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:37.463064   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:37.463101   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:40.008717   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:40.022249   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:40.022327   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:40.064444   62139 cri.go:89] found id: ""
	I0416 01:02:40.064479   62139 logs.go:276] 0 containers: []
	W0416 01:02:40.064490   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:40.064497   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:40.064545   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:40.100326   62139 cri.go:89] found id: ""
	I0416 01:02:40.100353   62139 logs.go:276] 0 containers: []
	W0416 01:02:40.100361   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:40.100366   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:40.100413   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:40.138818   62139 cri.go:89] found id: ""
	I0416 01:02:40.138857   62139 logs.go:276] 0 containers: []
	W0416 01:02:40.138869   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:40.138878   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:40.138928   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:40.184203   62139 cri.go:89] found id: ""
	I0416 01:02:40.184234   62139 logs.go:276] 0 containers: []
	W0416 01:02:40.184244   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:40.184252   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:40.184311   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:40.221968   62139 cri.go:89] found id: ""
	I0416 01:02:40.221991   62139 logs.go:276] 0 containers: []
	W0416 01:02:40.221998   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:40.222007   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:40.222088   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:40.265621   62139 cri.go:89] found id: ""
	I0416 01:02:40.265643   62139 logs.go:276] 0 containers: []
	W0416 01:02:40.265650   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:40.265657   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:40.265723   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:40.314121   62139 cri.go:89] found id: ""
	I0416 01:02:40.314152   62139 logs.go:276] 0 containers: []
	W0416 01:02:40.314163   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:40.314170   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:40.314229   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:40.359788   62139 cri.go:89] found id: ""
	I0416 01:02:40.359825   62139 logs.go:276] 0 containers: []
	W0416 01:02:40.359836   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:40.359849   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:40.359863   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:40.431678   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:40.431718   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:40.449847   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:40.449877   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:40.524271   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:40.524297   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:40.524309   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:40.601398   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:40.601433   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:43.145431   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:43.160269   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:43.160338   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:43.196603   62139 cri.go:89] found id: ""
	I0416 01:02:43.196637   62139 logs.go:276] 0 containers: []
	W0416 01:02:43.196648   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:43.196655   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:43.196716   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:43.235863   62139 cri.go:89] found id: ""
	I0416 01:02:43.235893   62139 logs.go:276] 0 containers: []
	W0416 01:02:43.235905   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:43.235911   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:43.235971   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:43.271408   62139 cri.go:89] found id: ""
	I0416 01:02:43.271437   62139 logs.go:276] 0 containers: []
	W0416 01:02:43.271444   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:43.271450   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:43.271512   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:43.310931   62139 cri.go:89] found id: ""
	I0416 01:02:43.310958   62139 logs.go:276] 0 containers: []
	W0416 01:02:43.310965   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:43.310971   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:43.311032   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:43.347472   62139 cri.go:89] found id: ""
	I0416 01:02:43.347502   62139 logs.go:276] 0 containers: []
	W0416 01:02:43.347512   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:43.347520   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:43.347581   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:43.387326   62139 cri.go:89] found id: ""
	I0416 01:02:43.387361   62139 logs.go:276] 0 containers: []
	W0416 01:02:43.387372   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:43.387429   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:43.387506   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:43.425099   62139 cri.go:89] found id: ""
	I0416 01:02:43.425122   62139 logs.go:276] 0 containers: []
	W0416 01:02:43.425130   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:43.425141   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:43.425208   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:43.461364   62139 cri.go:89] found id: ""
	I0416 01:02:43.461397   62139 logs.go:276] 0 containers: []
	W0416 01:02:43.461408   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:43.461419   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:43.461434   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:43.514520   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:43.514556   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:43.528740   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:43.528777   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:43.599010   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:43.599035   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:43.599051   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:43.682913   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:43.682959   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:46.231398   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:46.260247   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:46.260338   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:46.304498   62139 cri.go:89] found id: ""
	I0416 01:02:46.304521   62139 logs.go:276] 0 containers: []
	W0416 01:02:46.304528   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:46.304534   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:46.304600   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:46.364055   62139 cri.go:89] found id: ""
	I0416 01:02:46.364081   62139 logs.go:276] 0 containers: []
	W0416 01:02:46.364090   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:46.364098   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:46.364167   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:46.412395   62139 cri.go:89] found id: ""
	I0416 01:02:46.412437   62139 logs.go:276] 0 containers: []
	W0416 01:02:46.412475   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:46.412510   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:46.412584   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:46.453669   62139 cri.go:89] found id: ""
	I0416 01:02:46.453698   62139 logs.go:276] 0 containers: []
	W0416 01:02:46.453709   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:46.453716   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:46.453766   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:46.490667   62139 cri.go:89] found id: ""
	I0416 01:02:46.490699   62139 logs.go:276] 0 containers: []
	W0416 01:02:46.490709   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:46.490715   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:46.490766   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:46.529405   62139 cri.go:89] found id: ""
	I0416 01:02:46.529443   62139 logs.go:276] 0 containers: []
	W0416 01:02:46.529460   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:46.529467   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:46.529527   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:46.565359   62139 cri.go:89] found id: ""
	I0416 01:02:46.565384   62139 logs.go:276] 0 containers: []
	W0416 01:02:46.565391   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:46.565396   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:46.565451   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:46.609381   62139 cri.go:89] found id: ""
	I0416 01:02:46.609406   62139 logs.go:276] 0 containers: []
	W0416 01:02:46.609413   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:46.609421   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:46.609432   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:46.663080   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:46.663112   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:46.677303   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:46.677338   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:46.750134   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:46.750163   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:46.750175   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:46.829395   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:46.829434   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:49.374356   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:49.390674   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:49.390753   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:49.427968   62139 cri.go:89] found id: ""
	I0416 01:02:49.427993   62139 logs.go:276] 0 containers: []
	W0416 01:02:49.428000   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:49.428005   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:49.428058   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:49.461821   62139 cri.go:89] found id: ""
	I0416 01:02:49.461850   62139 logs.go:276] 0 containers: []
	W0416 01:02:49.461857   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:49.461863   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:49.461918   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:49.496305   62139 cri.go:89] found id: ""
	I0416 01:02:49.496356   62139 logs.go:276] 0 containers: []
	W0416 01:02:49.496364   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:49.496369   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:49.496429   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:49.536096   62139 cri.go:89] found id: ""
	I0416 01:02:49.536122   62139 logs.go:276] 0 containers: []
	W0416 01:02:49.536129   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:49.536134   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:49.536194   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:49.572078   62139 cri.go:89] found id: ""
	I0416 01:02:49.572106   62139 logs.go:276] 0 containers: []
	W0416 01:02:49.572115   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:49.572122   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:49.572181   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:49.607803   62139 cri.go:89] found id: ""
	I0416 01:02:49.607835   62139 logs.go:276] 0 containers: []
	W0416 01:02:49.607847   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:49.607861   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:49.607915   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:49.651245   62139 cri.go:89] found id: ""
	I0416 01:02:49.651272   62139 logs.go:276] 0 containers: []
	W0416 01:02:49.651280   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:49.651285   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:49.651332   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:49.693587   62139 cri.go:89] found id: ""
	I0416 01:02:49.693612   62139 logs.go:276] 0 containers: []
	W0416 01:02:49.693622   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:49.693632   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:49.693646   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:49.750003   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:49.750032   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:49.764447   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:49.764472   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:49.844739   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:49.844764   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:49.844780   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:49.924260   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:49.924294   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:52.467399   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:52.481656   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:52.481729   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:52.518506   62139 cri.go:89] found id: ""
	I0416 01:02:52.518531   62139 logs.go:276] 0 containers: []
	W0416 01:02:52.518537   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:52.518544   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:52.518599   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:52.554799   62139 cri.go:89] found id: ""
	I0416 01:02:52.554820   62139 logs.go:276] 0 containers: []
	W0416 01:02:52.554827   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:52.554832   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:52.554888   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:52.597236   62139 cri.go:89] found id: ""
	I0416 01:02:52.597265   62139 logs.go:276] 0 containers: []
	W0416 01:02:52.597272   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:52.597278   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:52.597335   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:52.635544   62139 cri.go:89] found id: ""
	I0416 01:02:52.635567   62139 logs.go:276] 0 containers: []
	W0416 01:02:52.635578   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:52.635585   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:52.635639   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:52.672715   62139 cri.go:89] found id: ""
	I0416 01:02:52.672739   62139 logs.go:276] 0 containers: []
	W0416 01:02:52.672746   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:52.672751   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:52.672808   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:52.711600   62139 cri.go:89] found id: ""
	I0416 01:02:52.711631   62139 logs.go:276] 0 containers: []
	W0416 01:02:52.711640   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:52.711648   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:52.711718   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:52.750372   62139 cri.go:89] found id: ""
	I0416 01:02:52.750405   62139 logs.go:276] 0 containers: []
	W0416 01:02:52.750416   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:52.750423   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:52.750486   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:52.786651   62139 cri.go:89] found id: ""
	I0416 01:02:52.786678   62139 logs.go:276] 0 containers: []
	W0416 01:02:52.786688   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:52.786698   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:52.786712   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:52.840262   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:52.840296   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:52.854734   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:52.854762   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:52.931182   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:52.931211   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:52.931226   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:53.007023   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:53.007061   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:55.548305   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:55.562483   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:55.562562   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:55.599480   62139 cri.go:89] found id: ""
	I0416 01:02:55.599504   62139 logs.go:276] 0 containers: []
	W0416 01:02:55.599511   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:55.599517   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:55.599573   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:55.636832   62139 cri.go:89] found id: ""
	I0416 01:02:55.636862   62139 logs.go:276] 0 containers: []
	W0416 01:02:55.636873   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:55.636879   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:55.636940   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:55.676211   62139 cri.go:89] found id: ""
	I0416 01:02:55.676240   62139 logs.go:276] 0 containers: []
	W0416 01:02:55.676250   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:55.676256   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:55.676318   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:55.713498   62139 cri.go:89] found id: ""
	I0416 01:02:55.713527   62139 logs.go:276] 0 containers: []
	W0416 01:02:55.713537   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:55.713544   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:55.713604   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:55.754239   62139 cri.go:89] found id: ""
	I0416 01:02:55.754276   62139 logs.go:276] 0 containers: []
	W0416 01:02:55.754284   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:55.754301   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:55.754355   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:55.792073   62139 cri.go:89] found id: ""
	I0416 01:02:55.792106   62139 logs.go:276] 0 containers: []
	W0416 01:02:55.792117   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:55.792125   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:55.792191   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:55.829635   62139 cri.go:89] found id: ""
	I0416 01:02:55.829665   62139 logs.go:276] 0 containers: []
	W0416 01:02:55.829676   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:55.829683   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:55.829742   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:55.876417   62139 cri.go:89] found id: ""
	I0416 01:02:55.876443   62139 logs.go:276] 0 containers: []
	W0416 01:02:55.876450   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:55.876458   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:55.876471   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:55.926670   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:55.926707   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:55.941660   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:55.941696   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:56.018776   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:56.018806   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:56.018820   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:56.097335   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:56.097378   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:58.642188   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:58.655537   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:58.655605   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:58.692091   62139 cri.go:89] found id: ""
	I0416 01:02:58.692116   62139 logs.go:276] 0 containers: []
	W0416 01:02:58.692124   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:58.692129   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:58.692191   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:58.729434   62139 cri.go:89] found id: ""
	I0416 01:02:58.729461   62139 logs.go:276] 0 containers: []
	W0416 01:02:58.729472   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:58.729491   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:58.729568   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:58.765879   62139 cri.go:89] found id: ""
	I0416 01:02:58.765907   62139 logs.go:276] 0 containers: []
	W0416 01:02:58.765916   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:58.765924   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:58.765987   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:58.802285   62139 cri.go:89] found id: ""
	I0416 01:02:58.802323   62139 logs.go:276] 0 containers: []
	W0416 01:02:58.802334   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:58.802342   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:58.802399   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:58.841357   62139 cri.go:89] found id: ""
	I0416 01:02:58.841385   62139 logs.go:276] 0 containers: []
	W0416 01:02:58.841396   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:58.841403   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:58.841464   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:58.876982   62139 cri.go:89] found id: ""
	I0416 01:02:58.877022   62139 logs.go:276] 0 containers: []
	W0416 01:02:58.877032   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:58.877040   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:58.877108   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:58.915563   62139 cri.go:89] found id: ""
	I0416 01:02:58.915596   62139 logs.go:276] 0 containers: []
	W0416 01:02:58.915607   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:58.915614   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:58.915683   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:58.951268   62139 cri.go:89] found id: ""
	I0416 01:02:58.951303   62139 logs.go:276] 0 containers: []
	W0416 01:02:58.951313   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:58.951324   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:58.951341   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:59.004673   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:59.004710   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:59.019393   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:59.019423   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:59.091587   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:59.091612   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:59.091632   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:59.169623   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:59.169655   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:01.710597   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:01.724394   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:01.724463   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:01.761577   62139 cri.go:89] found id: ""
	I0416 01:03:01.761605   62139 logs.go:276] 0 containers: []
	W0416 01:03:01.761616   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:01.761624   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:01.761684   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:01.797467   62139 cri.go:89] found id: ""
	I0416 01:03:01.797498   62139 logs.go:276] 0 containers: []
	W0416 01:03:01.797508   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:01.797515   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:01.797582   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:01.839910   62139 cri.go:89] found id: ""
	I0416 01:03:01.839940   62139 logs.go:276] 0 containers: []
	W0416 01:03:01.839950   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:01.839958   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:01.840019   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:01.879572   62139 cri.go:89] found id: ""
	I0416 01:03:01.879599   62139 logs.go:276] 0 containers: []
	W0416 01:03:01.879611   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:01.879617   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:01.879664   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:01.920190   62139 cri.go:89] found id: ""
	I0416 01:03:01.920222   62139 logs.go:276] 0 containers: []
	W0416 01:03:01.920234   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:01.920242   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:01.920300   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:01.957389   62139 cri.go:89] found id: ""
	I0416 01:03:01.957418   62139 logs.go:276] 0 containers: []
	W0416 01:03:01.957428   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:01.957436   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:01.957507   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:01.998730   62139 cri.go:89] found id: ""
	I0416 01:03:01.998754   62139 logs.go:276] 0 containers: []
	W0416 01:03:01.998762   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:01.998767   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:01.998812   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:02.036062   62139 cri.go:89] found id: ""
	I0416 01:03:02.036094   62139 logs.go:276] 0 containers: []
	W0416 01:03:02.036103   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:02.036112   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:02.036125   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:02.089109   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:02.089149   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:02.103312   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:02.103342   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:02.174034   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:02.174056   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:02.174069   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:02.249526   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:02.249555   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:04.795314   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:04.808294   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:04.808367   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:04.848795   62139 cri.go:89] found id: ""
	I0416 01:03:04.848825   62139 logs.go:276] 0 containers: []
	W0416 01:03:04.848849   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:04.848857   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:04.848928   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:04.886442   62139 cri.go:89] found id: ""
	I0416 01:03:04.886477   62139 logs.go:276] 0 containers: []
	W0416 01:03:04.886488   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:04.886502   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:04.886572   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:04.929183   62139 cri.go:89] found id: ""
	I0416 01:03:04.929215   62139 logs.go:276] 0 containers: []
	W0416 01:03:04.929226   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:04.929234   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:04.929297   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:04.965134   62139 cri.go:89] found id: ""
	I0416 01:03:04.965172   62139 logs.go:276] 0 containers: []
	W0416 01:03:04.965184   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:04.965191   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:04.965247   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:05.001346   62139 cri.go:89] found id: ""
	I0416 01:03:05.001373   62139 logs.go:276] 0 containers: []
	W0416 01:03:05.001381   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:05.001387   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:05.001434   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:05.039181   62139 cri.go:89] found id: ""
	I0416 01:03:05.039210   62139 logs.go:276] 0 containers: []
	W0416 01:03:05.039219   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:05.039224   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:05.039289   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:05.073451   62139 cri.go:89] found id: ""
	I0416 01:03:05.073479   62139 logs.go:276] 0 containers: []
	W0416 01:03:05.073487   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:05.073494   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:05.073555   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:05.108466   62139 cri.go:89] found id: ""
	I0416 01:03:05.108495   62139 logs.go:276] 0 containers: []
	W0416 01:03:05.108510   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:05.108520   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:05.108537   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:05.162725   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:05.162765   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:05.178152   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:05.178183   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:05.255122   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:05.255147   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:05.255161   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:05.331274   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:05.331309   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:07.882980   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:07.896311   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:07.896372   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:07.934632   62139 cri.go:89] found id: ""
	I0416 01:03:07.934661   62139 logs.go:276] 0 containers: []
	W0416 01:03:07.934671   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:07.934677   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:07.934745   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:07.971463   62139 cri.go:89] found id: ""
	I0416 01:03:07.971495   62139 logs.go:276] 0 containers: []
	W0416 01:03:07.971511   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:07.971518   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:07.971581   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:08.006808   62139 cri.go:89] found id: ""
	I0416 01:03:08.006839   62139 logs.go:276] 0 containers: []
	W0416 01:03:08.006847   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:08.006852   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:08.006912   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:08.043051   62139 cri.go:89] found id: ""
	I0416 01:03:08.043082   62139 logs.go:276] 0 containers: []
	W0416 01:03:08.043089   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:08.043095   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:08.043155   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:08.078602   62139 cri.go:89] found id: ""
	I0416 01:03:08.078638   62139 logs.go:276] 0 containers: []
	W0416 01:03:08.078647   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:08.078655   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:08.078724   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:08.115264   62139 cri.go:89] found id: ""
	I0416 01:03:08.115293   62139 logs.go:276] 0 containers: []
	W0416 01:03:08.115303   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:08.115311   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:08.115378   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:08.152782   62139 cri.go:89] found id: ""
	I0416 01:03:08.152814   62139 logs.go:276] 0 containers: []
	W0416 01:03:08.152821   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:08.152826   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:08.152875   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:08.193484   62139 cri.go:89] found id: ""
	I0416 01:03:08.193506   62139 logs.go:276] 0 containers: []
	W0416 01:03:08.193513   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:08.193522   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:08.193532   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:08.248796   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:08.248831   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:08.266054   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:08.266083   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:08.343470   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:08.343501   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:08.343515   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:08.430335   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:08.430383   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:10.972540   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:10.986911   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:10.986984   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:11.024905   62139 cri.go:89] found id: ""
	I0416 01:03:11.024939   62139 logs.go:276] 0 containers: []
	W0416 01:03:11.024951   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:11.024958   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:11.025011   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:11.058629   62139 cri.go:89] found id: ""
	I0416 01:03:11.058654   62139 logs.go:276] 0 containers: []
	W0416 01:03:11.058662   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:11.058667   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:11.058721   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:11.093277   62139 cri.go:89] found id: ""
	I0416 01:03:11.093308   62139 logs.go:276] 0 containers: []
	W0416 01:03:11.093317   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:11.093325   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:11.093386   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:11.131883   62139 cri.go:89] found id: ""
	I0416 01:03:11.131912   62139 logs.go:276] 0 containers: []
	W0416 01:03:11.131924   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:11.131934   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:11.132004   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:11.175142   62139 cri.go:89] found id: ""
	I0416 01:03:11.175169   62139 logs.go:276] 0 containers: []
	W0416 01:03:11.175179   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:11.175186   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:11.175236   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:11.209985   62139 cri.go:89] found id: ""
	I0416 01:03:11.210020   62139 logs.go:276] 0 containers: []
	W0416 01:03:11.210031   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:11.210039   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:11.210110   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:11.246086   62139 cri.go:89] found id: ""
	I0416 01:03:11.246119   62139 logs.go:276] 0 containers: []
	W0416 01:03:11.246129   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:11.246137   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:11.246199   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:11.286979   62139 cri.go:89] found id: ""
	I0416 01:03:11.287007   62139 logs.go:276] 0 containers: []
	W0416 01:03:11.287019   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:11.287037   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:11.287051   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:11.364522   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:11.364557   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:11.410343   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:11.410375   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:11.459671   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:11.459703   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:11.476163   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:11.476193   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:11.549544   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:14.050433   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:14.065375   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:14.065431   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:14.105548   62139 cri.go:89] found id: ""
	I0416 01:03:14.105571   62139 logs.go:276] 0 containers: []
	W0416 01:03:14.105579   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:14.105583   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:14.105644   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:14.146891   62139 cri.go:89] found id: ""
	I0416 01:03:14.146915   62139 logs.go:276] 0 containers: []
	W0416 01:03:14.146922   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:14.146927   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:14.146972   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:14.183905   62139 cri.go:89] found id: ""
	I0416 01:03:14.183937   62139 logs.go:276] 0 containers: []
	W0416 01:03:14.183948   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:14.183954   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:14.184002   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:14.219878   62139 cri.go:89] found id: ""
	I0416 01:03:14.219905   62139 logs.go:276] 0 containers: []
	W0416 01:03:14.219915   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:14.219922   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:14.219978   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:14.256284   62139 cri.go:89] found id: ""
	I0416 01:03:14.256310   62139 logs.go:276] 0 containers: []
	W0416 01:03:14.256317   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:14.256323   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:14.256381   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:14.295932   62139 cri.go:89] found id: ""
	I0416 01:03:14.295958   62139 logs.go:276] 0 containers: []
	W0416 01:03:14.295966   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:14.295971   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:14.296025   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:14.333202   62139 cri.go:89] found id: ""
	I0416 01:03:14.333226   62139 logs.go:276] 0 containers: []
	W0416 01:03:14.333235   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:14.333242   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:14.333302   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:14.370034   62139 cri.go:89] found id: ""
	I0416 01:03:14.370059   62139 logs.go:276] 0 containers: []
	W0416 01:03:14.370066   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:14.370074   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:14.370092   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:14.424626   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:14.424669   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:14.441842   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:14.441872   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:14.515899   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:14.515926   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:14.515944   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:14.599956   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:14.599991   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:17.157610   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:17.171737   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:17.171800   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:17.214327   62139 cri.go:89] found id: ""
	I0416 01:03:17.214354   62139 logs.go:276] 0 containers: []
	W0416 01:03:17.214364   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:17.214371   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:17.214433   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:17.255896   62139 cri.go:89] found id: ""
	I0416 01:03:17.255924   62139 logs.go:276] 0 containers: []
	W0416 01:03:17.255939   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:17.255946   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:17.256005   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:17.298470   62139 cri.go:89] found id: ""
	I0416 01:03:17.298498   62139 logs.go:276] 0 containers: []
	W0416 01:03:17.298512   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:17.298520   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:17.298580   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:17.338810   62139 cri.go:89] found id: ""
	I0416 01:03:17.338834   62139 logs.go:276] 0 containers: []
	W0416 01:03:17.338842   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:17.338847   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:17.338899   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:17.375980   62139 cri.go:89] found id: ""
	I0416 01:03:17.376012   62139 logs.go:276] 0 containers: []
	W0416 01:03:17.376019   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:17.376024   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:17.376076   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:17.411374   62139 cri.go:89] found id: ""
	I0416 01:03:17.411400   62139 logs.go:276] 0 containers: []
	W0416 01:03:17.411408   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:17.411413   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:17.411463   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:17.452916   62139 cri.go:89] found id: ""
	I0416 01:03:17.452951   62139 logs.go:276] 0 containers: []
	W0416 01:03:17.452962   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:17.452969   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:17.453037   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:17.492459   62139 cri.go:89] found id: ""
	I0416 01:03:17.492489   62139 logs.go:276] 0 containers: []
	W0416 01:03:17.492500   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:17.492512   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:17.492527   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:17.541780   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:17.541814   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:17.558831   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:17.558867   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:17.635332   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:17.635351   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:17.635362   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:17.715778   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:17.715809   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:20.260621   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:20.274721   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:20.274791   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:20.311965   62139 cri.go:89] found id: ""
	I0416 01:03:20.311991   62139 logs.go:276] 0 containers: []
	W0416 01:03:20.312002   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:20.312009   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:20.312069   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:20.350316   62139 cri.go:89] found id: ""
	I0416 01:03:20.350346   62139 logs.go:276] 0 containers: []
	W0416 01:03:20.350356   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:20.350363   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:20.350414   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:20.404666   62139 cri.go:89] found id: ""
	I0416 01:03:20.404692   62139 logs.go:276] 0 containers: []
	W0416 01:03:20.404700   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:20.404705   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:20.404753   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:20.441223   62139 cri.go:89] found id: ""
	I0416 01:03:20.441254   62139 logs.go:276] 0 containers: []
	W0416 01:03:20.441267   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:20.441275   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:20.441340   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:20.480535   62139 cri.go:89] found id: ""
	I0416 01:03:20.480596   62139 logs.go:276] 0 containers: []
	W0416 01:03:20.480606   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:20.480613   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:20.480680   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:20.517520   62139 cri.go:89] found id: ""
	I0416 01:03:20.517543   62139 logs.go:276] 0 containers: []
	W0416 01:03:20.517550   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:20.517556   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:20.517614   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:20.556067   62139 cri.go:89] found id: ""
	I0416 01:03:20.556097   62139 logs.go:276] 0 containers: []
	W0416 01:03:20.556107   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:20.556114   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:20.556177   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:20.594901   62139 cri.go:89] found id: ""
	I0416 01:03:20.594932   62139 logs.go:276] 0 containers: []
	W0416 01:03:20.594939   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:20.594947   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:20.594958   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:20.673759   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:20.673795   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:20.721407   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:20.721443   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:20.772957   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:20.772989   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:20.787902   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:20.787932   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:20.863445   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:23.363637   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:23.377916   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:23.377991   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:23.415642   62139 cri.go:89] found id: ""
	I0416 01:03:23.415671   62139 logs.go:276] 0 containers: []
	W0416 01:03:23.415679   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:23.415685   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:23.415732   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:23.452788   62139 cri.go:89] found id: ""
	I0416 01:03:23.452812   62139 logs.go:276] 0 containers: []
	W0416 01:03:23.452819   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:23.452829   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:23.452878   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:23.488758   62139 cri.go:89] found id: ""
	I0416 01:03:23.488785   62139 logs.go:276] 0 containers: []
	W0416 01:03:23.488794   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:23.488801   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:23.488862   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:23.526542   62139 cri.go:89] found id: ""
	I0416 01:03:23.526574   62139 logs.go:276] 0 containers: []
	W0416 01:03:23.526584   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:23.526592   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:23.526661   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:23.562481   62139 cri.go:89] found id: ""
	I0416 01:03:23.562505   62139 logs.go:276] 0 containers: []
	W0416 01:03:23.562512   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:23.562518   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:23.562579   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:23.599119   62139 cri.go:89] found id: ""
	I0416 01:03:23.599145   62139 logs.go:276] 0 containers: []
	W0416 01:03:23.599155   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:23.599162   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:23.599241   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:23.642445   62139 cri.go:89] found id: ""
	I0416 01:03:23.642474   62139 logs.go:276] 0 containers: []
	W0416 01:03:23.642485   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:23.642492   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:23.642557   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:23.678091   62139 cri.go:89] found id: ""
	I0416 01:03:23.678113   62139 logs.go:276] 0 containers: []
	W0416 01:03:23.678121   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:23.678129   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:23.678140   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:23.731668   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:23.731703   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:23.746413   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:23.746444   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:23.821885   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:23.821908   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:23.821923   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:23.901836   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:23.901872   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:26.444935   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:26.459240   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:26.459308   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:26.499208   62139 cri.go:89] found id: ""
	I0416 01:03:26.499237   62139 logs.go:276] 0 containers: []
	W0416 01:03:26.499249   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:26.499256   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:26.499318   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:26.536220   62139 cri.go:89] found id: ""
	I0416 01:03:26.536258   62139 logs.go:276] 0 containers: []
	W0416 01:03:26.536270   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:26.536277   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:26.536342   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:26.576217   62139 cri.go:89] found id: ""
	I0416 01:03:26.576241   62139 logs.go:276] 0 containers: []
	W0416 01:03:26.576249   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:26.576254   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:26.576314   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:26.612343   62139 cri.go:89] found id: ""
	I0416 01:03:26.612369   62139 logs.go:276] 0 containers: []
	W0416 01:03:26.612378   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:26.612385   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:26.612448   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:26.651323   62139 cri.go:89] found id: ""
	I0416 01:03:26.651353   62139 logs.go:276] 0 containers: []
	W0416 01:03:26.651365   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:26.651384   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:26.651453   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:26.688844   62139 cri.go:89] found id: ""
	I0416 01:03:26.688874   62139 logs.go:276] 0 containers: []
	W0416 01:03:26.688885   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:26.688891   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:26.688969   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:26.724362   62139 cri.go:89] found id: ""
	I0416 01:03:26.724387   62139 logs.go:276] 0 containers: []
	W0416 01:03:26.724395   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:26.724401   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:26.724455   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:26.767766   62139 cri.go:89] found id: ""
	I0416 01:03:26.767795   62139 logs.go:276] 0 containers: []
	W0416 01:03:26.767806   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:26.767816   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:26.767837   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:26.788269   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:26.788297   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:26.884802   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:26.884822   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:26.884834   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:26.964007   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:26.964044   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:27.003719   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:27.003745   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:29.563218   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:29.579014   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:29.579078   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:29.620739   62139 cri.go:89] found id: ""
	I0416 01:03:29.620769   62139 logs.go:276] 0 containers: []
	W0416 01:03:29.620780   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:29.620787   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:29.620850   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:29.658165   62139 cri.go:89] found id: ""
	I0416 01:03:29.658192   62139 logs.go:276] 0 containers: []
	W0416 01:03:29.658199   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:29.658205   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:29.658252   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:29.693893   62139 cri.go:89] found id: ""
	I0416 01:03:29.693921   62139 logs.go:276] 0 containers: []
	W0416 01:03:29.693929   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:29.693935   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:29.693985   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:29.737808   62139 cri.go:89] found id: ""
	I0416 01:03:29.737836   62139 logs.go:276] 0 containers: []
	W0416 01:03:29.737846   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:29.737851   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:29.737910   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:29.777382   62139 cri.go:89] found id: ""
	I0416 01:03:29.777408   62139 logs.go:276] 0 containers: []
	W0416 01:03:29.777416   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:29.777422   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:29.777473   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:29.815633   62139 cri.go:89] found id: ""
	I0416 01:03:29.815659   62139 logs.go:276] 0 containers: []
	W0416 01:03:29.815668   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:29.815682   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:29.815743   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:29.858790   62139 cri.go:89] found id: ""
	I0416 01:03:29.858820   62139 logs.go:276] 0 containers: []
	W0416 01:03:29.858831   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:29.858839   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:29.858899   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:29.897085   62139 cri.go:89] found id: ""
	I0416 01:03:29.897120   62139 logs.go:276] 0 containers: []
	W0416 01:03:29.897131   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:29.897142   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:29.897169   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:29.951231   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:29.951266   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:29.965539   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:29.965565   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:30.045138   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:30.045170   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:30.045186   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:30.120575   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:30.120606   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:32.662210   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:32.675833   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:32.675903   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:32.712104   62139 cri.go:89] found id: ""
	I0416 01:03:32.712129   62139 logs.go:276] 0 containers: []
	W0416 01:03:32.712136   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:32.712141   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:32.712198   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:32.749617   62139 cri.go:89] found id: ""
	I0416 01:03:32.749644   62139 logs.go:276] 0 containers: []
	W0416 01:03:32.749652   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:32.749658   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:32.749723   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:32.785069   62139 cri.go:89] found id: ""
	I0416 01:03:32.785100   62139 logs.go:276] 0 containers: []
	W0416 01:03:32.785110   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:32.785116   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:32.785191   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:32.825871   62139 cri.go:89] found id: ""
	I0416 01:03:32.825912   62139 logs.go:276] 0 containers: []
	W0416 01:03:32.825922   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:32.825928   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:32.826008   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:32.868294   62139 cri.go:89] found id: ""
	I0416 01:03:32.868321   62139 logs.go:276] 0 containers: []
	W0416 01:03:32.868328   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:32.868334   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:32.868401   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:32.907764   62139 cri.go:89] found id: ""
	I0416 01:03:32.907789   62139 logs.go:276] 0 containers: []
	W0416 01:03:32.907796   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:32.907802   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:32.907870   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:32.946112   62139 cri.go:89] found id: ""
	I0416 01:03:32.946137   62139 logs.go:276] 0 containers: []
	W0416 01:03:32.946144   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:32.946155   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:32.946215   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:32.985343   62139 cri.go:89] found id: ""
	I0416 01:03:32.985374   62139 logs.go:276] 0 containers: []
	W0416 01:03:32.985385   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:32.985395   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:32.985415   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:33.063117   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:33.063154   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:33.113739   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:33.113773   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:33.163466   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:33.163508   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:33.178368   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:33.178397   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:33.259509   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:35.760004   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:35.774161   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:35.774237   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:35.812551   62139 cri.go:89] found id: ""
	I0416 01:03:35.812580   62139 logs.go:276] 0 containers: []
	W0416 01:03:35.812589   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:35.812594   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:35.812642   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:35.853134   62139 cri.go:89] found id: ""
	I0416 01:03:35.853177   62139 logs.go:276] 0 containers: []
	W0416 01:03:35.853187   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:35.853195   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:35.853255   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:35.894210   62139 cri.go:89] found id: ""
	I0416 01:03:35.894246   62139 logs.go:276] 0 containers: []
	W0416 01:03:35.894254   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:35.894259   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:35.894330   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:35.928986   62139 cri.go:89] found id: ""
	I0416 01:03:35.929010   62139 logs.go:276] 0 containers: []
	W0416 01:03:35.929019   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:35.929027   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:35.929090   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:35.970688   62139 cri.go:89] found id: ""
	I0416 01:03:35.970712   62139 logs.go:276] 0 containers: []
	W0416 01:03:35.970719   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:35.970725   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:35.970783   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:36.005744   62139 cri.go:89] found id: ""
	I0416 01:03:36.005771   62139 logs.go:276] 0 containers: []
	W0416 01:03:36.005778   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:36.005783   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:36.005829   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:36.044932   62139 cri.go:89] found id: ""
	I0416 01:03:36.044966   62139 logs.go:276] 0 containers: []
	W0416 01:03:36.044977   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:36.044984   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:36.045051   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:36.080488   62139 cri.go:89] found id: ""
	I0416 01:03:36.080516   62139 logs.go:276] 0 containers: []
	W0416 01:03:36.080527   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:36.080538   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:36.080552   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:36.132956   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:36.133000   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:36.147070   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:36.147097   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:36.226640   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:36.226670   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:36.226684   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:36.307205   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:36.307249   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:38.849685   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:38.863817   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:38.863897   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:38.902418   62139 cri.go:89] found id: ""
	I0416 01:03:38.902445   62139 logs.go:276] 0 containers: []
	W0416 01:03:38.902455   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:38.902462   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:38.902533   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:38.937811   62139 cri.go:89] found id: ""
	I0416 01:03:38.937838   62139 logs.go:276] 0 containers: []
	W0416 01:03:38.937845   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:38.937850   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:38.937900   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:38.972380   62139 cri.go:89] found id: ""
	I0416 01:03:38.972403   62139 logs.go:276] 0 containers: []
	W0416 01:03:38.972411   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:38.972416   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:38.972466   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:39.007572   62139 cri.go:89] found id: ""
	I0416 01:03:39.007595   62139 logs.go:276] 0 containers: []
	W0416 01:03:39.007603   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:39.007608   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:39.007651   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:39.049355   62139 cri.go:89] found id: ""
	I0416 01:03:39.049382   62139 logs.go:276] 0 containers: []
	W0416 01:03:39.049391   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:39.049398   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:39.049459   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:39.084535   62139 cri.go:89] found id: ""
	I0416 01:03:39.084565   62139 logs.go:276] 0 containers: []
	W0416 01:03:39.084574   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:39.084581   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:39.084645   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:39.125027   62139 cri.go:89] found id: ""
	I0416 01:03:39.125055   62139 logs.go:276] 0 containers: []
	W0416 01:03:39.125073   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:39.125080   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:39.125136   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:39.164506   62139 cri.go:89] found id: ""
	I0416 01:03:39.164537   62139 logs.go:276] 0 containers: []
	W0416 01:03:39.164547   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:39.164557   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:39.164573   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:39.203447   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:39.203483   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:39.259087   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:39.259122   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:39.273611   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:39.273637   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:39.352372   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:39.352392   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:39.352407   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:41.938575   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:41.952937   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:41.953019   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:41.990771   62139 cri.go:89] found id: ""
	I0416 01:03:41.990802   62139 logs.go:276] 0 containers: []
	W0416 01:03:41.990811   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:41.990819   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:41.990881   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:42.027338   62139 cri.go:89] found id: ""
	I0416 01:03:42.027367   62139 logs.go:276] 0 containers: []
	W0416 01:03:42.027374   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:42.027379   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:42.027431   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:42.068348   62139 cri.go:89] found id: ""
	I0416 01:03:42.068377   62139 logs.go:276] 0 containers: []
	W0416 01:03:42.068387   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:42.068394   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:42.068457   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:42.108157   62139 cri.go:89] found id: ""
	I0416 01:03:42.108181   62139 logs.go:276] 0 containers: []
	W0416 01:03:42.108187   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:42.108193   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:42.108244   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:42.149749   62139 cri.go:89] found id: ""
	I0416 01:03:42.149770   62139 logs.go:276] 0 containers: []
	W0416 01:03:42.149777   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:42.149784   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:42.149848   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:42.185322   62139 cri.go:89] found id: ""
	I0416 01:03:42.185349   62139 logs.go:276] 0 containers: []
	W0416 01:03:42.185360   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:42.185368   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:42.185435   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:42.224334   62139 cri.go:89] found id: ""
	I0416 01:03:42.224359   62139 logs.go:276] 0 containers: []
	W0416 01:03:42.224370   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:42.224376   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:42.224435   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:42.263466   62139 cri.go:89] found id: ""
	I0416 01:03:42.263494   62139 logs.go:276] 0 containers: []
	W0416 01:03:42.263502   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:42.263509   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:42.263522   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:42.315106   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:42.315139   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:42.329394   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:42.329425   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:42.405267   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:42.405305   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:42.405321   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:42.486126   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:42.486168   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:45.027718   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:45.042387   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:45.042453   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:45.080790   62139 cri.go:89] found id: ""
	I0416 01:03:45.080814   62139 logs.go:276] 0 containers: []
	W0416 01:03:45.080823   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:45.080829   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:45.080875   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:45.121278   62139 cri.go:89] found id: ""
	I0416 01:03:45.121306   62139 logs.go:276] 0 containers: []
	W0416 01:03:45.121317   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:45.121324   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:45.121383   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:45.158076   62139 cri.go:89] found id: ""
	I0416 01:03:45.158099   62139 logs.go:276] 0 containers: []
	W0416 01:03:45.158107   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:45.158116   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:45.158162   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:45.195577   62139 cri.go:89] found id: ""
	I0416 01:03:45.195608   62139 logs.go:276] 0 containers: []
	W0416 01:03:45.195619   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:45.195627   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:45.195685   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:45.239230   62139 cri.go:89] found id: ""
	I0416 01:03:45.239257   62139 logs.go:276] 0 containers: []
	W0416 01:03:45.239267   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:45.239275   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:45.239326   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:45.279193   62139 cri.go:89] found id: ""
	I0416 01:03:45.279220   62139 logs.go:276] 0 containers: []
	W0416 01:03:45.279227   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:45.279232   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:45.279280   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:45.314876   62139 cri.go:89] found id: ""
	I0416 01:03:45.314908   62139 logs.go:276] 0 containers: []
	W0416 01:03:45.314916   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:45.314922   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:45.314970   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:45.351699   62139 cri.go:89] found id: ""
	I0416 01:03:45.351723   62139 logs.go:276] 0 containers: []
	W0416 01:03:45.351730   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:45.351738   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:45.351750   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:45.392681   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:45.392708   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:45.446564   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:45.446605   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:45.460541   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:45.460564   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:45.535287   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:45.535319   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:45.535334   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:48.117476   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:48.133341   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:48.133402   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:48.171230   62139 cri.go:89] found id: ""
	I0416 01:03:48.171263   62139 logs.go:276] 0 containers: []
	W0416 01:03:48.171273   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:48.171280   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:48.171337   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:48.206188   62139 cri.go:89] found id: ""
	I0416 01:03:48.206218   62139 logs.go:276] 0 containers: []
	W0416 01:03:48.206229   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:48.206236   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:48.206294   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:48.242349   62139 cri.go:89] found id: ""
	I0416 01:03:48.242377   62139 logs.go:276] 0 containers: []
	W0416 01:03:48.242384   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:48.242389   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:48.242437   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:48.278324   62139 cri.go:89] found id: ""
	I0416 01:03:48.278347   62139 logs.go:276] 0 containers: []
	W0416 01:03:48.278355   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:48.278360   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:48.278406   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:48.315727   62139 cri.go:89] found id: ""
	I0416 01:03:48.315753   62139 logs.go:276] 0 containers: []
	W0416 01:03:48.315763   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:48.315770   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:48.315828   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:48.354146   62139 cri.go:89] found id: ""
	I0416 01:03:48.354169   62139 logs.go:276] 0 containers: []
	W0416 01:03:48.354176   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:48.354182   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:48.354242   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:48.393951   62139 cri.go:89] found id: ""
	I0416 01:03:48.393989   62139 logs.go:276] 0 containers: []
	W0416 01:03:48.394000   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:48.394007   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:48.394081   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:48.431849   62139 cri.go:89] found id: ""
	I0416 01:03:48.431887   62139 logs.go:276] 0 containers: []
	W0416 01:03:48.431895   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:48.431903   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:48.431917   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:48.446210   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:48.446242   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:48.517459   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:48.517485   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:48.517500   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:48.596320   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:48.596356   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:48.639700   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:48.639733   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:51.197396   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:51.211803   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:51.211889   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:51.250768   62139 cri.go:89] found id: ""
	I0416 01:03:51.250793   62139 logs.go:276] 0 containers: []
	W0416 01:03:51.250802   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:51.250810   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:51.250872   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:51.291389   62139 cri.go:89] found id: ""
	I0416 01:03:51.291415   62139 logs.go:276] 0 containers: []
	W0416 01:03:51.291421   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:51.291429   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:51.291478   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:51.332466   62139 cri.go:89] found id: ""
	I0416 01:03:51.332490   62139 logs.go:276] 0 containers: []
	W0416 01:03:51.332499   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:51.332504   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:51.332549   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:51.367731   62139 cri.go:89] found id: ""
	I0416 01:03:51.367759   62139 logs.go:276] 0 containers: []
	W0416 01:03:51.367767   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:51.367773   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:51.367829   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:51.400567   62139 cri.go:89] found id: ""
	I0416 01:03:51.400599   62139 logs.go:276] 0 containers: []
	W0416 01:03:51.400609   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:51.400616   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:51.400679   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:51.433561   62139 cri.go:89] found id: ""
	I0416 01:03:51.433590   62139 logs.go:276] 0 containers: []
	W0416 01:03:51.433598   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:51.433608   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:51.433666   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:51.469136   62139 cri.go:89] found id: ""
	I0416 01:03:51.469179   62139 logs.go:276] 0 containers: []
	W0416 01:03:51.469189   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:51.469196   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:51.469255   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:51.504410   62139 cri.go:89] found id: ""
	I0416 01:03:51.504442   62139 logs.go:276] 0 containers: []
	W0416 01:03:51.504452   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:51.504462   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:51.504480   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:51.557420   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:51.557449   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:51.571481   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:51.571506   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:51.648722   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:51.648744   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:51.648755   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:51.728945   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:51.728978   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:54.272503   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:54.286573   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:54.286646   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:54.321084   62139 cri.go:89] found id: ""
	I0416 01:03:54.321115   62139 logs.go:276] 0 containers: []
	W0416 01:03:54.321125   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:54.321133   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:54.321208   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:54.366333   62139 cri.go:89] found id: ""
	I0416 01:03:54.366364   62139 logs.go:276] 0 containers: []
	W0416 01:03:54.366374   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:54.366380   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:54.366437   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:54.406267   62139 cri.go:89] found id: ""
	I0416 01:03:54.406317   62139 logs.go:276] 0 containers: []
	W0416 01:03:54.406328   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:54.406336   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:54.406405   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:54.446853   62139 cri.go:89] found id: ""
	I0416 01:03:54.446883   62139 logs.go:276] 0 containers: []
	W0416 01:03:54.446894   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:54.446901   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:54.446956   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:54.487658   62139 cri.go:89] found id: ""
	I0416 01:03:54.487683   62139 logs.go:276] 0 containers: []
	W0416 01:03:54.487690   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:54.487696   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:54.487753   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:54.530189   62139 cri.go:89] found id: ""
	I0416 01:03:54.530216   62139 logs.go:276] 0 containers: []
	W0416 01:03:54.530226   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:54.530232   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:54.530289   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:54.571317   62139 cri.go:89] found id: ""
	I0416 01:03:54.571341   62139 logs.go:276] 0 containers: []
	W0416 01:03:54.571349   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:54.571354   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:54.571416   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:54.612432   62139 cri.go:89] found id: ""
	I0416 01:03:54.612458   62139 logs.go:276] 0 containers: []
	W0416 01:03:54.612467   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:54.612478   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:54.612493   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:54.666599   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:54.666629   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:54.680880   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:54.680915   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:54.757365   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:54.757386   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:54.757398   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:54.834436   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:54.834468   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:57.405516   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:57.420694   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:57.420773   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:57.460338   62139 cri.go:89] found id: ""
	I0416 01:03:57.460367   62139 logs.go:276] 0 containers: []
	W0416 01:03:57.460374   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:57.460381   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:57.460442   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:57.498121   62139 cri.go:89] found id: ""
	I0416 01:03:57.498150   62139 logs.go:276] 0 containers: []
	W0416 01:03:57.498160   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:57.498167   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:57.498228   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:57.536959   62139 cri.go:89] found id: ""
	I0416 01:03:57.536989   62139 logs.go:276] 0 containers: []
	W0416 01:03:57.537005   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:57.537014   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:57.537077   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:57.575633   62139 cri.go:89] found id: ""
	I0416 01:03:57.575662   62139 logs.go:276] 0 containers: []
	W0416 01:03:57.575673   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:57.575680   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:57.575743   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:57.614459   62139 cri.go:89] found id: ""
	I0416 01:03:57.614491   62139 logs.go:276] 0 containers: []
	W0416 01:03:57.614501   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:57.614509   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:57.614568   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:57.657078   62139 cri.go:89] found id: ""
	I0416 01:03:57.657109   62139 logs.go:276] 0 containers: []
	W0416 01:03:57.657120   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:57.657127   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:57.657204   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:57.693882   62139 cri.go:89] found id: ""
	I0416 01:03:57.693904   62139 logs.go:276] 0 containers: []
	W0416 01:03:57.693911   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:57.693922   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:57.693969   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:57.731283   62139 cri.go:89] found id: ""
	I0416 01:03:57.731312   62139 logs.go:276] 0 containers: []
	W0416 01:03:57.731320   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:57.731327   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:57.731338   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:57.782618   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:57.782656   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:57.796763   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:57.796794   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:57.869629   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:57.869652   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:57.869665   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:57.948859   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:57.948892   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:04:00.487682   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:04:00.501095   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:04:00.501182   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:04:00.537902   62139 cri.go:89] found id: ""
	I0416 01:04:00.537931   62139 logs.go:276] 0 containers: []
	W0416 01:04:00.537939   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:04:00.537945   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:04:00.537994   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:04:00.574164   62139 cri.go:89] found id: ""
	I0416 01:04:00.574203   62139 logs.go:276] 0 containers: []
	W0416 01:04:00.574214   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:04:00.574222   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:04:00.574287   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:04:00.629592   62139 cri.go:89] found id: ""
	I0416 01:04:00.629615   62139 logs.go:276] 0 containers: []
	W0416 01:04:00.629622   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:04:00.629627   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:04:00.629679   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:04:00.672102   62139 cri.go:89] found id: ""
	I0416 01:04:00.672127   62139 logs.go:276] 0 containers: []
	W0416 01:04:00.672134   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:04:00.672141   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:04:00.672201   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:04:00.715040   62139 cri.go:89] found id: ""
	I0416 01:04:00.715064   62139 logs.go:276] 0 containers: []
	W0416 01:04:00.715072   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:04:00.715078   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:04:00.715139   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:04:00.751113   62139 cri.go:89] found id: ""
	I0416 01:04:00.751137   62139 logs.go:276] 0 containers: []
	W0416 01:04:00.751146   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:04:00.751152   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:04:00.751204   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:04:00.787613   62139 cri.go:89] found id: ""
	I0416 01:04:00.787644   62139 logs.go:276] 0 containers: []
	W0416 01:04:00.787653   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:04:00.787660   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:04:00.787721   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:04:00.824244   62139 cri.go:89] found id: ""
	I0416 01:04:00.824271   62139 logs.go:276] 0 containers: []
	W0416 01:04:00.824280   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:04:00.824291   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:04:00.824304   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:04:00.899977   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:04:00.900014   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:04:00.900029   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:04:00.982317   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:04:00.982350   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:04:01.026354   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:04:01.026393   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:04:01.080393   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:04:01.080441   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:04:03.595966   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:04:03.609190   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:04:03.609253   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:04:03.647151   62139 cri.go:89] found id: ""
	I0416 01:04:03.647183   62139 logs.go:276] 0 containers: []
	W0416 01:04:03.647197   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:04:03.647203   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:04:03.647250   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:04:03.685211   62139 cri.go:89] found id: ""
	I0416 01:04:03.685239   62139 logs.go:276] 0 containers: []
	W0416 01:04:03.685248   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:04:03.685254   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:04:03.685303   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:04:03.720928   62139 cri.go:89] found id: ""
	I0416 01:04:03.720949   62139 logs.go:276] 0 containers: []
	W0416 01:04:03.720956   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:04:03.720961   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:04:03.721035   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:04:03.759179   62139 cri.go:89] found id: ""
	I0416 01:04:03.759210   62139 logs.go:276] 0 containers: []
	W0416 01:04:03.759220   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:04:03.759228   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:04:03.759290   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:04:03.795670   62139 cri.go:89] found id: ""
	I0416 01:04:03.795700   62139 logs.go:276] 0 containers: []
	W0416 01:04:03.795710   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:04:03.795717   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:04:03.795785   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:04:03.832944   62139 cri.go:89] found id: ""
	I0416 01:04:03.832971   62139 logs.go:276] 0 containers: []
	W0416 01:04:03.832980   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:04:03.832988   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:04:03.833053   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:04:03.869211   62139 cri.go:89] found id: ""
	I0416 01:04:03.869238   62139 logs.go:276] 0 containers: []
	W0416 01:04:03.869248   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:04:03.869256   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:04:03.869317   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:04:03.905859   62139 cri.go:89] found id: ""
	I0416 01:04:03.905888   62139 logs.go:276] 0 containers: []
	W0416 01:04:03.905896   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:04:03.905904   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:04:03.905915   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:04:03.957057   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:04:03.957088   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:04:03.972309   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:04:03.972344   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:04:04.049927   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:04:04.049950   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:04:04.049965   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:04:04.136395   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:04:04.136435   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:04:06.676667   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:04:06.690062   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:04:06.690125   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:04:06.733734   62139 cri.go:89] found id: ""
	I0416 01:04:06.733758   62139 logs.go:276] 0 containers: []
	W0416 01:04:06.733773   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:04:06.733782   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:04:06.733835   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:04:06.773112   62139 cri.go:89] found id: ""
	I0416 01:04:06.773140   62139 logs.go:276] 0 containers: []
	W0416 01:04:06.773147   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:04:06.773152   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:04:06.773231   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:04:06.812786   62139 cri.go:89] found id: ""
	I0416 01:04:06.812809   62139 logs.go:276] 0 containers: []
	W0416 01:04:06.812817   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:04:06.812822   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:04:06.812870   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:04:06.853995   62139 cri.go:89] found id: ""
	I0416 01:04:06.854022   62139 logs.go:276] 0 containers: []
	W0416 01:04:06.854029   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:04:06.854034   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:04:06.854088   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:04:06.893809   62139 cri.go:89] found id: ""
	I0416 01:04:06.893841   62139 logs.go:276] 0 containers: []
	W0416 01:04:06.893848   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:04:06.893853   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:04:06.893909   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:04:06.929389   62139 cri.go:89] found id: ""
	I0416 01:04:06.929419   62139 logs.go:276] 0 containers: []
	W0416 01:04:06.929430   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:04:06.929437   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:04:06.929518   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:04:06.968278   62139 cri.go:89] found id: ""
	I0416 01:04:06.968303   62139 logs.go:276] 0 containers: []
	W0416 01:04:06.968311   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:04:06.968316   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:04:06.968364   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:04:07.018932   62139 cri.go:89] found id: ""
	I0416 01:04:07.018965   62139 logs.go:276] 0 containers: []
	W0416 01:04:07.018976   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:04:07.018989   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:04:07.019003   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:04:07.083611   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:04:07.083645   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:04:07.110126   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:04:07.110152   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:04:07.186262   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:04:07.186290   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:04:07.186305   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:04:07.263139   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:04:07.263170   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:04:09.807489   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:04:09.822045   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:04:09.822110   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:04:09.867444   62139 cri.go:89] found id: ""
	I0416 01:04:09.867469   62139 logs.go:276] 0 containers: []
	W0416 01:04:09.867480   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:04:09.867487   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:04:09.867538   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:04:09.904280   62139 cri.go:89] found id: ""
	I0416 01:04:09.904312   62139 logs.go:276] 0 containers: []
	W0416 01:04:09.904323   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:04:09.904330   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:04:09.904389   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:04:09.941066   62139 cri.go:89] found id: ""
	I0416 01:04:09.941091   62139 logs.go:276] 0 containers: []
	W0416 01:04:09.941099   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:04:09.941107   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:04:09.941189   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:04:09.975739   62139 cri.go:89] found id: ""
	I0416 01:04:09.975767   62139 logs.go:276] 0 containers: []
	W0416 01:04:09.975777   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:04:09.975785   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:04:09.975844   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:04:10.011414   62139 cri.go:89] found id: ""
	I0416 01:04:10.011444   62139 logs.go:276] 0 containers: []
	W0416 01:04:10.011454   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:04:10.011461   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:04:10.011528   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:04:10.045670   62139 cri.go:89] found id: ""
	I0416 01:04:10.045695   62139 logs.go:276] 0 containers: []
	W0416 01:04:10.045704   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:04:10.045711   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:04:10.045777   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:04:10.082320   62139 cri.go:89] found id: ""
	I0416 01:04:10.082352   62139 logs.go:276] 0 containers: []
	W0416 01:04:10.082361   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:04:10.082368   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:04:10.082428   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:04:10.120453   62139 cri.go:89] found id: ""
	I0416 01:04:10.120482   62139 logs.go:276] 0 containers: []
	W0416 01:04:10.120492   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:04:10.120501   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:04:10.120515   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:04:10.200213   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:04:10.200251   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:04:10.251709   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:04:10.251742   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:04:10.307348   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:04:10.307382   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:04:10.321293   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:04:10.321319   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:04:10.401361   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
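	The repeating blocks above are minikube's control-plane restart loop: every few seconds it asks crictl for each expected control-plane container, finds none, and then gathers the same diagnostics (kubelet and CRI-O journals, dmesg, describe-nodes, container status) before polling again. A minimal shell sketch of those checks, assuming a shell on the affected node (for example via `minikube ssh`); the commands are the ones already visible in the log:

	  # Look for the control-plane containers minikube is waiting on.
	  sudo crictl ps -a --quiet --name=kube-apiserver   # empty output = no apiserver container yet
	  sudo crictl ps -a --quiet --name=etcd
	  # The diagnostics minikube gathers when nothing is found.
	  sudo journalctl -u kubelet -n 400
	  sudo journalctl -u crio -n 400
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	  sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig      # refused: nothing is serving on localhost:8443

	Every cycle ends with the same "connection to the server localhost:8443 was refused", so the apiserver never comes up during the roughly four-minute restart window and minikube falls back to a full cluster reset below.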
	I0416 01:04:12.901763   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:04:12.916308   62139 kubeadm.go:591] duration metric: took 4m4.703830076s to restartPrimaryControlPlane
	W0416 01:04:12.916384   62139 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0416 01:04:12.916416   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0416 01:04:17.897436   62139 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.980993606s)
	I0416 01:04:17.897592   62139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 01:04:17.914655   62139 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 01:04:17.927482   62139 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 01:04:17.940210   62139 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 01:04:17.940233   62139 kubeadm.go:156] found existing configuration files:
	
	I0416 01:04:17.940274   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 01:04:17.951037   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 01:04:17.951106   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 01:04:17.962341   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 01:04:17.972436   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 01:04:17.972500   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 01:04:17.983198   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 01:04:17.992856   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 01:04:17.992912   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 01:04:18.003122   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 01:04:18.014064   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 01:04:18.014117   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
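	The grep/rm pairs above are minikube's stale-kubeconfig cleanup before re-running kubeadm: each file under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, otherwise it is removed. In this run every grep exits with status 2 because the files no longer exist (kubeadm reset already deleted them), so the rm calls are no-ops. A compact sketch of the same pattern (the loop is illustrative; minikube issues the commands individually):

	  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	      || sudo rm -f "/etc/kubernetes/$f"    # missing or mismatched file is removed
	  done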
	I0416 01:04:18.024854   62139 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 01:04:18.101381   62139 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0416 01:04:18.101436   62139 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 01:04:18.246529   62139 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 01:04:18.246687   62139 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 01:04:18.246802   62139 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 01:04:18.456847   62139 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 01:04:18.458980   62139 out.go:204]   - Generating certificates and keys ...
	I0416 01:04:18.459096   62139 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 01:04:18.459190   62139 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 01:04:18.459294   62139 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0416 01:04:18.459381   62139 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0416 01:04:18.459473   62139 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0416 01:04:18.459548   62139 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0416 01:04:18.459631   62139 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0416 01:04:18.459721   62139 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0416 01:04:18.459822   62139 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0416 01:04:18.460281   62139 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0416 01:04:18.460387   62139 kubeadm.go:309] [certs] Using the existing "sa" key
	I0416 01:04:18.460475   62139 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 01:04:18.564910   62139 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 01:04:18.806406   62139 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 01:04:18.890124   62139 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 01:04:19.046415   62139 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 01:04:19.063159   62139 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 01:04:19.063301   62139 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 01:04:19.063415   62139 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 01:04:19.229066   62139 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 01:04:19.231110   62139 out.go:204]   - Booting up control plane ...
	I0416 01:04:19.231246   62139 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 01:04:19.248833   62139 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 01:04:19.250340   62139 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 01:04:19.251664   62139 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 01:04:19.254678   62139 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 01:04:59.255478   62139 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0416 01:04:59.256524   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:04:59.256807   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:05:04.257472   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:05:04.257756   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:05:14.258629   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:05:14.258807   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:05:34.259492   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:05:34.259704   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:06:14.261576   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:06:14.261834   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:06:14.261849   62139 kubeadm.go:309] 
	I0416 01:06:14.261890   62139 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0416 01:06:14.261973   62139 kubeadm.go:309] 		timed out waiting for the condition
	I0416 01:06:14.262006   62139 kubeadm.go:309] 
	I0416 01:06:14.262051   62139 kubeadm.go:309] 	This error is likely caused by:
	I0416 01:06:14.262082   62139 kubeadm.go:309] 		- The kubelet is not running
	I0416 01:06:14.262174   62139 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0416 01:06:14.262199   62139 kubeadm.go:309] 
	I0416 01:06:14.262357   62139 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0416 01:06:14.262414   62139 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0416 01:06:14.262471   62139 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0416 01:06:14.262481   62139 kubeadm.go:309] 
	I0416 01:06:14.262610   62139 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0416 01:06:14.262707   62139 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0416 01:06:14.262717   62139 kubeadm.go:309] 
	I0416 01:06:14.262867   62139 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0416 01:06:14.263010   62139 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0416 01:06:14.263142   62139 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0416 01:06:14.263211   62139 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0416 01:06:14.263234   62139 kubeadm.go:309] 
	I0416 01:06:14.264084   62139 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 01:06:14.264204   62139 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0416 01:06:14.264312   62139 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0416 01:06:14.264460   62139 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
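	The failure text above ends with kubeadm's own troubleshooting recipe. A short sketch of those steps on this CRI-O node, using only the commands already quoted in the message (CONTAINERID is kubeadm's placeholder, not an ID from this run):

	  systemctl status kubelet                    # is the kubelet actually running?
	  journalctl -xeu kubelet                     # why the kubelet-check keeps failing
	  curl -sSL http://localhost:10248/healthz    # the probe kubeadm's kubelet-check polls
	  sudo systemctl enable kubelet.service       # addresses the [WARNING Service-Kubelet] above
	  # List control-plane containers and inspect a failing one:
	  sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	  sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

	Here the healthz probe is refused outright, which is consistent with the kubelet never starting rather than with a control-plane container crashing after launch.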
	
	I0416 01:06:14.264526   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0416 01:06:15.653692   62139 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.389136497s)
	I0416 01:06:15.653831   62139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 01:06:15.669141   62139 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 01:06:15.679485   62139 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 01:06:15.679511   62139 kubeadm.go:156] found existing configuration files:
	
	I0416 01:06:15.679556   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 01:06:15.689898   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 01:06:15.689974   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 01:06:15.700563   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 01:06:15.710363   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 01:06:15.710445   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 01:06:15.719877   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 01:06:15.728947   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 01:06:15.729002   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 01:06:15.739360   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 01:06:15.749479   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 01:06:15.749557   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 01:06:15.760930   62139 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 01:06:16.000974   62139 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 01:08:12.327133   62139 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0416 01:08:12.327246   62139 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0416 01:08:12.328995   62139 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0416 01:08:12.329092   62139 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 01:08:12.329220   62139 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 01:08:12.329302   62139 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 01:08:12.329440   62139 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 01:08:12.329537   62139 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 01:08:12.331381   62139 out.go:204]   - Generating certificates and keys ...
	I0416 01:08:12.331474   62139 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 01:08:12.331558   62139 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 01:08:12.331658   62139 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0416 01:08:12.331742   62139 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0416 01:08:12.331830   62139 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0416 01:08:12.331910   62139 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0416 01:08:12.331968   62139 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0416 01:08:12.332020   62139 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0416 01:08:12.332085   62139 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0416 01:08:12.332159   62139 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0416 01:08:12.332210   62139 kubeadm.go:309] [certs] Using the existing "sa" key
	I0416 01:08:12.332297   62139 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 01:08:12.332376   62139 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 01:08:12.332466   62139 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 01:08:12.332547   62139 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 01:08:12.332642   62139 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 01:08:12.332790   62139 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 01:08:12.332895   62139 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 01:08:12.332938   62139 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 01:08:12.333002   62139 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 01:08:12.334632   62139 out.go:204]   - Booting up control plane ...
	I0416 01:08:12.334737   62139 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 01:08:12.334837   62139 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 01:08:12.334928   62139 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 01:08:12.335009   62139 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 01:08:12.335162   62139 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 01:08:12.335241   62139 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0416 01:08:12.335333   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:08:12.335541   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:08:12.335613   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:08:12.335771   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:08:12.335848   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:08:12.336035   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:08:12.336109   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:08:12.336365   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:08:12.336438   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:08:12.336704   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:08:12.336716   62139 kubeadm.go:309] 
	I0416 01:08:12.336779   62139 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0416 01:08:12.336827   62139 kubeadm.go:309] 		timed out waiting for the condition
	I0416 01:08:12.336834   62139 kubeadm.go:309] 
	I0416 01:08:12.336883   62139 kubeadm.go:309] 	This error is likely caused by:
	I0416 01:08:12.336922   62139 kubeadm.go:309] 		- The kubelet is not running
	I0416 01:08:12.337025   62139 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0416 01:08:12.337036   62139 kubeadm.go:309] 
	I0416 01:08:12.337145   62139 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0416 01:08:12.337211   62139 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0416 01:08:12.337245   62139 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0416 01:08:12.337253   62139 kubeadm.go:309] 
	I0416 01:08:12.337340   62139 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0416 01:08:12.337428   62139 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0416 01:08:12.337436   62139 kubeadm.go:309] 
	I0416 01:08:12.337529   62139 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0416 01:08:12.337602   62139 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0416 01:08:12.337701   62139 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0416 01:08:12.337870   62139 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0416 01:08:12.337957   62139 kubeadm.go:393] duration metric: took 8m4.174818047s to StartCluster
	I0416 01:08:12.337969   62139 kubeadm.go:309] 
	I0416 01:08:12.338009   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:08:12.338067   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:08:12.391937   62139 cri.go:89] found id: ""
	I0416 01:08:12.391963   62139 logs.go:276] 0 containers: []
	W0416 01:08:12.391986   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:08:12.391994   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:08:12.392072   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:08:12.430575   62139 cri.go:89] found id: ""
	I0416 01:08:12.430602   62139 logs.go:276] 0 containers: []
	W0416 01:08:12.430616   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:08:12.430623   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:08:12.430685   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:08:12.469115   62139 cri.go:89] found id: ""
	I0416 01:08:12.469143   62139 logs.go:276] 0 containers: []
	W0416 01:08:12.469152   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:08:12.469173   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:08:12.469228   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:08:12.508599   62139 cri.go:89] found id: ""
	I0416 01:08:12.508630   62139 logs.go:276] 0 containers: []
	W0416 01:08:12.508640   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:08:12.508648   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:08:12.508698   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:08:12.547785   62139 cri.go:89] found id: ""
	I0416 01:08:12.547817   62139 logs.go:276] 0 containers: []
	W0416 01:08:12.547829   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:08:12.547836   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:08:12.547910   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:08:12.599526   62139 cri.go:89] found id: ""
	I0416 01:08:12.599549   62139 logs.go:276] 0 containers: []
	W0416 01:08:12.599557   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:08:12.599563   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:08:12.599612   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:08:12.639914   62139 cri.go:89] found id: ""
	I0416 01:08:12.639944   62139 logs.go:276] 0 containers: []
	W0416 01:08:12.639954   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:08:12.639962   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:08:12.640041   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:08:12.676025   62139 cri.go:89] found id: ""
	I0416 01:08:12.676057   62139 logs.go:276] 0 containers: []
	W0416 01:08:12.676066   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:08:12.676079   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:08:12.676100   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:08:12.774744   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:08:12.774769   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:08:12.774785   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:08:12.902751   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:08:12.902787   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:08:12.947370   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:08:12.947406   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:08:13.002186   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:08:13.002223   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0416 01:08:13.017193   62139 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0416 01:08:13.017234   62139 out.go:239] * 
	W0416 01:08:13.017283   62139 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0416 01:08:13.017304   62139 out.go:239] * 
	W0416 01:08:13.018151   62139 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0416 01:08:13.021371   62139 out.go:177] 
	W0416 01:08:13.022572   62139 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0416 01:08:13.022640   62139 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0416 01:08:13.022670   62139 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0416 01:08:13.024248   62139 out.go:177] 

                                                
                                                
** /stderr **
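The stderr above repeatedly points at the kubelet as the failing component and lists the checks kubeadm itself recommends. A minimal sketch of running those checks against the node, assuming the profile name from this run (old-k8s-version-800769) and the test's minikube binary; the commands simply mirror the advice printed in the log and are not part of the test itself:

	# is the kubelet service actually running on the node?
	out/minikube-linux-amd64 -p old-k8s-version-800769 ssh "sudo systemctl status kubelet"
	# recent kubelet journal entries usually show the underlying error (e.g. a cgroup-driver mismatch)
	out/minikube-linux-amd64 -p old-k8s-version-800769 ssh "sudo journalctl -xeu kubelet | tail -n 100"
	# list any control-plane containers CRI-O managed to start
	out/minikube-linux-amd64 -p old-k8s-version-800769 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"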
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-800769 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
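The suggestion in the output above is to pin the kubelet cgroup driver to systemd. A sketch of the retry, reusing the exact flags from the failing invocation and adding only the suggested --extra-config (whether this resolves the failure is not verified here):

	out/minikube-linux-amd64 start -p old-k8s-version-800769 --memory=2200 --alsologtostderr --wait=true \
	  --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false \
	  --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd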
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-800769 -n old-k8s-version-800769
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-800769 -n old-k8s-version-800769: exit status 2 (244.665701ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-800769 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-800769 logs -n 25: (1.499057494s)
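The advisory box in the log also asks for full logs when reporting the failure upstream; a sketch of capturing them for this profile (logs.txt is just the file name quoted in that box):

	out/minikube-linux-amd64 -p old-k8s-version-800769 logs --file=logs.txt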
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p cert-expiration-359535                              | cert-expiration-359535       | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:52 UTC | 16 Apr 24 00:52 UTC |
	| start   | -p newest-cni-012509 --memory=2200 --alsologtostderr   | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:52 UTC | 16 Apr 24 00:53 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |                |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |                |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2                      |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p newest-cni-012509             | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:53 UTC | 16 Apr 24 00:53 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p newest-cni-012509                                   | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:53 UTC | 16 Apr 24 00:53 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p newest-cni-012509                  | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:53 UTC | 16 Apr 24 00:53 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p newest-cni-012509 --memory=2200 --alsologtostderr   | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:53 UTC | 16 Apr 24 00:54 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |                |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |                |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2                      |                              |         |                |                     |                     |
	| image   | newest-cni-012509 image list                           | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 00:54 UTC |
	|         | --format=json                                          |                              |         |                |                     |                     |
	| pause   | -p newest-cni-012509                                   | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 00:54 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |                |                     |                     |
	| unpause | -p newest-cni-012509                                   | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 00:54 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |                |                     |                     |
	| delete  | -p newest-cni-012509                                   | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 00:54 UTC |
	| delete  | -p newest-cni-012509                                   | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 00:54 UTC |
	| delete  | -p                                                     | disable-driver-mounts-988802 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 00:54 UTC |
	|         | disable-driver-mounts-988802                           |                              |         |                |                     |                     |
	| start   | -p embed-certs-617092                                  | embed-certs-617092           | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 00:56 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-653942       | default-k8s-diff-port-653942 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-572602                  | no-preload-572602            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-653942 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 01:06 UTC |
	|         | default-k8s-diff-port-653942                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-800769        | old-k8s-version-800769       | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| start   | -p no-preload-572602                                   | no-preload-572602            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 01:05 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |                |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2                      |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-617092            | embed-certs-617092           | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:56 UTC | 16 Apr 24 00:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-617092                                  | embed-certs-617092           | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:56 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-800769                              | old-k8s-version-800769       | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:56 UTC | 16 Apr 24 00:56 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-800769             | old-k8s-version-800769       | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:56 UTC | 16 Apr 24 00:56 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-800769                              | old-k8s-version-800769       | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:56 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-617092                 | embed-certs-617092           | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:58 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p embed-certs-617092                                  | embed-certs-617092           | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:58 UTC | 16 Apr 24 01:05 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 00:58:42
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 00:58:42.797832   62747 out.go:291] Setting OutFile to fd 1 ...
	I0416 00:58:42.797983   62747 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:58:42.797994   62747 out.go:304] Setting ErrFile to fd 2...
	I0416 00:58:42.797998   62747 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:58:42.798182   62747 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-7542/.minikube/bin
	I0416 00:58:42.798686   62747 out.go:298] Setting JSON to false
	I0416 00:58:42.799629   62747 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6067,"bootTime":1713223056,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0416 00:58:42.799687   62747 start.go:139] virtualization: kvm guest
	I0416 00:58:42.801878   62747 out.go:177] * [embed-certs-617092] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0416 00:58:42.803202   62747 out.go:177]   - MINIKUBE_LOCATION=18647
	I0416 00:58:42.804389   62747 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 00:58:42.803288   62747 notify.go:220] Checking for updates...
	I0416 00:58:42.805742   62747 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0416 00:58:42.807023   62747 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18647-7542/.minikube
	I0416 00:58:42.808185   62747 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0416 00:58:42.809402   62747 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 00:58:42.811188   62747 config.go:182] Loaded profile config "embed-certs-617092": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 00:58:42.811772   62747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:58:42.811833   62747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:58:42.826377   62747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44973
	I0416 00:58:42.826730   62747 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:58:42.827217   62747 main.go:141] libmachine: Using API Version  1
	I0416 00:58:42.827233   62747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:58:42.827541   62747 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:58:42.827737   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 00:58:42.827964   62747 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 00:58:42.828239   62747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:58:42.828274   62747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:58:42.842499   62747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34791
	I0416 00:58:42.842872   62747 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:58:42.843283   62747 main.go:141] libmachine: Using API Version  1
	I0416 00:58:42.843300   62747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:58:42.843636   62747 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:58:42.843830   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 00:58:42.874583   62747 out.go:177] * Using the kvm2 driver based on existing profile
	I0416 00:58:42.875910   62747 start.go:297] selected driver: kvm2
	I0416 00:58:42.875933   62747 start.go:901] validating driver "kvm2" against &{Name:embed-certs-617092 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-617092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.225 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 00:58:42.876072   62747 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 00:58:42.876741   62747 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 00:58:42.876826   62747 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18647-7542/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0416 00:58:42.890834   62747 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0416 00:58:42.891212   62747 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 00:58:42.891270   62747 cni.go:84] Creating CNI manager for ""
	I0416 00:58:42.891283   62747 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 00:58:42.891314   62747 start.go:340] cluster config:
	{Name:embed-certs-617092 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-617092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.225 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 00:58:42.891412   62747 iso.go:125] acquiring lock: {Name:mk848ef90fbc2a1876645fc8fc16af382c3bcaa9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 00:58:42.893179   62747 out.go:177] * Starting "embed-certs-617092" primary control-plane node in "embed-certs-617092" cluster
	I0416 00:58:42.894232   62747 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 00:58:42.894260   62747 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0416 00:58:42.894267   62747 cache.go:56] Caching tarball of preloaded images
	I0416 00:58:42.894353   62747 preload.go:173] Found /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0416 00:58:42.894365   62747 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0416 00:58:42.894458   62747 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092/config.json ...
	I0416 00:58:42.894628   62747 start.go:360] acquireMachinesLock for embed-certs-617092: {Name:mk92bff49461487f8cebf2747ccf61ccb9c772a2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 00:58:47.545405   61267 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.216:22: connect: no route to host
	I0416 00:58:50.617454   61267 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.216:22: connect: no route to host
	I0416 00:58:56.697459   61267 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.216:22: connect: no route to host
	I0416 00:58:59.769461   61267 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.216:22: connect: no route to host
	I0416 00:59:05.849462   61267 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.216:22: connect: no route to host
	I0416 00:59:08.921459   61267 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.216:22: connect: no route to host
	I0416 00:59:15.001430   61267 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.216:22: connect: no route to host
	I0416 00:59:21.078070   61500 start.go:364] duration metric: took 4m33.431027521s to acquireMachinesLock for "no-preload-572602"
	I0416 00:59:21.078134   61500 start.go:96] Skipping create...Using existing machine configuration
	I0416 00:59:21.078152   61500 fix.go:54] fixHost starting: 
	I0416 00:59:21.078760   61500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:59:21.078809   61500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:59:21.093476   61500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36767
	I0416 00:59:21.093934   61500 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:59:21.094422   61500 main.go:141] libmachine: Using API Version  1
	I0416 00:59:21.094448   61500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:59:21.094749   61500 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:59:21.094902   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 00:59:21.095048   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetState
	I0416 00:59:21.096678   61500 fix.go:112] recreateIfNeeded on no-preload-572602: state=Stopped err=<nil>
	I0416 00:59:21.096697   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	W0416 00:59:21.096846   61500 fix.go:138] unexpected machine state, will restart: <nil>
	I0416 00:59:21.098527   61500 out.go:177] * Restarting existing kvm2 VM for "no-preload-572602" ...
	I0416 00:59:18.073453   61267 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.216:22: connect: no route to host
	I0416 00:59:21.075633   61267 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 00:59:21.075671   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetMachineName
	I0416 00:59:21.075991   61267 buildroot.go:166] provisioning hostname "default-k8s-diff-port-653942"
	I0416 00:59:21.076014   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetMachineName
	I0416 00:59:21.076225   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 00:59:21.077923   61267 machine.go:97] duration metric: took 4m34.542024225s to provisionDockerMachine
	I0416 00:59:21.077967   61267 fix.go:56] duration metric: took 4m34.567596715s for fixHost
	I0416 00:59:21.077978   61267 start.go:83] releasing machines lock for "default-k8s-diff-port-653942", held for 4m34.567645643s
	W0416 00:59:21.078001   61267 start.go:713] error starting host: provision: host is not running
	W0416 00:59:21.078088   61267 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0416 00:59:21.078097   61267 start.go:728] Will try again in 5 seconds ...
	I0416 00:59:21.099788   61500 main.go:141] libmachine: (no-preload-572602) Calling .Start
	I0416 00:59:21.099966   61500 main.go:141] libmachine: (no-preload-572602) Ensuring networks are active...
	I0416 00:59:21.100656   61500 main.go:141] libmachine: (no-preload-572602) Ensuring network default is active
	I0416 00:59:21.100937   61500 main.go:141] libmachine: (no-preload-572602) Ensuring network mk-no-preload-572602 is active
	I0416 00:59:21.101282   61500 main.go:141] libmachine: (no-preload-572602) Getting domain xml...
	I0416 00:59:21.101905   61500 main.go:141] libmachine: (no-preload-572602) Creating domain...
	I0416 00:59:22.294019   61500 main.go:141] libmachine: (no-preload-572602) Waiting to get IP...
	I0416 00:59:22.294922   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:22.295294   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:22.295349   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:22.295262   62936 retry.go:31] will retry after 220.952312ms: waiting for machine to come up
	I0416 00:59:22.517753   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:22.518334   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:22.518358   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:22.518287   62936 retry.go:31] will retry after 377.547009ms: waiting for machine to come up
	I0416 00:59:26.081716   61267 start.go:360] acquireMachinesLock for default-k8s-diff-port-653942: {Name:mk92bff49461487f8cebf2747ccf61ccb9c772a2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 00:59:22.897924   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:22.898442   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:22.898465   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:22.898394   62936 retry.go:31] will retry after 450.415086ms: waiting for machine to come up
	I0416 00:59:23.349893   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:23.350383   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:23.350420   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:23.350333   62936 retry.go:31] will retry after 385.340718ms: waiting for machine to come up
	I0416 00:59:23.736854   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:23.737225   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:23.737262   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:23.737205   62936 retry.go:31] will retry after 696.175991ms: waiting for machine to come up
	I0416 00:59:24.435231   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:24.435587   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:24.435616   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:24.435557   62936 retry.go:31] will retry after 644.402152ms: waiting for machine to come up
	I0416 00:59:25.081355   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:25.081660   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:25.081697   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:25.081626   62936 retry.go:31] will retry after 809.585997ms: waiting for machine to come up
	I0416 00:59:25.892402   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:25.892767   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:25.892797   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:25.892722   62936 retry.go:31] will retry after 1.07477705s: waiting for machine to come up
	I0416 00:59:26.969227   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:26.969617   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:26.969646   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:26.969561   62936 retry.go:31] will retry after 1.243937595s: waiting for machine to come up
	I0416 00:59:28.214995   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:28.215412   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:28.215433   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:28.215364   62936 retry.go:31] will retry after 1.775188434s: waiting for machine to come up
	I0416 00:59:29.993420   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:29.993825   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:29.993853   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:29.993779   62936 retry.go:31] will retry after 2.73873778s: waiting for machine to come up
	I0416 00:59:32.735350   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:32.735758   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:32.735809   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:32.735721   62936 retry.go:31] will retry after 2.208871896s: waiting for machine to come up
	I0416 00:59:34.947005   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:34.947400   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:34.947431   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:34.947358   62936 retry.go:31] will retry after 4.484880009s: waiting for machine to come up
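
The repeated retry.go:31 lines above are the driver polling libvirt for the VM's DHCP lease, sleeping a little longer (with jitter) between attempts until an address appears. A minimal, self-contained Go sketch of that polling pattern follows; the lookupIP helper and the exact delay growth are illustrative assumptions, not minikube's actual implementation.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP is a stand-in for querying the libvirt DHCP leases; it is
    // assumed here purely for illustration and fails until the lease exists.
    func lookupIP(attempt int) (string, error) {
        if attempt < 5 {
            return "", errors.New("unable to find current IP address of domain")
        }
        return "192.168.39.121", nil
    }

    func main() {
        delay := 200 * time.Millisecond
        for attempt := 0; ; attempt++ {
            ip, err := lookupIP(attempt)
            if err == nil {
                fmt.Println("Found IP for machine:", ip)
                return
            }
            // Grow the wait and add jitter, roughly mirroring the
            // "will retry after ..." intervals in the log.
            wait := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
            time.Sleep(wait)
            delay = delay * 3 / 2
        }
    }

In the real run the loop is also bounded by a timeout, which is why a VM that never obtains a lease eventually surfaces as a test failure rather than hanging forever.
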
	I0416 00:59:40.669954   62139 start.go:364] duration metric: took 3m18.466569456s to acquireMachinesLock for "old-k8s-version-800769"
	I0416 00:59:40.670015   62139 start.go:96] Skipping create...Using existing machine configuration
	I0416 00:59:40.670038   62139 fix.go:54] fixHost starting: 
	I0416 00:59:40.670411   62139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:59:40.670448   62139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:59:40.686269   62139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39043
	I0416 00:59:40.686633   62139 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:59:40.687125   62139 main.go:141] libmachine: Using API Version  1
	I0416 00:59:40.687162   62139 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:59:40.687481   62139 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:59:40.687672   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	I0416 00:59:40.687838   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetState
	I0416 00:59:40.689108   62139 fix.go:112] recreateIfNeeded on old-k8s-version-800769: state=Stopped err=<nil>
	I0416 00:59:40.689132   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	W0416 00:59:40.689286   62139 fix.go:138] unexpected machine state, will restart: <nil>
	I0416 00:59:40.691869   62139 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-800769" ...
	I0416 00:59:40.693292   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .Start
	I0416 00:59:40.693450   62139 main.go:141] libmachine: (old-k8s-version-800769) Ensuring networks are active...
	I0416 00:59:40.694152   62139 main.go:141] libmachine: (old-k8s-version-800769) Ensuring network default is active
	I0416 00:59:40.694457   62139 main.go:141] libmachine: (old-k8s-version-800769) Ensuring network mk-old-k8s-version-800769 is active
	I0416 00:59:40.694883   62139 main.go:141] libmachine: (old-k8s-version-800769) Getting domain xml...
	I0416 00:59:40.695720   62139 main.go:141] libmachine: (old-k8s-version-800769) Creating domain...
	I0416 00:59:41.913001   62139 main.go:141] libmachine: (old-k8s-version-800769) Waiting to get IP...
	I0416 00:59:41.913874   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:41.914260   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:41.914318   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:41.914237   63071 retry.go:31] will retry after 261.032707ms: waiting for machine to come up
	I0416 00:59:39.436244   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.436664   61500 main.go:141] libmachine: (no-preload-572602) Found IP for machine: 192.168.39.121
	I0416 00:59:39.436686   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has current primary IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.436694   61500 main.go:141] libmachine: (no-preload-572602) Reserving static IP address...
	I0416 00:59:39.437114   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "no-preload-572602", mac: "52:54:00:fb:a5:f3", ip: "192.168.39.121"} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:39.437151   61500 main.go:141] libmachine: (no-preload-572602) Reserved static IP address: 192.168.39.121
	I0416 00:59:39.437183   61500 main.go:141] libmachine: (no-preload-572602) DBG | skip adding static IP to network mk-no-preload-572602 - found existing host DHCP lease matching {name: "no-preload-572602", mac: "52:54:00:fb:a5:f3", ip: "192.168.39.121"}
	I0416 00:59:39.437197   61500 main.go:141] libmachine: (no-preload-572602) Waiting for SSH to be available...
	I0416 00:59:39.437215   61500 main.go:141] libmachine: (no-preload-572602) DBG | Getting to WaitForSSH function...
	I0416 00:59:39.439255   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.439613   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:39.439642   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.439723   61500 main.go:141] libmachine: (no-preload-572602) DBG | Using SSH client type: external
	I0416 00:59:39.439756   61500 main.go:141] libmachine: (no-preload-572602) DBG | Using SSH private key: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/no-preload-572602/id_rsa (-rw-------)
	I0416 00:59:39.439799   61500 main.go:141] libmachine: (no-preload-572602) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.121 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18647-7542/.minikube/machines/no-preload-572602/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0416 00:59:39.439822   61500 main.go:141] libmachine: (no-preload-572602) DBG | About to run SSH command:
	I0416 00:59:39.439835   61500 main.go:141] libmachine: (no-preload-572602) DBG | exit 0
	I0416 00:59:39.565190   61500 main.go:141] libmachine: (no-preload-572602) DBG | SSH cmd err, output: <nil>: 
	I0416 00:59:39.565584   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetConfigRaw
	I0416 00:59:39.566223   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetIP
	I0416 00:59:39.568572   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.568869   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:39.568906   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.569083   61500 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602/config.json ...
	I0416 00:59:39.569300   61500 machine.go:94] provisionDockerMachine start ...
	I0416 00:59:39.569318   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 00:59:39.569526   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:59:39.571536   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.571842   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:39.571868   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.572004   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 00:59:39.572189   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:39.572352   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:39.572505   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 00:59:39.572751   61500 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:39.572974   61500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I0416 00:59:39.572991   61500 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 00:59:39.681544   61500 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 00:59:39.681574   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetMachineName
	I0416 00:59:39.681845   61500 buildroot.go:166] provisioning hostname "no-preload-572602"
	I0416 00:59:39.681874   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetMachineName
	I0416 00:59:39.682088   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:59:39.684694   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.685029   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:39.685063   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.685259   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 00:59:39.685453   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:39.685608   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:39.685737   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 00:59:39.685887   61500 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:39.686066   61500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I0416 00:59:39.686090   61500 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-572602 && echo "no-preload-572602" | sudo tee /etc/hostname
	I0416 00:59:39.804124   61500 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-572602
	
	I0416 00:59:39.804149   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:59:39.807081   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.807447   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:39.807480   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.807651   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 00:59:39.807860   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:39.808048   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:39.808202   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 00:59:39.808393   61500 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:39.808618   61500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I0416 00:59:39.808644   61500 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-572602' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-572602/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-572602' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 00:59:39.921781   61500 main.go:141] libmachine: SSH cmd err, output: <nil>: 
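
The SSH command above idempotently points 127.0.1.1 at the new hostname in the guest's /etc/hosts: it only edits the file if the name is missing, rewriting an existing 127.0.1.1 entry or appending one. A small sketch of how such a snippet can be generated for an arbitrary hostname (only the string construction is shown; running it over SSH is omitted):

    package main

    import "fmt"

    // hostsFixup returns a shell snippet that adds or rewrites the
    // 127.0.1.1 entry for the given hostname, matching the idempotent
    // pattern visible in the provisioning log above.
    func hostsFixup(hostname string) string {
        return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
      else
        echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
      fi
    fi`, hostname)
    }

    func main() {
        fmt.Println(hostsFixup("no-preload-572602"))
    }
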
	I0416 00:59:39.921824   61500 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18647-7542/.minikube CaCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18647-7542/.minikube}
	I0416 00:59:39.921847   61500 buildroot.go:174] setting up certificates
	I0416 00:59:39.921857   61500 provision.go:84] configureAuth start
	I0416 00:59:39.921872   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetMachineName
	I0416 00:59:39.922150   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetIP
	I0416 00:59:39.924726   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.925052   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:39.925081   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.925199   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:59:39.927315   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.927820   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:39.927869   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.927934   61500 provision.go:143] copyHostCerts
	I0416 00:59:39.928005   61500 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem, removing ...
	I0416 00:59:39.928031   61500 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem
	I0416 00:59:39.928122   61500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem (1082 bytes)
	I0416 00:59:39.928231   61500 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem, removing ...
	I0416 00:59:39.928241   61500 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem
	I0416 00:59:39.928284   61500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem (1123 bytes)
	I0416 00:59:39.928370   61500 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem, removing ...
	I0416 00:59:39.928379   61500 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem
	I0416 00:59:39.928428   61500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem (1675 bytes)
	I0416 00:59:39.928498   61500 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem org=jenkins.no-preload-572602 san=[127.0.0.1 192.168.39.121 localhost minikube no-preload-572602]
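
provision.go:117 re-issues the machine's server certificate with the VM's addresses and names as subject alternative names (SANs). A compact sketch of producing a SAN-bearing server certificate with Go's crypto/x509 follows; it is self-signed for brevity, whereas minikube signs against the profile's CA key, and the validity period simply reuses the 26280h CertExpiration value from the config above.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-572602"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs copied from the log line above: IPs plus host names.
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.121")},
            DNSNames:    []string{"localhost", "minikube", "no-preload-572602"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
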
	I0416 00:59:40.000129   61500 provision.go:177] copyRemoteCerts
	I0416 00:59:40.000200   61500 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 00:59:40.000236   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:59:40.002726   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.003028   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:40.003057   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.003168   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 00:59:40.003351   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:40.003471   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 00:59:40.003577   61500 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/no-preload-572602/id_rsa Username:docker}
	I0416 00:59:40.087468   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0416 00:59:40.115336   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0416 00:59:40.142695   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0416 00:59:40.169631   61500 provision.go:87] duration metric: took 247.759459ms to configureAuth
	I0416 00:59:40.169657   61500 buildroot.go:189] setting minikube options for container-runtime
	I0416 00:59:40.169824   61500 config.go:182] Loaded profile config "no-preload-572602": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0416 00:59:40.169906   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:59:40.172164   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.172503   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:40.172531   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.172689   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 00:59:40.172875   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:40.173033   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:40.173182   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 00:59:40.173311   61500 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:40.173465   61500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I0416 00:59:40.173480   61500 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0416 00:59:40.437143   61500 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0416 00:59:40.437182   61500 machine.go:97] duration metric: took 867.868152ms to provisionDockerMachine
	I0416 00:59:40.437194   61500 start.go:293] postStartSetup for "no-preload-572602" (driver="kvm2")
	I0416 00:59:40.437211   61500 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 00:59:40.437233   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 00:59:40.437536   61500 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 00:59:40.437564   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:59:40.440246   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.440596   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:40.440637   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.440759   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 00:59:40.440981   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:40.441186   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 00:59:40.441319   61500 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/no-preload-572602/id_rsa Username:docker}
	I0416 00:59:40.524157   61500 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 00:59:40.528556   61500 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 00:59:40.528580   61500 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/addons for local assets ...
	I0416 00:59:40.528647   61500 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/files for local assets ...
	I0416 00:59:40.528756   61500 filesync.go:149] local asset: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem -> 148972.pem in /etc/ssl/certs
	I0416 00:59:40.528877   61500 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 00:59:40.538275   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /etc/ssl/certs/148972.pem (1708 bytes)
	I0416 00:59:40.562693   61500 start.go:296] duration metric: took 125.48438ms for postStartSetup
	I0416 00:59:40.562728   61500 fix.go:56] duration metric: took 19.484586221s for fixHost
	I0416 00:59:40.562746   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:59:40.565410   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.565717   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:40.565756   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.565920   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 00:59:40.566103   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:40.566269   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:40.566438   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 00:59:40.566587   61500 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:40.566738   61500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I0416 00:59:40.566749   61500 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 00:59:40.669778   61500 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713229180.641382554
	
	I0416 00:59:40.669802   61500 fix.go:216] guest clock: 1713229180.641382554
	I0416 00:59:40.669811   61500 fix.go:229] Guest: 2024-04-16 00:59:40.641382554 +0000 UTC Remote: 2024-04-16 00:59:40.56273146 +0000 UTC m=+293.069651959 (delta=78.651094ms)
	I0416 00:59:40.669839   61500 fix.go:200] guest clock delta is within tolerance: 78.651094ms
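
fix.go compares the guest's clock (read over SSH with date) against the host's and only resynchronizes when the difference exceeds a tolerance; here the ~79ms delta passes. A minimal sketch of that check using the two timestamps from the log; the one-second tolerance is an assumed illustrative threshold, not necessarily minikube's exact value.

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Guest time as reported by `date +%s.%N` inside the VM (from the log).
        guest := time.Unix(1713229180, 641382554)
        // Host-side timestamp taken right after the SSH command returned.
        remote := time.Date(2024, 4, 16, 0, 59, 40, 562731460, time.UTC)

        delta := guest.Sub(remote)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = time.Second // illustrative threshold only
        if delta <= tolerance {
            fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
        } else {
            fmt.Printf("guest clock delta %v exceeds %v, would resync\n", delta, tolerance)
        }
    }
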
	I0416 00:59:40.669857   61500 start.go:83] releasing machines lock for "no-preload-572602", held for 19.591740017s
	I0416 00:59:40.669883   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 00:59:40.670163   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetIP
	I0416 00:59:40.672800   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.673187   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:40.673234   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.673386   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 00:59:40.673841   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 00:59:40.673993   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 00:59:40.674067   61500 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 00:59:40.674115   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:59:40.674155   61500 ssh_runner.go:195] Run: cat /version.json
	I0416 00:59:40.674174   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:59:40.676617   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.676776   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.677006   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:40.677030   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.677126   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 00:59:40.677277   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:40.677299   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.677336   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:40.677499   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 00:59:40.677511   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 00:59:40.677635   61500 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/no-preload-572602/id_rsa Username:docker}
	I0416 00:59:40.677768   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:40.678072   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 00:59:40.678224   61500 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/no-preload-572602/id_rsa Username:docker}
	I0416 00:59:40.787049   61500 ssh_runner.go:195] Run: systemctl --version
	I0416 00:59:40.793568   61500 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0416 00:59:40.941445   61500 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 00:59:40.949062   61500 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 00:59:40.949177   61500 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 00:59:40.966425   61500 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 00:59:40.966454   61500 start.go:494] detecting cgroup driver to use...
	I0416 00:59:40.966525   61500 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 00:59:40.985126   61500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 00:59:40.999931   61500 docker.go:217] disabling cri-docker service (if available) ...
	I0416 00:59:41.000004   61500 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 00:59:41.015597   61500 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 00:59:41.030610   61500 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 00:59:41.151240   61500 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 00:59:41.312384   61500 docker.go:233] disabling docker service ...
	I0416 00:59:41.312464   61500 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 00:59:41.329263   61500 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 00:59:41.345192   61500 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 00:59:41.463330   61500 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 00:59:41.595259   61500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0416 00:59:41.610495   61500 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 00:59:41.632527   61500 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0416 00:59:41.632580   61500 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:59:41.644625   61500 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 00:59:41.644723   61500 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:59:41.656056   61500 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:59:41.667069   61500 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:59:41.682783   61500 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 00:59:41.694760   61500 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:59:41.712505   61500 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:59:41.737338   61500 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
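
The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the registry.k8s.io/pause:3.9 pause image, the cgroupfs cgroup manager with conmon in the pod cgroup, and an unprivileged-port sysctl. The same rewrites expressed as a small Go text transformation, offered only as a sketch: the regular expressions paraphrase the sed expressions and the sample input excerpt is invented.

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // A tiny invented excerpt of 02-crio.conf before the rewrites.
        conf := `[crio.image]
    pause_image = "registry.k8s.io/pause:3.8"
    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "system.slice"
    `
        // pause_image -> registry.k8s.io/pause:3.9 (crio.go:59 in the log)
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
        // cgroup_manager -> cgroupfs, conmon_cgroup -> pod (crio.go:70)
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        conf = regexp.MustCompile(`(?m)^\s*conmon_cgroup = .*$`).
            ReplaceAllString(conf, `conmon_cgroup = "pod"`)

        fmt.Print(conf)
    }
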
	I0416 00:59:41.747518   61500 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 00:59:41.756586   61500 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0416 00:59:41.756656   61500 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0416 00:59:41.769230   61500 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
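
Because /proc/sys/net/bridge/bridge-nf-call-iptables does not exist until the br_netfilter module is loaded, the sysctl probe fails, the module is loaded, and IPv4 forwarding is switched on. A rough Go sketch of that fallback sequence (command strings taken from the log; error handling simplified, and it needs root to actually run):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // run executes a command through sudo and returns an error that includes
    // the combined output, loosely mirroring ssh_runner's behaviour.
    func run(args ...string) error {
        out, err := exec.Command("sudo", args...).CombinedOutput()
        if err != nil {
            return fmt.Errorf("%v: %w: %s", args, err, out)
        }
        return nil
    }

    func main() {
        // The sysctl probe fails if br_netfilter has never been loaded...
        if err := run("sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
            fmt.Println("couldn't verify netfilter, loading br_netfilter:", err)
            // ...so load the module, after which the sysctl becomes available.
            if err := run("modprobe", "br_netfilter"); err != nil {
                panic(err)
            }
        }
        // Finally make sure the kernel forwards IPv4 traffic between interfaces.
        if err := run("sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
            panic(err)
        }
    }
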
	I0416 00:59:41.778424   61500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 00:59:41.894135   61500 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0416 00:59:42.039732   61500 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0416 00:59:42.039812   61500 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0416 00:59:42.044505   61500 start.go:562] Will wait 60s for crictl version
	I0416 00:59:42.044551   61500 ssh_runner.go:195] Run: which crictl
	I0416 00:59:42.049632   61500 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 00:59:42.106886   61500 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0416 00:59:42.106981   61500 ssh_runner.go:195] Run: crio --version
	I0416 00:59:42.137092   61500 ssh_runner.go:195] Run: crio --version
	I0416 00:59:42.170036   61500 out.go:177] * Preparing Kubernetes v1.30.0-rc.2 on CRI-O 1.29.1 ...
	I0416 00:59:42.171395   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetIP
	I0416 00:59:42.174790   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:42.175217   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:42.175250   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:42.175506   61500 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0416 00:59:42.180987   61500 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 00:59:42.198472   61500 kubeadm.go:877] updating cluster {Name:no-preload-572602 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:no-preload-572602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 00:59:42.198595   61500 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime crio
	I0416 00:59:42.198639   61500 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 00:59:42.236057   61500 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0-rc.2". assuming images are not preloaded.
	I0416 00:59:42.236084   61500 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0-rc.2 registry.k8s.io/kube-controller-manager:v1.30.0-rc.2 registry.k8s.io/kube-scheduler:v1.30.0-rc.2 registry.k8s.io/kube-proxy:v1.30.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0416 00:59:42.236146   61500 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 00:59:42.236166   61500 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0416 00:59:42.236180   61500 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0-rc.2
	I0416 00:59:42.236182   61500 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0416 00:59:42.236212   61500 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0-rc.2
	I0416 00:59:42.236238   61500 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0416 00:59:42.236287   61500 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.2
	I0416 00:59:42.236164   61500 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0-rc.2
	I0416 00:59:42.237740   61500 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0416 00:59:42.237756   61500 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0416 00:59:42.237763   61500 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0-rc.2
	I0416 00:59:42.237779   61500 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0-rc.2
	I0416 00:59:42.237740   61500 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0416 00:59:42.237848   61500 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.2
	I0416 00:59:42.237847   61500 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 00:59:42.238087   61500 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0-rc.2
	I0416 00:59:42.410682   61500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0-rc.2
	I0416 00:59:42.445824   61500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0416 00:59:42.446874   61500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0416 00:59:42.448854   61500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0-rc.2
	I0416 00:59:42.449450   61500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0416 00:59:42.452121   61500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0-rc.2
	I0416 00:59:42.458966   61500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0-rc.2
	I0416 00:59:42.480556   61500 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0-rc.2" does not exist at hash "461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6" in container runtime
	I0416 00:59:42.480608   61500 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0-rc.2
	I0416 00:59:42.480670   61500 ssh_runner.go:195] Run: which crictl
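The cache_images.go:116 lines above report each image as "needs transfer" when `sudo podman image inspect --format {{.Id}}` either fails (image absent) or returns an ID that does not match the expected digest. The following is a rough, hypothetical sketch of that kind of check; the helper name and the local exec call are illustrative only and are not minikube's actual implementation, which runs the command over SSH inside the VM.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// needsTransfer reports whether the image stored in the container runtime
// differs from the ID the caller expects; a failed inspect (image absent)
// also counts as "needs transfer". Illustrative sketch, not minikube code.
func needsTransfer(image, wantID string) bool {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err != nil {
		return true // inspect fails when the image is not present
	}
	return strings.TrimSpace(string(out)) != wantID
}

func main() {
	// Image name and expected hash taken from the log lines above.
	img := "registry.k8s.io/kube-scheduler:v1.30.0-rc.2"
	want := "461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6"
	fmt.Printf("%s needs transfer: %v\n", img, needsTransfer(img, want))
}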
	I0416 00:59:42.176660   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:42.177053   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:42.177084   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:42.177031   63071 retry.go:31] will retry after 268.951362ms: waiting for machine to come up
	I0416 00:59:42.447724   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:42.448132   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:42.448159   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:42.448097   63071 retry.go:31] will retry after 293.793417ms: waiting for machine to come up
	I0416 00:59:42.743375   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:42.743845   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:42.743874   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:42.743801   63071 retry.go:31] will retry after 494.163372ms: waiting for machine to come up
	I0416 00:59:43.239314   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:43.239761   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:43.239790   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:43.239708   63071 retry.go:31] will retry after 698.851999ms: waiting for machine to come up
	I0416 00:59:43.939998   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:43.940577   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:43.940607   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:43.940535   63071 retry.go:31] will retry after 764.693004ms: waiting for machine to come up
	I0416 00:59:44.706335   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:44.706673   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:44.706724   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:44.706626   63071 retry.go:31] will retry after 874.082115ms: waiting for machine to come up
	I0416 00:59:45.581896   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:45.582331   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:45.582361   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:45.582280   63071 retry.go:31] will retry after 966.259345ms: waiting for machine to come up
	I0416 00:59:46.550671   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:46.551111   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:46.551140   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:46.551062   63071 retry.go:31] will retry after 1.191034468s: waiting for machine to come up
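The interleaved 62139 lines show the old-k8s-version-800769 VM start polling libvirt for a DHCP lease and backing off between attempts ("retry.go:31] will retry after ...: waiting for machine to come up"). A minimal sketch of such a retry-with-growing-backoff loop is below; waitForIP, lookupIP and the backoff constants are assumptions for illustration, not libmachine's real API.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookupIP until it returns an address or the deadline
// passes, sleeping a little longer (with jitter) after each failed attempt,
// mirroring the "will retry after ..." lines in the log above.
func waitForIP(lookupIP func() (string, error), deadline time.Duration) (string, error) {
	start := time.Now()
	backoff := 200 * time.Millisecond
	for time.Since(start) < deadline {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		wait := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		backoff = backoff * 3 / 2 // grow the delay between attempts
	}
	return "", errors.New("machine did not obtain an IP address in time")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("unable to find current IP address of domain")
		}
		return "192.168.83.98", nil // address eventually leased in the log above
	}, 30*time.Second)
	fmt.Println(ip, err)
}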
	I0416 00:59:42.583284   61500 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0416 00:59:42.583332   61500 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0416 00:59:42.583377   61500 ssh_runner.go:195] Run: which crictl
	I0416 00:59:42.724785   61500 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0-rc.2" does not exist at hash "ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b" in container runtime
	I0416 00:59:42.724827   61500 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.2
	I0416 00:59:42.724878   61500 ssh_runner.go:195] Run: which crictl
	I0416 00:59:42.724899   61500 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0416 00:59:42.724938   61500 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0416 00:59:42.724938   61500 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0-rc.2" does not exist at hash "35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e" in container runtime
	I0416 00:59:42.724964   61500 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0-rc.2
	I0416 00:59:42.724979   61500 ssh_runner.go:195] Run: which crictl
	I0416 00:59:42.724993   61500 ssh_runner.go:195] Run: which crictl
	I0416 00:59:42.725019   61500 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0-rc.2" does not exist at hash "65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1" in container runtime
	I0416 00:59:42.725051   61500 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0-rc.2
	I0416 00:59:42.725063   61500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0-rc.2
	I0416 00:59:42.725088   61500 ssh_runner.go:195] Run: which crictl
	I0416 00:59:42.725102   61500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0416 00:59:42.739346   61500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0416 00:59:42.739764   61500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0-rc.2
	I0416 00:59:42.787888   61500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0-rc.2
	I0416 00:59:42.787977   61500 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.2
	I0416 00:59:42.788024   61500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0-rc.2
	I0416 00:59:42.788084   61500 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.2
	I0416 00:59:42.815167   61500 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0416 00:59:42.815274   61500 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0416 00:59:42.845627   61500 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0416 00:59:42.845741   61500 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0416 00:59:42.848065   61500 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.2
	I0416 00:59:42.848134   61500 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.2
	I0416 00:59:42.880543   61500 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.2
	I0416 00:59:42.880557   61500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.2 (exists)
	I0416 00:59:42.880575   61500 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.2
	I0416 00:59:42.880628   61500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.2
	I0416 00:59:42.880648   61500 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.2
	I0416 00:59:42.907207   61500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0416 00:59:42.907245   61500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0416 00:59:42.907269   61500 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.2
	I0416 00:59:42.907295   61500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.2 (exists)
	I0416 00:59:42.907334   61500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.2 (exists)
	I0416 00:59:42.907350   61500 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0-rc.2
	I0416 00:59:43.138705   61500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 00:59:44.951278   61500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.2: (2.07061835s)
	I0416 00:59:44.951295   61500 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0-rc.2: (2.04392036s)
	I0416 00:59:44.951348   61500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0-rc.2 (exists)
	I0416 00:59:44.951309   61500 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.2 from cache
	I0416 00:59:44.951364   61500 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.812619758s)
	I0416 00:59:44.951410   61500 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0416 00:59:44.951448   61500 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 00:59:44.951374   61500 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0416 00:59:44.951506   61500 ssh_runner.go:195] Run: which crictl
	I0416 00:59:44.951508   61500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0416 00:59:47.744187   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:47.744683   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:47.744712   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:47.744637   63071 retry.go:31] will retry after 2.263605663s: waiting for machine to come up
	I0416 00:59:50.011136   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:50.011605   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:50.011632   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:50.011566   63071 retry.go:31] will retry after 2.648982849s: waiting for machine to come up
	I0416 00:59:48.656623   61500 ssh_runner.go:235] Completed: which crictl: (3.705085257s)
	I0416 00:59:48.656705   61500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 00:59:48.656715   61500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (3.705109475s)
	I0416 00:59:48.656743   61500 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0416 00:59:48.656769   61500 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0416 00:59:48.656798   61500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0416 00:59:50.560030   61500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.903209359s)
	I0416 00:59:50.560071   61500 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0416 00:59:50.560085   61500 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.90335887s)
	I0416 00:59:50.560096   61500 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.2
	I0416 00:59:50.560148   61500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.2
	I0416 00:59:50.560151   61500 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0416 00:59:50.560309   61500 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0416 00:59:52.662443   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:52.662852   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:52.662883   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:52.662815   63071 retry.go:31] will retry after 2.183508059s: waiting for machine to come up
	I0416 00:59:54.849225   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:54.849701   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:54.849734   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:54.849649   63071 retry.go:31] will retry after 3.201585234s: waiting for machine to come up
	I0416 00:59:52.739620   61500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.2: (2.179436189s)
	I0416 00:59:52.739658   61500 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.2 from cache
	I0416 00:59:52.739688   61500 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.2
	I0416 00:59:52.739697   61500 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.179365348s)
	I0416 00:59:52.739724   61500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0416 00:59:52.739747   61500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.2
	I0416 00:59:55.098350   61500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.2: (2.358579586s)
	I0416 00:59:55.098381   61500 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.2 from cache
	I0416 00:59:55.098408   61500 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0-rc.2
	I0416 00:59:55.098454   61500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.2
	I0416 00:59:57.166586   61500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.2: (2.068105529s)
	I0416 00:59:57.166615   61500 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.2 from cache
	I0416 00:59:57.166644   61500 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0416 00:59:57.166697   61500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
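The 61500 lines above walk each missing image through the same sequence: `stat` the cached tarball under /var/lib/minikube/images (skipping the copy when it already exists), then `sudo podman load -i <tarball>`, one image at a time. Below is a simplified local sketch of that loop, assuming the tarballs are already present; in minikube these commands run over SSH inside the VM, and the function and path names here are illustrative.

package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
)

// loadCachedImages checks that each cached tarball exists and then loads it
// into the runtime with `podman load -i`, echoing the order of operations
// seen in the log above. Illustrative sketch only.
func loadCachedImages(imageDir string, tarballs []string) error {
	for _, name := range tarballs {
		path := filepath.Join(imageDir, name)
		if err := exec.Command("stat", path).Run(); err != nil {
			return fmt.Errorf("cached image %s is missing: %w", path, err)
		}
		fmt.Println("Loading image:", path)
		if out, err := exec.Command("sudo", "podman", "load", "-i", path).CombinedOutput(); err != nil {
			return fmt.Errorf("podman load %s: %v\n%s", path, err, out)
		}
	}
	return nil
}

func main() {
	err := loadCachedImages("/var/lib/minikube/images", []string{
		"kube-scheduler_v1.30.0-rc.2",
		"etcd_3.5.12-0",
		"coredns_v1.11.1",
	})
	fmt.Println("done:", err)
}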
	I0416 00:59:59.394339   62747 start.go:364] duration metric: took 1m16.499681915s to acquireMachinesLock for "embed-certs-617092"
	I0416 00:59:59.394389   62747 start.go:96] Skipping create...Using existing machine configuration
	I0416 00:59:59.394412   62747 fix.go:54] fixHost starting: 
	I0416 00:59:59.394834   62747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:59:59.394896   62747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:59:59.414712   62747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38637
	I0416 00:59:59.415464   62747 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:59:59.416123   62747 main.go:141] libmachine: Using API Version  1
	I0416 00:59:59.416150   62747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:59:59.416436   62747 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:59:59.416623   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 00:59:59.416786   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetState
	I0416 00:59:59.418413   62747 fix.go:112] recreateIfNeeded on embed-certs-617092: state=Stopped err=<nil>
	I0416 00:59:59.418449   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	W0416 00:59:59.418609   62747 fix.go:138] unexpected machine state, will restart: <nil>
	I0416 00:59:59.420560   62747 out.go:177] * Restarting existing kvm2 VM for "embed-certs-617092" ...
	I0416 00:59:58.052613   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.053048   62139 main.go:141] libmachine: (old-k8s-version-800769) Found IP for machine: 192.168.83.98
	I0416 00:59:58.053073   62139 main.go:141] libmachine: (old-k8s-version-800769) Reserving static IP address...
	I0416 00:59:58.053089   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has current primary IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.053517   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "old-k8s-version-800769", mac: "52:54:00:a1:ad:da", ip: "192.168.83.98"} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.053549   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | skip adding static IP to network mk-old-k8s-version-800769 - found existing host DHCP lease matching {name: "old-k8s-version-800769", mac: "52:54:00:a1:ad:da", ip: "192.168.83.98"}
	I0416 00:59:58.053569   62139 main.go:141] libmachine: (old-k8s-version-800769) Reserved static IP address: 192.168.83.98
	I0416 00:59:58.053587   62139 main.go:141] libmachine: (old-k8s-version-800769) Waiting for SSH to be available...
	I0416 00:59:58.053602   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | Getting to WaitForSSH function...
	I0416 00:59:58.055598   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.055907   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.055941   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.056038   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | Using SSH client type: external
	I0416 00:59:58.056088   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | Using SSH private key: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769/id_rsa (-rw-------)
	I0416 00:59:58.056132   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.98 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0416 00:59:58.056149   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | About to run SSH command:
	I0416 00:59:58.056162   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | exit 0
	I0416 00:59:58.185675   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | SSH cmd err, output: <nil>: 
	I0416 00:59:58.186055   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetConfigRaw
	I0416 00:59:58.186802   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetIP
	I0416 00:59:58.189772   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.190219   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.190257   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.190448   62139 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/config.json ...
	I0416 00:59:58.190666   62139 machine.go:94] provisionDockerMachine start ...
	I0416 00:59:58.190685   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	I0416 00:59:58.190902   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:58.193570   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.193954   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.193982   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.194139   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:58.194337   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.194492   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.194636   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:58.194786   62139 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:58.195041   62139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.83.98 22 <nil> <nil>}
	I0416 00:59:58.195056   62139 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 00:59:58.321824   62139 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 00:59:58.321857   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetMachineName
	I0416 00:59:58.322146   62139 buildroot.go:166] provisioning hostname "old-k8s-version-800769"
	I0416 00:59:58.322175   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetMachineName
	I0416 00:59:58.322381   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:58.324941   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.325288   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.325316   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.325423   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:58.325613   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.325776   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.325936   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:58.326109   62139 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:58.326322   62139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.83.98 22 <nil> <nil>}
	I0416 00:59:58.326339   62139 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-800769 && echo "old-k8s-version-800769" | sudo tee /etc/hostname
	I0416 00:59:58.455194   62139 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-800769
	
	I0416 00:59:58.455236   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:58.458021   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.458423   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.458458   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.458662   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:58.458848   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.459013   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.459162   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:58.459353   62139 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:58.459507   62139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.83.98 22 <nil> <nil>}
	I0416 00:59:58.459524   62139 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-800769' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-800769/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-800769' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 00:59:58.587318   62139 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 00:59:58.587351   62139 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18647-7542/.minikube CaCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18647-7542/.minikube}
	I0416 00:59:58.587391   62139 buildroot.go:174] setting up certificates
	I0416 00:59:58.587400   62139 provision.go:84] configureAuth start
	I0416 00:59:58.587413   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetMachineName
	I0416 00:59:58.587686   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetIP
	I0416 00:59:58.590415   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.590739   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.590778   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.590880   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:58.593282   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.593728   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.593759   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.593931   62139 provision.go:143] copyHostCerts
	I0416 00:59:58.593988   62139 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem, removing ...
	I0416 00:59:58.594007   62139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem
	I0416 00:59:58.594079   62139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem (1082 bytes)
	I0416 00:59:58.594213   62139 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem, removing ...
	I0416 00:59:58.594222   62139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem
	I0416 00:59:58.594263   62139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem (1123 bytes)
	I0416 00:59:58.594372   62139 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem, removing ...
	I0416 00:59:58.594383   62139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem
	I0416 00:59:58.594408   62139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem (1675 bytes)
	I0416 00:59:58.594470   62139 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-800769 san=[127.0.0.1 192.168.83.98 localhost minikube old-k8s-version-800769]
	I0416 00:59:58.692127   62139 provision.go:177] copyRemoteCerts
	I0416 00:59:58.692197   62139 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 00:59:58.692232   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:58.694858   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.695231   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.695278   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.695507   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:58.695693   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.695852   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:58.695994   62139 sshutil.go:53] new ssh client: &{IP:192.168.83.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769/id_rsa Username:docker}
	I0416 00:59:58.783458   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0416 00:59:58.811124   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0416 00:59:58.836495   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0416 00:59:58.862044   62139 provision.go:87] duration metric: took 274.632117ms to configureAuth
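configureAuth above regenerates the machine's server certificate with the SANs listed at provision.go:117 (127.0.0.1, 192.168.83.98, localhost, minikube, old-k8s-version-800769) and copies it to /etc/docker on the VM. A minimal sketch of issuing a certificate with those SANs is below; it is self-signed for brevity and the expiry constant is only a typical profile value, so treat it as an illustration rather than minikube's certificate code.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-800769"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // assumed CertExpiration, as in the profile dump above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the provision.go:117 line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.83.98")},
		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-800769"},
	}
	// Self-signed here for brevity; minikube signs with its CA key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	fmt.Println(len(der), err)
}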
	I0416 00:59:58.862068   62139 buildroot.go:189] setting minikube options for container-runtime
	I0416 00:59:58.862278   62139 config.go:182] Loaded profile config "old-k8s-version-800769": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0416 00:59:58.862361   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:58.865352   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.865795   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.865829   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.866043   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:58.866228   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.866435   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.866625   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:58.866805   62139 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:58.867008   62139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.83.98 22 <nil> <nil>}
	I0416 00:59:58.867026   62139 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0416 00:59:59.143874   62139 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0416 00:59:59.143900   62139 machine.go:97] duration metric: took 953.218972ms to provisionDockerMachine
	I0416 00:59:59.143914   62139 start.go:293] postStartSetup for "old-k8s-version-800769" (driver="kvm2")
	I0416 00:59:59.143927   62139 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 00:59:59.143972   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	I0416 00:59:59.144277   62139 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 00:59:59.144302   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:59.147021   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.147355   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:59.147385   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.147649   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:59.147871   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:59.148036   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:59.148174   62139 sshutil.go:53] new ssh client: &{IP:192.168.83.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769/id_rsa Username:docker}
	I0416 00:59:59.236981   62139 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 00:59:59.241388   62139 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 00:59:59.241411   62139 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/addons for local assets ...
	I0416 00:59:59.241469   62139 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/files for local assets ...
	I0416 00:59:59.241534   62139 filesync.go:149] local asset: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem -> 148972.pem in /etc/ssl/certs
	I0416 00:59:59.241619   62139 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 00:59:59.251688   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /etc/ssl/certs/148972.pem (1708 bytes)
	I0416 00:59:59.275189   62139 start.go:296] duration metric: took 131.262042ms for postStartSetup
	I0416 00:59:59.275227   62139 fix.go:56] duration metric: took 18.605201288s for fixHost
	I0416 00:59:59.275250   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:59.277804   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.278153   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:59.278186   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.278341   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:59.278581   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:59.278741   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:59.278908   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:59.279068   62139 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:59.279233   62139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.83.98 22 <nil> <nil>}
	I0416 00:59:59.279243   62139 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 00:59:59.394108   62139 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713229199.360202150
	
	I0416 00:59:59.394141   62139 fix.go:216] guest clock: 1713229199.360202150
	I0416 00:59:59.394152   62139 fix.go:229] Guest: 2024-04-16 00:59:59.36020215 +0000 UTC Remote: 2024-04-16 00:59:59.27523174 +0000 UTC m=+217.222314955 (delta=84.97041ms)
	I0416 00:59:59.394211   62139 fix.go:200] guest clock delta is within tolerance: 84.97041ms
	I0416 00:59:59.394218   62139 start.go:83] releasing machines lock for "old-k8s-version-800769", held for 18.724230851s
	I0416 00:59:59.394252   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	I0416 00:59:59.394554   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetIP
	I0416 00:59:59.397241   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.397670   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:59.397703   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.397897   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	I0416 00:59:59.398460   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	I0416 00:59:59.398650   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	I0416 00:59:59.398740   62139 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 00:59:59.398782   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:59.399049   62139 ssh_runner.go:195] Run: cat /version.json
	I0416 00:59:59.399072   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:59.401397   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.401656   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.401802   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:59.401825   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.401964   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:59.402017   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.402089   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:59.402173   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:59.402248   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:59.402320   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:59.402376   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:59.402430   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:59.402577   62139 sshutil.go:53] new ssh client: &{IP:192.168.83.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769/id_rsa Username:docker}
	I0416 00:59:59.402638   62139 sshutil.go:53] new ssh client: &{IP:192.168.83.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769/id_rsa Username:docker}
	I0416 00:59:59.481834   62139 ssh_runner.go:195] Run: systemctl --version
	I0416 00:59:59.516372   62139 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0416 00:59:59.666722   62139 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 00:59:59.674165   62139 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 00:59:59.674226   62139 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 00:59:59.695545   62139 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 00:59:59.695573   62139 start.go:494] detecting cgroup driver to use...
	I0416 00:59:59.695646   62139 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 00:59:59.715091   62139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 00:59:59.732004   62139 docker.go:217] disabling cri-docker service (if available) ...
	I0416 00:59:59.732060   62139 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 00:59:59.753217   62139 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 00:59:59.768513   62139 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 00:59:59.898693   62139 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 01:00:00.066535   62139 docker.go:233] disabling docker service ...
	I0416 01:00:00.066607   62139 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 01:00:00.084512   62139 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 01:00:00.097714   62139 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 01:00:00.232901   62139 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 01:00:00.378379   62139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0416 01:00:00.395191   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 01:00:00.416631   62139 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0416 01:00:00.416695   62139 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:00.428712   62139 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 01:00:00.428774   62139 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:00.442687   62139 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:00.454631   62139 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
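	The sed edits above only touch /etc/crio/crio.conf.d/02-crio.conf; a quick way to confirm the result by hand (a sketch, field names taken from the commands above):

	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	  # expected:
	  #   pause_image = "registry.k8s.io/pause:3.2"
	  #   cgroup_manager = "cgroupfs"
	  #   conmon_cgroup = "pod"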
	I0416 01:00:00.466151   62139 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 01:00:00.478459   62139 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 01:00:00.489957   62139 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0416 01:00:00.490035   62139 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0416 01:00:00.506087   62139 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
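	The status-255 sysctl above is expected: /proc/sys/net/bridge/* only exists once the br_netfilter module is loaded, which is why the next steps load the module and enable IP forwarding. A manual sketch of the same preparation:

	  sudo modprobe br_netfilter
	  echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward >/dev/null
	  sudo sysctl net.bridge.bridge-nf-call-iptables   # should now return a value instead of failing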
	I0416 01:00:00.518100   62139 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 01:00:00.676317   62139 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0416 01:00:00.869766   62139 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0416 01:00:00.869855   62139 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0416 01:00:00.875363   62139 start.go:562] Will wait 60s for crictl version
	I0416 01:00:00.875424   62139 ssh_runner.go:195] Run: which crictl
	I0416 01:00:00.880947   62139 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 01:00:00.924780   62139 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0416 01:00:00.924852   62139 ssh_runner.go:195] Run: crio --version
	I0416 01:00:00.958390   62139 ssh_runner.go:195] Run: crio --version
	I0416 01:00:00.993114   62139 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0416 01:00:00.994513   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetIP
	I0416 01:00:00.997571   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 01:00:00.998032   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 01:00:00.998065   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 01:00:00.998273   62139 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0416 01:00:01.002750   62139 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 01:00:01.015709   62139 kubeadm.go:877] updating cluster {Name:old-k8s-version-800769 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-800769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.98 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 01:00:01.015810   62139 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0416 01:00:01.015853   62139 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 01:00:01.063257   62139 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0416 01:00:01.063331   62139 ssh_runner.go:195] Run: which lz4
	I0416 01:00:01.067973   62139 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0416 01:00:01.072369   62139 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0416 01:00:01.072400   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0416 00:59:57.817013   61500 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0416 00:59:57.817060   61500 cache_images.go:123] Successfully loaded all cached images
	I0416 00:59:57.817073   61500 cache_images.go:92] duration metric: took 15.580967615s to LoadCachedImages
	I0416 00:59:57.817087   61500 kubeadm.go:928] updating node { 192.168.39.121 8443 v1.30.0-rc.2 crio true true} ...
	I0416 00:59:57.817241   61500 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-572602 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.121
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-rc.2 ClusterName:no-preload-572602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 00:59:57.817324   61500 ssh_runner.go:195] Run: crio config
	I0416 00:59:57.866116   61500 cni.go:84] Creating CNI manager for ""
	I0416 00:59:57.866140   61500 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 00:59:57.866154   61500 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 00:59:57.866189   61500 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.121 APIServerPort:8443 KubernetesVersion:v1.30.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-572602 NodeName:no-preload-572602 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.121"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.121 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 00:59:57.866325   61500 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.121
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-572602"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.121
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.121"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
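	The rendered kubeadm config can also be checked by hand before init runs; a sketch, assuming kubeadm >= v1.26 (v1.30.0-rc.2 qualifies) so the 'kubeadm config validate' subcommand is available, against the file scp'd a few lines below:

	  sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new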
	
	I0416 00:59:57.866390   61500 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-rc.2
	I0416 00:59:57.876619   61500 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 00:59:57.876689   61500 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0416 00:59:57.886472   61500 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0416 00:59:57.903172   61500 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0416 00:59:57.919531   61500 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0416 00:59:57.936394   61500 ssh_runner.go:195] Run: grep 192.168.39.121	control-plane.minikube.internal$ /etc/hosts
	I0416 00:59:57.940161   61500 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.121	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 00:59:57.951997   61500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 00:59:58.089553   61500 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 00:59:58.117870   61500 certs.go:68] Setting up /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602 for IP: 192.168.39.121
	I0416 00:59:58.117926   61500 certs.go:194] generating shared ca certs ...
	I0416 00:59:58.117949   61500 certs.go:226] acquiring lock for ca certs: {Name:mkcfa1570e683d94647c63485e1bbb8cf0788316 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 00:59:58.118136   61500 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key
	I0416 00:59:58.118199   61500 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key
	I0416 00:59:58.118216   61500 certs.go:256] generating profile certs ...
	I0416 00:59:58.118351   61500 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602/client.key
	I0416 00:59:58.118446   61500 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602/apiserver.key.a3b1330f
	I0416 00:59:58.118505   61500 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602/proxy-client.key
	I0416 00:59:58.118664   61500 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem (1338 bytes)
	W0416 00:59:58.118708   61500 certs.go:480] ignoring /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897_empty.pem, impossibly tiny 0 bytes
	I0416 00:59:58.118721   61500 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem (1679 bytes)
	I0416 00:59:58.118756   61500 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem (1082 bytes)
	I0416 00:59:58.118786   61500 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem (1123 bytes)
	I0416 00:59:58.118814   61500 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem (1675 bytes)
	I0416 00:59:58.118874   61500 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem (1708 bytes)
	I0416 00:59:58.119738   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 00:59:58.150797   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 00:59:58.181693   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 00:59:58.231332   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0416 00:59:58.276528   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0416 00:59:58.301000   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0416 00:59:58.326090   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 00:59:58.350254   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0416 00:59:58.377597   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem --> /usr/share/ca-certificates/14897.pem (1338 bytes)
	I0416 00:59:58.401548   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /usr/share/ca-certificates/148972.pem (1708 bytes)
	I0416 00:59:58.425237   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 00:59:58.449748   61500 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 00:59:58.468346   61500 ssh_runner.go:195] Run: openssl version
	I0416 00:59:58.474164   61500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14897.pem && ln -fs /usr/share/ca-certificates/14897.pem /etc/ssl/certs/14897.pem"
	I0416 00:59:58.485674   61500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14897.pem
	I0416 00:59:58.490136   61500 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 23:49 /usr/share/ca-certificates/14897.pem
	I0416 00:59:58.490203   61500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14897.pem
	I0416 00:59:58.495781   61500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14897.pem /etc/ssl/certs/51391683.0"
	I0416 00:59:58.507047   61500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148972.pem && ln -fs /usr/share/ca-certificates/148972.pem /etc/ssl/certs/148972.pem"
	I0416 00:59:58.518007   61500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148972.pem
	I0416 00:59:58.522317   61500 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 23:49 /usr/share/ca-certificates/148972.pem
	I0416 00:59:58.522364   61500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148972.pem
	I0416 00:59:58.527809   61500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148972.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 00:59:58.538579   61500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 00:59:58.549188   61500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 00:59:58.553688   61500 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0416 00:59:58.553732   61500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 00:59:58.559175   61500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
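	The hash-named symlinks above (51391683.0, 3ec20f2e.0, b5213941.0) are standard OpenSSL subject-hash lookups; the name comes straight from the preceding 'openssl x509 -hash' calls, e.g.:

	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941, hence /etc/ssl/certs/b5213941.0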
	I0416 00:59:58.570142   61500 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 00:59:58.574657   61500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0416 00:59:58.580560   61500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0416 00:59:58.586319   61500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0416 00:59:58.593938   61500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0416 00:59:58.599808   61500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0416 00:59:58.605583   61500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
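	Each '-checkend 86400' probe above exits 0 only if the certificate is still valid 86400 seconds (24 h) from now; a non-zero exit is what would prompt certificate renewal. The same check run interactively looks like:

	  if sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
	    echo "cert valid for at least another 24h"
	  else
	    echo "cert expires within 24h"
	  fi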
	I0416 00:59:58.611301   61500 kubeadm.go:391] StartCluster: {Name:no-preload-572602 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.0-rc.2 ClusterName:no-preload-572602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 00:59:58.611385   61500 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0416 00:59:58.611439   61500 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 00:59:58.655244   61500 cri.go:89] found id: ""
	I0416 00:59:58.655315   61500 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0416 00:59:58.667067   61500 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0416 00:59:58.667082   61500 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0416 00:59:58.667088   61500 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0416 00:59:58.667128   61500 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0416 00:59:58.678615   61500 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0416 00:59:58.680097   61500 kubeconfig.go:125] found "no-preload-572602" server: "https://192.168.39.121:8443"
	I0416 00:59:58.683135   61500 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0416 00:59:58.695291   61500 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.121
	I0416 00:59:58.695323   61500 kubeadm.go:1154] stopping kube-system containers ...
	I0416 00:59:58.695337   61500 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0416 00:59:58.695380   61500 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 00:59:58.731743   61500 cri.go:89] found id: ""
	I0416 00:59:58.731832   61500 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0416 00:59:58.748125   61500 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 00:59:58.757845   61500 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 00:59:58.757865   61500 kubeadm.go:156] found existing configuration files:
	
	I0416 00:59:58.757918   61500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 00:59:58.766993   61500 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 00:59:58.767036   61500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 00:59:58.776831   61500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 00:59:58.786420   61500 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 00:59:58.786467   61500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 00:59:58.796067   61500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 00:59:58.805385   61500 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 00:59:58.805511   61500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 00:59:58.815313   61500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 00:59:58.826551   61500 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 00:59:58.826603   61500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 00:59:58.836652   61500 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 00:59:58.848671   61500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 00:59:58.967511   61500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:00.416009   61500 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.44846758s)
	I0416 01:00:00.416041   61500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:00.657784   61500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:00.741694   61500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:00.876550   61500 api_server.go:52] waiting for apiserver process to appear ...
	I0416 01:00:00.876630   61500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:01.377586   61500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:01.877647   61500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:01.950167   61500 api_server.go:72] duration metric: took 1.073614574s to wait for apiserver process to appear ...
	I0416 01:00:01.950201   61500 api_server.go:88] waiting for apiserver healthz status ...
	I0416 01:00:01.950224   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 01:00:01.950854   61500 api_server.go:269] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
	I0416 01:00:02.450437   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 00:59:59.421878   62747 main.go:141] libmachine: (embed-certs-617092) Calling .Start
	I0416 00:59:59.422036   62747 main.go:141] libmachine: (embed-certs-617092) Ensuring networks are active...
	I0416 00:59:59.422646   62747 main.go:141] libmachine: (embed-certs-617092) Ensuring network default is active
	I0416 00:59:59.422931   62747 main.go:141] libmachine: (embed-certs-617092) Ensuring network mk-embed-certs-617092 is active
	I0416 00:59:59.423360   62747 main.go:141] libmachine: (embed-certs-617092) Getting domain xml...
	I0416 00:59:59.424005   62747 main.go:141] libmachine: (embed-certs-617092) Creating domain...
	I0416 01:00:00.682582   62747 main.go:141] libmachine: (embed-certs-617092) Waiting to get IP...
	I0416 01:00:00.683684   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:00.684222   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:00.684277   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:00.684198   63257 retry.go:31] will retry after 196.582767ms: waiting for machine to come up
	I0416 01:00:00.882954   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:00.883544   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:00.883577   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:00.883482   63257 retry.go:31] will retry after 309.274692ms: waiting for machine to come up
	I0416 01:00:01.193848   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:01.194286   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:01.194325   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:01.194234   63257 retry.go:31] will retry after 379.332728ms: waiting for machine to come up
	I0416 01:00:01.574938   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:01.575371   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:01.575400   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:01.575318   63257 retry.go:31] will retry after 445.10423ms: waiting for machine to come up
	I0416 01:00:02.022081   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:02.022612   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:02.022636   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:02.022570   63257 retry.go:31] will retry after 692.025501ms: waiting for machine to come up
	I0416 01:00:02.716548   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:02.717032   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:02.717061   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:02.716992   63257 retry.go:31] will retry after 735.44304ms: waiting for machine to come up
	I0416 01:00:02.891638   62139 crio.go:462] duration metric: took 1.823700483s to copy over tarball
	I0416 01:00:02.891723   62139 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0416 01:00:06.137253   62139 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.245498092s)
	I0416 01:00:06.137283   62139 crio.go:469] duration metric: took 3.245614896s to extract the tarball
	I0416 01:00:06.137292   62139 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0416 01:00:06.181260   62139 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 01:00:06.224646   62139 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0416 01:00:06.224682   62139 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0416 01:00:06.224762   62139 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 01:00:06.224815   62139 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0416 01:00:06.224851   62139 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0416 01:00:06.224821   62139 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0416 01:00:06.224768   62139 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0416 01:00:06.224797   62139 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0416 01:00:06.225121   62139 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0416 01:00:06.224797   62139 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0416 01:00:06.226485   62139 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0416 01:00:06.226505   62139 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0416 01:00:06.226516   62139 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0416 01:00:06.226580   62139 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0416 01:00:06.226729   62139 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0416 01:00:06.227296   62139 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 01:00:06.227311   62139 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0416 01:00:06.227315   62139 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0416 01:00:06.397101   62139 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0416 01:00:06.431142   62139 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0416 01:00:06.433152   62139 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0416 01:00:06.433876   62139 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0416 01:00:06.434844   62139 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0416 01:00:06.441478   62139 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0416 01:00:06.441524   62139 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0416 01:00:06.441558   62139 ssh_runner.go:195] Run: which crictl
	I0416 01:00:06.450391   62139 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0416 01:00:06.506375   62139 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0416 01:00:06.540080   62139 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0416 01:00:06.540250   62139 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0416 01:00:06.540121   62139 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0416 01:00:06.540299   62139 ssh_runner.go:195] Run: which crictl
	I0416 01:00:06.540305   62139 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0416 01:00:06.540343   62139 ssh_runner.go:195] Run: which crictl
	I0416 01:00:06.613287   62139 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0416 01:00:06.613305   62139 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0416 01:00:06.613334   62139 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0416 01:00:06.613339   62139 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0416 01:00:06.613381   62139 ssh_runner.go:195] Run: which crictl
	I0416 01:00:06.613381   62139 ssh_runner.go:195] Run: which crictl
	I0416 01:00:06.613490   62139 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0416 01:00:06.613522   62139 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0416 01:00:06.613569   62139 ssh_runner.go:195] Run: which crictl
	I0416 01:00:06.613384   62139 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0416 01:00:06.613620   62139 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0416 01:00:06.613657   62139 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0416 01:00:06.613716   62139 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0416 01:00:06.613722   62139 ssh_runner.go:195] Run: which crictl
	I0416 01:00:06.613665   62139 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0416 01:00:06.619153   62139 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0416 01:00:06.638065   62139 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0416 01:00:06.734018   62139 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0416 01:00:06.734134   62139 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0416 01:00:06.749273   62139 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0416 01:00:06.750536   62139 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0416 01:00:06.750576   62139 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0416 01:00:06.750655   62139 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0416 01:00:06.750594   62139 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0416 01:00:06.790321   62139 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0416 01:00:06.803564   62139 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0416 01:00:07.060494   62139 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 01:00:05.541219   61500 api_server.go:279] https://192.168.39.121:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0416 01:00:05.541261   61500 api_server.go:103] status: https://192.168.39.121:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0416 01:00:05.541279   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 01:00:05.585252   61500 api_server.go:279] https://192.168.39.121:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0416 01:00:05.585284   61500 api_server.go:103] status: https://192.168.39.121:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0416 01:00:05.950871   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 01:00:05.970682   61500 api_server.go:279] https://192.168.39.121:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0416 01:00:05.970725   61500 api_server.go:103] status: https://192.168.39.121:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0416 01:00:06.450780   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 01:00:06.457855   61500 api_server.go:279] https://192.168.39.121:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0416 01:00:06.457888   61500 api_server.go:103] status: https://192.168.39.121:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0416 01:00:06.950519   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 01:00:06.955476   61500 api_server.go:279] https://192.168.39.121:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0416 01:00:06.955505   61500 api_server.go:103] status: https://192.168.39.121:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0416 01:00:07.451155   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 01:00:07.463138   61500 api_server.go:279] https://192.168.39.121:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0416 01:00:07.463172   61500 api_server.go:103] status: https://192.168.39.121:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0416 01:00:03.453566   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:03.454098   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:03.454131   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:03.454033   63257 retry.go:31] will retry after 838.732671ms: waiting for machine to come up
	I0416 01:00:04.294692   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:04.295209   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:04.295237   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:04.295158   63257 retry.go:31] will retry after 1.302969512s: waiting for machine to come up
	I0416 01:00:05.599886   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:05.600406   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:05.600435   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:05.600378   63257 retry.go:31] will retry after 1.199501225s: waiting for machine to come up
	I0416 01:00:06.801741   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:06.802134   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:06.802153   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:06.802107   63257 retry.go:31] will retry after 1.631018672s: waiting for machine to come up
	I0416 01:00:07.951263   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 01:00:07.961911   61500 api_server.go:279] https://192.168.39.121:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0416 01:00:07.961946   61500 api_server.go:103] status: https://192.168.39.121:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0416 01:00:08.450413   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 01:00:08.458651   61500 api_server.go:279] https://192.168.39.121:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0416 01:00:08.458683   61500 api_server.go:103] status: https://192.168.39.121:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0416 01:00:08.950297   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 01:00:08.955847   61500 api_server.go:279] https://192.168.39.121:8443/healthz returned 200:
	ok
	I0416 01:00:08.964393   61500 api_server.go:141] control plane version: v1.30.0-rc.2
	I0416 01:00:08.964422   61500 api_server.go:131] duration metric: took 7.01421218s to wait for apiserver health ...
	I0416 01:00:08.964432   61500 cni.go:84] Creating CNI manager for ""
	I0416 01:00:08.964445   61500 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 01:00:08.966249   61500 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
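The long healthz dumps above come from minikube polling https://192.168.39.121:8443/healthz roughly every half second until the apiserver stops returning 500 (the rbac/bootstrap-roles and apiservice-discovery-controller post-start hooks are still pending) and finally answers 200. A minimal sketch of such a poll, assuming a self-signed apiserver certificate and standing in for minikube's api_server.go rather than reproducing it, might look like:

// healthwait.go - a minimal sketch (not minikube's api_server.go) of polling
// an apiserver /healthz endpoint until it returns 200 OK.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The bootstrapping apiserver serves a self-signed cert, so skip
		// verification here; a real client would trust the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			// A 500 with "[-]poststarthook/... failed" lines means some
			// post-start hooks have not finished yet; keep polling.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.121:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}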
	I0416 01:00:07.207951   62139 cache_images.go:92] duration metric: took 983.249797ms to LoadCachedImages
	W0416 01:00:07.286619   62139 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0416 01:00:07.286654   62139 kubeadm.go:928] updating node { 192.168.83.98 8443 v1.20.0 crio true true} ...
	I0416 01:00:07.286815   62139 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-800769 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.98
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-800769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 01:00:07.286916   62139 ssh_runner.go:195] Run: crio config
	I0416 01:00:07.338016   62139 cni.go:84] Creating CNI manager for ""
	I0416 01:00:07.338038   62139 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 01:00:07.338049   62139 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 01:00:07.338072   62139 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.98 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-800769 NodeName:old-k8s-version-800769 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.98"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.98 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0416 01:00:07.338207   62139 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.98
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-800769"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.98
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.98"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
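The kubeadm config printed above is a multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is written out as /var/tmp/minikube/kubeadm.yaml.new a few lines below and later copied into place. Purely as an illustration, and assuming gopkg.in/yaml.v3 is available, the documents can be split and inspected like this; the struct and file names are invented for the example:

// kubeadmcfg.go - a minimal sketch showing how a multi-document kubeadm config
// like the one above can be split and inspected; illustrative only, not how
// minikube itself generates or validates the file.
package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

type typeMeta struct {
	APIVersion string `yaml:"apiVersion"`
	Kind       string `yaml:"kind"`
}

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path taken from the log above
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()
	dec := yaml.NewDecoder(f)
	for {
		var doc typeMeta
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break
			}
			fmt.Fprintln(os.Stderr, "decode:", err)
			return
		}
		// The config above carries InitConfiguration, ClusterConfiguration,
		// KubeletConfiguration and KubeProxyConfiguration documents.
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}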
	I0416 01:00:07.338273   62139 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0416 01:00:07.349347   62139 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 01:00:07.349432   62139 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0416 01:00:07.361389   62139 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0416 01:00:07.379714   62139 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 01:00:07.397953   62139 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0416 01:00:07.416901   62139 ssh_runner.go:195] Run: grep 192.168.83.98	control-plane.minikube.internal$ /etc/hosts
	I0416 01:00:07.420904   62139 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.98	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
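The bash one-liner above rewrites /etc/hosts so that exactly one line maps control-plane.minikube.internal to the node IP: it filters out any existing entry and appends a fresh one. A small Go sketch of the same idempotent update (illustrative only; the helper name is invented) could be:

// hostsentry.go - a minimal sketch of the idempotent /etc/hosts update the
// bash one-liner above performs: drop any line that already maps the host,
// then append the desired entry. The IP and hostname are taken from the log.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Keep every line that does not already map this hostname.
		if line != "" && !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Writing /etc/hosts needs root; a local test can point at a scratch copy.
	if err := ensureHostsEntry("/etc/hosts", "192.168.83.98", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}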
	I0416 01:00:07.436685   62139 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 01:00:07.567945   62139 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 01:00:07.587829   62139 certs.go:68] Setting up /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769 for IP: 192.168.83.98
	I0416 01:00:07.587858   62139 certs.go:194] generating shared ca certs ...
	I0416 01:00:07.587880   62139 certs.go:226] acquiring lock for ca certs: {Name:mkcfa1570e683d94647c63485e1bbb8cf0788316 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:00:07.588087   62139 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key
	I0416 01:00:07.588155   62139 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key
	I0416 01:00:07.588171   62139 certs.go:256] generating profile certs ...
	I0416 01:00:07.606683   62139 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/client.key
	I0416 01:00:07.606823   62139 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/apiserver.key.efc35655
	I0416 01:00:07.606872   62139 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/proxy-client.key
	I0416 01:00:07.607040   62139 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem (1338 bytes)
	W0416 01:00:07.607087   62139 certs.go:480] ignoring /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897_empty.pem, impossibly tiny 0 bytes
	I0416 01:00:07.607114   62139 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem (1679 bytes)
	I0416 01:00:07.607172   62139 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem (1082 bytes)
	I0416 01:00:07.607204   62139 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem (1123 bytes)
	I0416 01:00:07.607234   62139 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem (1675 bytes)
	I0416 01:00:07.607283   62139 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem (1708 bytes)
	I0416 01:00:07.608127   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 01:00:07.658868   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 01:00:07.703378   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 01:00:07.743203   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0416 01:00:07.787335   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0416 01:00:07.823630   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0416 01:00:07.854198   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 01:00:07.881813   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0416 01:00:07.909698   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 01:00:07.935341   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem --> /usr/share/ca-certificates/14897.pem (1338 bytes)
	I0416 01:00:07.963102   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /usr/share/ca-certificates/148972.pem (1708 bytes)
	I0416 01:00:07.989657   62139 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 01:00:08.009203   62139 ssh_runner.go:195] Run: openssl version
	I0416 01:00:08.015677   62139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 01:00:08.027077   62139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:00:08.032096   62139 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:00:08.032179   62139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:00:08.038672   62139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 01:00:08.054256   62139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14897.pem && ln -fs /usr/share/ca-certificates/14897.pem /etc/ssl/certs/14897.pem"
	I0416 01:00:08.065287   62139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14897.pem
	I0416 01:00:08.069846   62139 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 23:49 /usr/share/ca-certificates/14897.pem
	I0416 01:00:08.069907   62139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14897.pem
	I0416 01:00:08.075899   62139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14897.pem /etc/ssl/certs/51391683.0"
	I0416 01:00:08.087272   62139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148972.pem && ln -fs /usr/share/ca-certificates/148972.pem /etc/ssl/certs/148972.pem"
	I0416 01:00:08.098494   62139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148972.pem
	I0416 01:00:08.103168   62139 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 23:49 /usr/share/ca-certificates/148972.pem
	I0416 01:00:08.103246   62139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148972.pem
	I0416 01:00:08.109202   62139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148972.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 01:00:08.120143   62139 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 01:00:08.125027   62139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0416 01:00:08.131716   62139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0416 01:00:08.138024   62139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0416 01:00:08.144291   62139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0416 01:00:08.150741   62139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0416 01:00:08.156931   62139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
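The series of `openssl x509 -noout -in <cert> -checkend 86400` runs above simply asks whether each certificate will still be valid 24 hours from now. The same check can be done natively with crypto/x509; this sketch is illustrative and the function name is invented:

// certexpiry.go - a minimal sketch of the check the "openssl x509 ... -checkend 86400"
// commands above perform: report whether a certificate expires within 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Same 86400-second window the log's openssl invocations use.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}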
	I0416 01:00:08.163147   62139 kubeadm.go:391] StartCluster: {Name:old-k8s-version-800769 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-800769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.98 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 01:00:08.163254   62139 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0416 01:00:08.163298   62139 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 01:00:08.201923   62139 cri.go:89] found id: ""
	I0416 01:00:08.202000   62139 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0416 01:00:08.212441   62139 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0416 01:00:08.212462   62139 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0416 01:00:08.212467   62139 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0416 01:00:08.212514   62139 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0416 01:00:08.222702   62139 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0416 01:00:08.223670   62139 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-800769" does not appear in /home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0416 01:00:08.224332   62139 kubeconfig.go:62] /home/jenkins/minikube-integration/18647-7542/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-800769" cluster setting kubeconfig missing "old-k8s-version-800769" context setting]
	I0416 01:00:08.225340   62139 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/kubeconfig: {Name:mkbb3b028de7d57df8335e83f6dfa1b0eacb2fb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:00:08.343775   62139 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0416 01:00:08.355942   62139 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.83.98
	I0416 01:00:08.355986   62139 kubeadm.go:1154] stopping kube-system containers ...
	I0416 01:00:08.356007   62139 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0416 01:00:08.356081   62139 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 01:00:08.398894   62139 cri.go:89] found id: ""
	I0416 01:00:08.398976   62139 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0416 01:00:08.416343   62139 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 01:00:08.426901   62139 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 01:00:08.426926   62139 kubeadm.go:156] found existing configuration files:
	
	I0416 01:00:08.426981   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 01:00:08.437870   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 01:00:08.437942   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 01:00:08.452256   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 01:00:08.466375   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 01:00:08.466447   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 01:00:08.477246   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 01:00:08.487547   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 01:00:08.487615   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 01:00:08.504171   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 01:00:08.515265   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 01:00:08.515332   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 01:00:08.525186   62139 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 01:00:08.535381   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:08.657456   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:09.504421   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:09.781478   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:09.950913   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:10.044772   62139 api_server.go:52] waiting for apiserver process to appear ...
	I0416 01:00:10.044871   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:10.545002   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:11.045664   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:11.545083   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:12.045593   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
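The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` runs above are a half-second poll loop waiting for the kube-apiserver process to appear after the kubeadm init phases. A minimal stand-in for that loop (sudo omitted; names invented for illustration) is:

// apiserverwait.go - a minimal sketch of the wait loop behind the repeated
// "pgrep -xnf kube-apiserver.*minikube.*" runs above: poll every 500ms until
// the kube-apiserver process exists.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 when at least one process matches the full command line.
		if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %v", timeout)
}

func main() {
	if err := waitForAPIServerProcess(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}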
	I0416 01:00:08.967643   61500 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0416 01:00:08.986743   61500 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0416 01:00:09.011229   61500 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 01:00:09.022810   61500 system_pods.go:59] 8 kube-system pods found
	I0416 01:00:09.022858   61500 system_pods.go:61] "coredns-7db6d8ff4d-xxlkb" [b1ec79ef-e16c-4feb-94ec-5dc85645867f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 01:00:09.022869   61500 system_pods.go:61] "etcd-no-preload-572602" [f29f3efe-bee4-4d8c-9d49-68008ad50a9d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0416 01:00:09.022881   61500 system_pods.go:61] "kube-apiserver-no-preload-572602" [dd740f94-bfd5-4043-9522-5b8a932690cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0416 01:00:09.022893   61500 system_pods.go:61] "kube-controller-manager-no-preload-572602" [2778e1a7-a7e3-4ad6-a265-552e78b6b195] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0416 01:00:09.022901   61500 system_pods.go:61] "kube-proxy-v9fmp" [70ab6236-c758-48eb-85a7-8f7721730a20] Running
	I0416 01:00:09.022908   61500 system_pods.go:61] "kube-scheduler-no-preload-572602" [bb8650bb-657e-49f1-9cee-4437879be44d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0416 01:00:09.022919   61500 system_pods.go:61] "metrics-server-569cc877fc-llsfr" [ad421803-6236-44df-a15d-c890a3a10dff] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 01:00:09.022925   61500 system_pods.go:61] "storage-provisioner" [ec2dd6e2-33db-4888-8945-9879821c92fc] Running
	I0416 01:00:09.022934   61500 system_pods.go:74] duration metric: took 11.661356ms to wait for pod list to return data ...
	I0416 01:00:09.022950   61500 node_conditions.go:102] verifying NodePressure condition ...
	I0416 01:00:09.027411   61500 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 01:00:09.027445   61500 node_conditions.go:123] node cpu capacity is 2
	I0416 01:00:09.027459   61500 node_conditions.go:105] duration metric: took 4.503043ms to run NodePressure ...
	I0416 01:00:09.027480   61500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:09.307796   61500 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0416 01:00:09.313534   61500 kubeadm.go:733] kubelet initialised
	I0416 01:00:09.313567   61500 kubeadm.go:734] duration metric: took 5.734401ms waiting for restarted kubelet to initialise ...
	I0416 01:00:09.313580   61500 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:00:09.320900   61500 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-xxlkb" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:09.327569   61500 pod_ready.go:97] node "no-preload-572602" hosting pod "coredns-7db6d8ff4d-xxlkb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-572602" has status "Ready":"False"
	I0416 01:00:09.327606   61500 pod_ready.go:81] duration metric: took 6.67541ms for pod "coredns-7db6d8ff4d-xxlkb" in "kube-system" namespace to be "Ready" ...
	E0416 01:00:09.327621   61500 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-572602" hosting pod "coredns-7db6d8ff4d-xxlkb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-572602" has status "Ready":"False"
	I0416 01:00:09.327633   61500 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:09.333714   61500 pod_ready.go:97] node "no-preload-572602" hosting pod "etcd-no-preload-572602" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-572602" has status "Ready":"False"
	I0416 01:00:09.333746   61500 pod_ready.go:81] duration metric: took 6.094825ms for pod "etcd-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	E0416 01:00:09.333759   61500 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-572602" hosting pod "etcd-no-preload-572602" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-572602" has status "Ready":"False"
	I0416 01:00:09.333768   61500 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:09.338980   61500 pod_ready.go:97] node "no-preload-572602" hosting pod "kube-apiserver-no-preload-572602" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-572602" has status "Ready":"False"
	I0416 01:00:09.339006   61500 pod_ready.go:81] duration metric: took 5.230122ms for pod "kube-apiserver-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	E0416 01:00:09.339017   61500 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-572602" hosting pod "kube-apiserver-no-preload-572602" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-572602" has status "Ready":"False"
	I0416 01:00:09.339033   61500 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:09.415418   61500 pod_ready.go:97] node "no-preload-572602" hosting pod "kube-controller-manager-no-preload-572602" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-572602" has status "Ready":"False"
	I0416 01:00:09.415450   61500 pod_ready.go:81] duration metric: took 76.40508ms for pod "kube-controller-manager-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	E0416 01:00:09.415462   61500 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-572602" hosting pod "kube-controller-manager-no-preload-572602" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-572602" has status "Ready":"False"
	I0416 01:00:09.415470   61500 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-v9fmp" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:09.815907   61500 pod_ready.go:92] pod "kube-proxy-v9fmp" in "kube-system" namespace has status "Ready":"True"
	I0416 01:00:09.815945   61500 pod_ready.go:81] duration metric: took 400.462786ms for pod "kube-proxy-v9fmp" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:09.815959   61500 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:11.824269   61500 pod_ready.go:102] pod "kube-scheduler-no-preload-572602" in "kube-system" namespace has status "Ready":"False"
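The pod_ready.go lines above wait for each system-critical pod to report the Ready condition, skipping pods whose node is not yet Ready. A bare-bones client-go sketch of the core check (not minikube's actual pod_ready.go; the kubeconfig path and pod name are copied from the log purely as placeholders) could be:

// podready.go - a minimal client-go sketch of the "waiting for pod ... to be Ready"
// checks above; it only shows the basic PodReady condition test.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18647-7542/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-scheduler-no-preload-572602", metav1.GetOptions{})
		if err == nil && podIsReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}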
	I0416 01:00:08.434523   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:08.435039   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:08.435067   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:08.434988   63257 retry.go:31] will retry after 2.819136125s: waiting for machine to come up
	I0416 01:00:11.256238   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:11.256704   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:11.256722   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:11.256664   63257 retry.go:31] will retry after 3.074881299s: waiting for machine to come up
	I0416 01:00:12.545696   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:13.045935   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:13.545810   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:14.045682   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:14.545524   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:15.045110   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:15.545792   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:16.045843   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:16.545684   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:17.045401   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:14.322436   61500 pod_ready.go:102] pod "kube-scheduler-no-preload-572602" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:16.821648   61500 pod_ready.go:102] pod "kube-scheduler-no-preload-572602" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:14.335004   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:14.335391   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:14.335437   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:14.335343   63257 retry.go:31] will retry after 4.248377683s: waiting for machine to come up
	I0416 01:00:20.014452   61267 start.go:364] duration metric: took 53.932663013s to acquireMachinesLock for "default-k8s-diff-port-653942"
	I0416 01:00:20.014507   61267 start.go:96] Skipping create...Using existing machine configuration
	I0416 01:00:20.014515   61267 fix.go:54] fixHost starting: 
	I0416 01:00:20.014929   61267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:00:20.014964   61267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:00:20.033099   61267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42949
	I0416 01:00:20.033554   61267 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:00:20.034077   61267 main.go:141] libmachine: Using API Version  1
	I0416 01:00:20.034104   61267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:00:20.034458   61267 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:00:20.034665   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 01:00:20.034812   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetState
	I0416 01:00:20.036559   61267 fix.go:112] recreateIfNeeded on default-k8s-diff-port-653942: state=Stopped err=<nil>
	I0416 01:00:20.036588   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	W0416 01:00:20.036751   61267 fix.go:138] unexpected machine state, will restart: <nil>
	I0416 01:00:20.038774   61267 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-653942" ...
	I0416 01:00:18.588875   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.589320   62747 main.go:141] libmachine: (embed-certs-617092) Found IP for machine: 192.168.61.225
	I0416 01:00:18.589347   62747 main.go:141] libmachine: (embed-certs-617092) Reserving static IP address...
	I0416 01:00:18.589362   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has current primary IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.589699   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "embed-certs-617092", mac: "52:54:00:86:1b:62", ip: "192.168.61.225"} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:18.589728   62747 main.go:141] libmachine: (embed-certs-617092) Reserved static IP address: 192.168.61.225
	I0416 01:00:18.589752   62747 main.go:141] libmachine: (embed-certs-617092) DBG | skip adding static IP to network mk-embed-certs-617092 - found existing host DHCP lease matching {name: "embed-certs-617092", mac: "52:54:00:86:1b:62", ip: "192.168.61.225"}
	I0416 01:00:18.589771   62747 main.go:141] libmachine: (embed-certs-617092) DBG | Getting to WaitForSSH function...
	I0416 01:00:18.589808   62747 main.go:141] libmachine: (embed-certs-617092) Waiting for SSH to be available...
	I0416 01:00:18.591590   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.591858   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:18.591885   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.591995   62747 main.go:141] libmachine: (embed-certs-617092) DBG | Using SSH client type: external
	I0416 01:00:18.592027   62747 main.go:141] libmachine: (embed-certs-617092) DBG | Using SSH private key: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/embed-certs-617092/id_rsa (-rw-------)
	I0416 01:00:18.592058   62747 main.go:141] libmachine: (embed-certs-617092) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.225 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18647-7542/.minikube/machines/embed-certs-617092/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0416 01:00:18.592072   62747 main.go:141] libmachine: (embed-certs-617092) DBG | About to run SSH command:
	I0416 01:00:18.592084   62747 main.go:141] libmachine: (embed-certs-617092) DBG | exit 0
	I0416 01:00:18.717336   62747 main.go:141] libmachine: (embed-certs-617092) DBG | SSH cmd err, output: <nil>: 
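The WaitForSSH exchange above shells out to the external ssh client with StrictHostKeyChecking disabled and runs `exit 0` to confirm the guest is reachable. The same probe can be expressed with golang.org/x/crypto/ssh; this is a sketch, not libmachine's implementation, and the address and key path are copied from the log only as placeholders:

// sshprobe.go - a minimal sketch of the "Waiting for SSH to be available" probe
// above, using golang.org/x/crypto/ssh instead of the external ssh client.
package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runOverSSH("192.168.61.225:22", "docker",
		"/home/jenkins/minikube-integration/18647-7542/.minikube/machines/embed-certs-617092/id_rsa", "exit 0")
	fmt.Printf("output=%q err=%v\n", out, err)
}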
	I0416 01:00:18.717759   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetConfigRaw
	I0416 01:00:18.718347   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetIP
	I0416 01:00:18.720640   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.721040   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:18.721086   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.721300   62747 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092/config.json ...
	I0416 01:00:18.721481   62747 machine.go:94] provisionDockerMachine start ...
	I0416 01:00:18.721501   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 01:00:18.721700   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:00:18.723610   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.723924   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:18.723946   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.724126   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:00:18.724345   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:18.724512   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:18.724616   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:00:18.724737   62747 main.go:141] libmachine: Using SSH client type: native
	I0416 01:00:18.725049   62747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.225 22 <nil> <nil>}
	I0416 01:00:18.725199   62747 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 01:00:18.834014   62747 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 01:00:18.834041   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetMachineName
	I0416 01:00:18.834257   62747 buildroot.go:166] provisioning hostname "embed-certs-617092"
	I0416 01:00:18.834280   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetMachineName
	I0416 01:00:18.834495   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:00:18.836959   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.837282   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:18.837333   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.837417   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:00:18.837588   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:18.837755   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:18.837962   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:00:18.838152   62747 main.go:141] libmachine: Using SSH client type: native
	I0416 01:00:18.838324   62747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.225 22 <nil> <nil>}
	I0416 01:00:18.838342   62747 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-617092 && echo "embed-certs-617092" | sudo tee /etc/hostname
	I0416 01:00:18.959828   62747 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-617092
	
	I0416 01:00:18.959865   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:00:18.962661   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.962997   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:18.963029   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.963174   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:00:18.963351   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:18.963488   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:18.963609   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:00:18.963747   62747 main.go:141] libmachine: Using SSH client type: native
	I0416 01:00:18.963949   62747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.225 22 <nil> <nil>}
	I0416 01:00:18.963967   62747 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-617092' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-617092/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-617092' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 01:00:19.079309   62747 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 01:00:19.079341   62747 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18647-7542/.minikube CaCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18647-7542/.minikube}
	I0416 01:00:19.079400   62747 buildroot.go:174] setting up certificates
	I0416 01:00:19.079409   62747 provision.go:84] configureAuth start
	I0416 01:00:19.079423   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetMachineName
	I0416 01:00:19.079723   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetIP
	I0416 01:00:19.082430   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.082809   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:19.082838   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.082994   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:00:19.085476   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.085802   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:19.085825   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.085952   62747 provision.go:143] copyHostCerts
	I0416 01:00:19.086006   62747 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem, removing ...
	I0416 01:00:19.086022   62747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem
	I0416 01:00:19.086077   62747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem (1123 bytes)
	I0416 01:00:19.086165   62747 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem, removing ...
	I0416 01:00:19.086174   62747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem
	I0416 01:00:19.086193   62747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem (1675 bytes)
	I0416 01:00:19.086244   62747 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem, removing ...
	I0416 01:00:19.086251   62747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem
	I0416 01:00:19.086270   62747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem (1082 bytes)
	I0416 01:00:19.086336   62747 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem org=jenkins.embed-certs-617092 san=[127.0.0.1 192.168.61.225 embed-certs-617092 localhost minikube]
	I0416 01:00:19.330622   62747 provision.go:177] copyRemoteCerts
	I0416 01:00:19.330687   62747 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 01:00:19.330712   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:00:19.333264   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.333618   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:19.333645   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.333798   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:00:19.333979   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:19.334122   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:00:19.334235   62747 sshutil.go:53] new ssh client: &{IP:192.168.61.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/embed-certs-617092/id_rsa Username:docker}
	I0416 01:00:19.415820   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0416 01:00:19.442985   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0416 01:00:19.468427   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0416 01:00:19.496640   62747 provision.go:87] duration metric: took 417.215523ms to configureAuth
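The copyHostCerts step above refreshes cert.pem, key.pem and ca.pem under .minikube by removing any existing copy and re-copying it from the certs directory, then generates a server cert with the listed SANs. A minimal Go sketch of that remove-then-copy pattern, with placeholder paths rather than minikube's real helper:

package main

import (
	"fmt"
	"io"
	"os"
)

// copyCert mirrors the remove-then-copy pattern in the log: delete any stale
// destination file, then copy the source certificate over it.
func copyCert(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		if err := os.Remove(dst); err != nil {
			return fmt.Errorf("rm %s: %w", dst, err)
		}
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0600)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	// Placeholder paths standing in for the cert.pem/key.pem/ca.pem copies above.
	if err := copyCert("certs/cert.pem", "cert.pem"); err != nil {
		fmt.Println("copy failed:", err)
	}
}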
	I0416 01:00:19.496676   62747 buildroot.go:189] setting minikube options for container-runtime
	I0416 01:00:19.496857   62747 config.go:182] Loaded profile config "embed-certs-617092": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 01:00:19.496929   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:00:19.499561   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.499933   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:19.499981   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.500132   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:00:19.500352   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:19.500529   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:19.500671   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:00:19.500823   62747 main.go:141] libmachine: Using SSH client type: native
	I0416 01:00:19.501026   62747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.225 22 <nil> <nil>}
	I0416 01:00:19.501046   62747 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0416 01:00:19.775400   62747 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0416 01:00:19.775434   62747 machine.go:97] duration metric: took 1.053938445s to provisionDockerMachine
	I0416 01:00:19.775448   62747 start.go:293] postStartSetup for "embed-certs-617092" (driver="kvm2")
	I0416 01:00:19.775462   62747 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 01:00:19.775484   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 01:00:19.775853   62747 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 01:00:19.775886   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:00:19.778961   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.779327   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:19.779356   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.779510   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:00:19.779723   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:19.779883   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:00:19.780008   62747 sshutil.go:53] new ssh client: &{IP:192.168.61.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/embed-certs-617092/id_rsa Username:docker}
	I0416 01:00:19.865236   62747 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 01:00:19.869769   62747 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 01:00:19.869800   62747 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/addons for local assets ...
	I0416 01:00:19.869865   62747 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/files for local assets ...
	I0416 01:00:19.870010   62747 filesync.go:149] local asset: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem -> 148972.pem in /etc/ssl/certs
	I0416 01:00:19.870111   62747 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 01:00:19.880477   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /etc/ssl/certs/148972.pem (1708 bytes)
	I0416 01:00:19.905555   62747 start.go:296] duration metric: took 130.091868ms for postStartSetup
	I0416 01:00:19.905603   62747 fix.go:56] duration metric: took 20.511199999s for fixHost
	I0416 01:00:19.905629   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:00:19.908252   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.908593   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:19.908631   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.908770   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:00:19.908972   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:19.909129   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:19.909284   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:00:19.909448   62747 main.go:141] libmachine: Using SSH client type: native
	I0416 01:00:19.909607   62747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.225 22 <nil> <nil>}
	I0416 01:00:19.909622   62747 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 01:00:20.014222   62747 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713229219.981820926
	
	I0416 01:00:20.014251   62747 fix.go:216] guest clock: 1713229219.981820926
	I0416 01:00:20.014262   62747 fix.go:229] Guest: 2024-04-16 01:00:19.981820926 +0000 UTC Remote: 2024-04-16 01:00:19.90560817 +0000 UTC m=+97.152894999 (delta=76.212756ms)
	I0416 01:00:20.014331   62747 fix.go:200] guest clock delta is within tolerance: 76.212756ms
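fix.go reads the guest clock over SSH (date +%s.%N), compares it with the host's view of the time, and only resyncs when the absolute delta exceeds a tolerance; here the delta is about 76ms and passes. A small sketch of that comparison; the 2s tolerance below is an assumption, not minikube's constant:

package main

import (
	"fmt"
	"time"
)

// clockDelta returns the absolute guest/host clock difference and whether it
// is within the allowed drift, mirroring the tolerance check in the log.
func clockDelta(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	d := guest.Sub(host)
	if d < 0 {
		d = -d
	}
	return d, d <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(76 * time.Millisecond) // roughly the delta seen above
	d, ok := clockDelta(guest, host, 2*time.Second)
	fmt.Printf("delta=%v withinTolerance=%v\n", d, ok)
}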
	I0416 01:00:20.014339   62747 start.go:83] releasing machines lock for "embed-certs-617092", held for 20.619971021s
	I0416 01:00:20.014377   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 01:00:20.014676   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetIP
	I0416 01:00:20.017771   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:20.018204   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:20.018236   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:20.018446   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 01:00:20.018991   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 01:00:20.019172   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 01:00:20.019260   62747 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 01:00:20.019299   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:00:20.019439   62747 ssh_runner.go:195] Run: cat /version.json
	I0416 01:00:20.019466   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:00:20.022283   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:20.022554   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:20.022664   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:20.022688   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:20.022897   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:00:20.023088   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:20.023150   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:20.023177   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:20.023281   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:00:20.023431   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:00:20.023431   62747 sshutil.go:53] new ssh client: &{IP:192.168.61.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/embed-certs-617092/id_rsa Username:docker}
	I0416 01:00:20.023791   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:20.023942   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:00:20.024084   62747 sshutil.go:53] new ssh client: &{IP:192.168.61.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/embed-certs-617092/id_rsa Username:docker}
	I0416 01:00:20.138251   62747 ssh_runner.go:195] Run: systemctl --version
	I0416 01:00:20.145100   62747 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0416 01:00:20.299049   62747 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 01:00:20.307080   62747 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 01:00:20.307177   62747 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 01:00:20.326056   62747 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 01:00:20.326085   62747 start.go:494] detecting cgroup driver to use...
	I0416 01:00:20.326166   62747 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 01:00:20.343297   62747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 01:00:20.358136   62747 docker.go:217] disabling cri-docker service (if available) ...
	I0416 01:00:20.358201   62747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 01:00:20.372936   62747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 01:00:20.387473   62747 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 01:00:20.515721   62747 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 01:00:20.680319   62747 docker.go:233] disabling docker service ...
	I0416 01:00:20.680413   62747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 01:00:20.700816   62747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 01:00:20.724097   62747 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 01:00:20.885812   62747 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 01:00:21.037890   62747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
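Before configuring CRI-O the runner stops, disables, and masks the cri-docker and docker units so that only CRI-O owns the container runtime socket, then verifies docker is no longer active. A compact sketch of that systemctl sequence; the helper and the exact unit ordering are illustrative, not copied from minikube's code:

package main

import (
	"fmt"
	"os/exec"
)

// systemctl shells out to `sudo systemctl <args...>` and returns the error, if any.
func systemctl(args ...string) error {
	return exec.Command("sudo", append([]string{"systemctl"}, args...)...).Run()
}

func main() {
	// Stop sockets and services first, as the log does.
	for _, unit := range []string{"cri-docker.socket", "cri-docker.service", "docker.socket", "docker.service"} {
		_ = systemctl("stop", "-f", unit)
	}
	_ = systemctl("disable", "cri-docker.socket")
	_ = systemctl("mask", "docker.service")
	// Non-zero exit from `is-active --quiet` means the service is not running.
	fmt.Println("docker inactive:", systemctl("is-active", "--quiet", "service", "docker") != nil)
}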
	I0416 01:00:21.055670   62747 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 01:00:21.078466   62747 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0416 01:00:21.078533   62747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:21.090135   62747 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 01:00:21.090200   62747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:21.106122   62747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:21.123844   62747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:21.134923   62747 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 01:00:21.153565   62747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:21.164751   62747 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:21.184880   62747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:21.197711   62747 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 01:00:21.208615   62747 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0416 01:00:21.208669   62747 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0416 01:00:21.223906   62747 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 01:00:21.234873   62747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 01:00:21.405921   62747 ssh_runner.go:195] Run: sudo systemctl restart crio
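The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs manager, conmon cgroup, unprivileged port sysctl), and the runner then probes net.bridge.bridge-nf-call-iptables; when the sysctl is missing, as here, it falls back to loading br_netfilter and enables IPv4 forwarding before restarting CRI-O. A hedged sketch of that probe-then-fallback step using os/exec, with command strings taken from the log and error handling simplified:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run executes a command and streams its output, returning any error.
func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	// Probe the bridge netfilter sysctl; it only exists once br_netfilter is loaded.
	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		fmt.Println("sysctl probe failed, loading br_netfilter:", err)
		_ = run("sudo", "modprobe", "br_netfilter")
	}
	// Enable IPv4 forwarding, matching the echo in the log.
	_ = run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
	// The runner then reloads systemd and restarts crio (omitted here).
}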
	I0416 01:00:21.564833   62747 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0416 01:00:21.564918   62747 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0416 01:00:21.570592   62747 start.go:562] Will wait 60s for crictl version
	I0416 01:00:21.570660   62747 ssh_runner.go:195] Run: which crictl
	I0416 01:00:21.575339   62747 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 01:00:21.617252   62747 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0416 01:00:21.617348   62747 ssh_runner.go:195] Run: crio --version
	I0416 01:00:21.648662   62747 ssh_runner.go:195] Run: crio --version
	I0416 01:00:21.683775   62747 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0416 01:00:17.544937   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:18.045282   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:18.545707   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:19.045821   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:19.545868   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:20.045069   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:20.545134   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:21.045607   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:21.545366   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:22.044998   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
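Interleaved with the embed-certs provisioning, process 62139 polls for a kube-apiserver process about every 500ms with pgrep. A minimal sketch of that poll-until-found loop; the timeout value is an assumption and the SSH indirection is dropped in favour of running pgrep locally:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep until the pattern matches or the deadline passes.
// pgrep exits 0 only when at least one matching process exists.
func waitForProcess(pattern string, interval, timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("pgrep", "-xnf", pattern).Run() == nil {
			return true
		}
		time.Sleep(interval)
	}
	return false
}

func main() {
	found := waitForProcess("kube-apiserver.*minikube.*", 500*time.Millisecond, 30*time.Second)
	fmt.Println("apiserver process found:", found)
}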
	I0416 01:00:20.040137   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .Start
	I0416 01:00:20.040355   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Ensuring networks are active...
	I0416 01:00:20.041103   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Ensuring network default is active
	I0416 01:00:20.041469   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Ensuring network mk-default-k8s-diff-port-653942 is active
	I0416 01:00:20.041869   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Getting domain xml...
	I0416 01:00:20.042474   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Creating domain...
	I0416 01:00:21.359375   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting to get IP...
	I0416 01:00:21.360333   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:21.360736   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:21.360807   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:21.360726   63461 retry.go:31] will retry after 290.970715ms: waiting for machine to come up
	I0416 01:00:21.653420   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:21.653883   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:21.653916   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:21.653841   63461 retry.go:31] will retry after 361.304618ms: waiting for machine to come up
	I0416 01:00:22.016540   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:22.017038   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:22.017071   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:22.016976   63461 retry.go:31] will retry after 411.249327ms: waiting for machine to come up
	I0416 01:00:18.322778   61500 pod_ready.go:92] pod "kube-scheduler-no-preload-572602" in "kube-system" namespace has status "Ready":"True"
	I0416 01:00:18.322799   61500 pod_ready.go:81] duration metric: took 8.506833323s for pod "kube-scheduler-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:18.322808   61500 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:20.328344   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:22.331157   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:21.685033   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetIP
	I0416 01:00:21.688407   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:21.688774   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:21.688809   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:21.689010   62747 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0416 01:00:21.693612   62747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 01:00:21.707524   62747 kubeadm.go:877] updating cluster {Name:embed-certs-617092 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-617092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.225 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 01:00:21.707657   62747 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 01:00:21.707699   62747 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 01:00:21.748697   62747 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0416 01:00:21.748785   62747 ssh_runner.go:195] Run: which lz4
	I0416 01:00:21.753521   62747 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0416 01:00:21.758125   62747 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0416 01:00:21.758158   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0416 01:00:22.545403   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:23.045303   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:23.544984   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:24.045882   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:24.545194   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:25.045010   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:25.545278   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:26.045702   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:26.545233   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:27.045814   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:22.429595   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:22.430124   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:22.430159   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:22.430087   63461 retry.go:31] will retry after 495.681984ms: waiting for machine to come up
	I0416 01:00:22.927476   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:22.927932   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:22.927959   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:22.927875   63461 retry.go:31] will retry after 506.264557ms: waiting for machine to come up
	I0416 01:00:23.435290   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:23.435742   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:23.435773   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:23.435689   63461 retry.go:31] will retry after 826.359716ms: waiting for machine to come up
	I0416 01:00:24.263672   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:24.264151   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:24.264183   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:24.264107   63461 retry.go:31] will retry after 873.35176ms: waiting for machine to come up
	I0416 01:00:25.138864   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:25.139318   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:25.139340   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:25.139308   63461 retry.go:31] will retry after 1.129546887s: waiting for machine to come up
	I0416 01:00:26.270364   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:26.270968   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:26.271000   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:26.270902   63461 retry.go:31] will retry after 1.441466368s: waiting for machine to come up
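While that runs, default-k8s-diff-port-653942 is still waiting for a DHCP lease, and retry.go reschedules each attempt with a growing, jittered delay (291ms, 361ms, 411ms, up to 1.44s above). A rough sketch of retrying with an increasing randomized backoff; the growth factor and cap are assumptions, not minikube's values:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry calls fn until it succeeds or attempts run out, sleeping a growing,
// jittered delay between tries, like the "will retry after ..." lines above.
func retry(fn func() error, attempts int) error {
	delay := 300 * time.Millisecond
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		time.Sleep(delay + jitter)
		if delay < 2*time.Second { // cap the growth (assumed)
			delay = delay * 3 / 2
		}
	}
	return err
}

func main() {
	calls := 0
	err := retry(func() error {
		calls++
		if calls < 4 {
			return errors.New("machine has no IP yet")
		}
		return nil
	}, 10)
	fmt.Println("err:", err, "after", calls, "attempts")
}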
	I0416 01:00:24.830562   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:26.832057   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
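The no-preload run, meanwhile, is stuck in pod_ready.go waiting for metrics-server-569cc877fc-llsfr to report Ready. A sketch of what such a readiness check looks like with client-go; this is not minikube's implementation, and constructing the clientset is left out:

package ready

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// isPodReady fetches one pod and reports whether its Ready condition is True,
// which is the condition behind the `has status "Ready":"False"` lines above.
func isPodReady(ctx context.Context, cs kubernetes.Interface, namespace, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

A caller would poll this every few seconds until it returns true or the 4m0s budget in the log runs out.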
	I0416 01:00:23.353811   62747 crio.go:462] duration metric: took 1.600325005s to copy over tarball
	I0416 01:00:23.353885   62747 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0416 01:00:25.815443   62747 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.46152973s)
	I0416 01:00:25.815479   62747 crio.go:469] duration metric: took 2.461639439s to extract the tarball
	I0416 01:00:25.815489   62747 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0416 01:00:25.862653   62747 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 01:00:25.914416   62747 crio.go:514] all images are preloaded for cri-o runtime.
	I0416 01:00:25.914444   62747 cache_images.go:84] Images are preloaded, skipping loading
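Because crictl initially reported no preloaded images, the runner copied the ~400MB preload tarball to the guest, unpacked it into /var with lz4, deleted it, and re-ran crictl, which now shows everything preloaded. A small sketch that just builds the same extraction command string for an SSH runner; the flags are copied from the log and the runner itself is assumed:

package main

import "fmt"

// preloadExtractCmd builds the tar invocation used above to unpack the preload
// tarball into /var with lz4 decompression and security xattrs preserved.
func preloadExtractCmd(tarball, dest string) string {
	return fmt.Sprintf(
		"sudo tar --xattrs --xattrs-include security.capability -I lz4 -C %s -xf %s",
		dest, tarball)
}

func main() {
	// In the log this string is handed to ssh_runner; here it is only printed.
	fmt.Println(preloadExtractCmd("/preloaded.tar.lz4", "/var"))
}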
	I0416 01:00:25.914454   62747 kubeadm.go:928] updating node { 192.168.61.225 8443 v1.29.3 crio true true} ...
	I0416 01:00:25.914586   62747 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-617092 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.225
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:embed-certs-617092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 01:00:25.914680   62747 ssh_runner.go:195] Run: crio config
	I0416 01:00:25.970736   62747 cni.go:84] Creating CNI manager for ""
	I0416 01:00:25.970760   62747 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 01:00:25.970773   62747 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 01:00:25.970796   62747 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.225 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-617092 NodeName:embed-certs-617092 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.225"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.225 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 01:00:25.970949   62747 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.225
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-617092"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.225
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.225"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0416 01:00:25.971022   62747 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0416 01:00:25.985111   62747 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 01:00:25.985198   62747 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0416 01:00:25.996306   62747 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0416 01:00:26.013401   62747 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 01:00:26.030094   62747 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0416 01:00:26.048252   62747 ssh_runner.go:195] Run: grep 192.168.61.225	control-plane.minikube.internal$ /etc/hosts
	I0416 01:00:26.052717   62747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.225	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 01:00:26.069538   62747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 01:00:26.205867   62747 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 01:00:26.224210   62747 certs.go:68] Setting up /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092 for IP: 192.168.61.225
	I0416 01:00:26.224237   62747 certs.go:194] generating shared ca certs ...
	I0416 01:00:26.224259   62747 certs.go:226] acquiring lock for ca certs: {Name:mkcfa1570e683d94647c63485e1bbb8cf0788316 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:00:26.224459   62747 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key
	I0416 01:00:26.224520   62747 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key
	I0416 01:00:26.224532   62747 certs.go:256] generating profile certs ...
	I0416 01:00:26.224646   62747 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092/client.key
	I0416 01:00:26.224723   62747 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092/apiserver.key.383097d4
	I0416 01:00:26.224773   62747 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092/proxy-client.key
	I0416 01:00:26.224932   62747 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem (1338 bytes)
	W0416 01:00:26.224973   62747 certs.go:480] ignoring /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897_empty.pem, impossibly tiny 0 bytes
	I0416 01:00:26.224982   62747 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem (1679 bytes)
	I0416 01:00:26.225014   62747 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem (1082 bytes)
	I0416 01:00:26.225050   62747 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem (1123 bytes)
	I0416 01:00:26.225085   62747 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem (1675 bytes)
	I0416 01:00:26.225126   62747 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem (1708 bytes)
	I0416 01:00:26.225872   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 01:00:26.282272   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 01:00:26.329827   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 01:00:26.366744   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0416 01:00:26.405845   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0416 01:00:26.440535   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0416 01:00:26.465371   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 01:00:26.491633   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0416 01:00:26.518682   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 01:00:26.543992   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem --> /usr/share/ca-certificates/14897.pem (1338 bytes)
	I0416 01:00:26.573728   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /usr/share/ca-certificates/148972.pem (1708 bytes)
	I0416 01:00:26.602308   62747 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 01:00:26.622491   62747 ssh_runner.go:195] Run: openssl version
	I0416 01:00:26.628805   62747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 01:00:26.643163   62747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:00:26.648292   62747 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:00:26.648351   62747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:00:26.654890   62747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 01:00:26.668501   62747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14897.pem && ln -fs /usr/share/ca-certificates/14897.pem /etc/ssl/certs/14897.pem"
	I0416 01:00:26.682038   62747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14897.pem
	I0416 01:00:26.687327   62747 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 23:49 /usr/share/ca-certificates/14897.pem
	I0416 01:00:26.687388   62747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14897.pem
	I0416 01:00:26.693557   62747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14897.pem /etc/ssl/certs/51391683.0"
	I0416 01:00:26.706161   62747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148972.pem && ln -fs /usr/share/ca-certificates/148972.pem /etc/ssl/certs/148972.pem"
	I0416 01:00:26.718432   62747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148972.pem
	I0416 01:00:26.722989   62747 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 23:49 /usr/share/ca-certificates/148972.pem
	I0416 01:00:26.723050   62747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148972.pem
	I0416 01:00:26.729311   62747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148972.pem /etc/ssl/certs/3ec20f2e.0"
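Each CA dropped into /usr/share/ca-certificates is also exposed under /etc/ssl/certs/<subject-hash>.0, which is why the log hashes minikubeCA.pem and links it to b5213941.0. A sketch of computing that hash and creating the symlink; the paths are placeholders and the real steps above run remotely with sudo:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // placeholder path
	// `openssl x509 -hash -noout` prints the subject hash used by the
	// /etc/ssl/certs/<hash>.0 lookup convention.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		fmt.Println("hash failed:", err)
		return
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	// Equivalent to the `ln -fs` in the log; needs root on a real system.
	_ = os.Remove(link)
	if err := os.Symlink(cert, link); err != nil {
		fmt.Println("symlink failed:", err)
		return
	}
	fmt.Println("linked", link, "->", cert)
}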
	I0416 01:00:26.744138   62747 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 01:00:26.749490   62747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0416 01:00:26.756478   62747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0416 01:00:26.763326   62747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0416 01:00:26.770194   62747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0416 01:00:26.776641   62747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0416 01:00:26.783022   62747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
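The final pre-flight checks run `openssl x509 -checkend 86400` against each control-plane certificate; openssl exits non-zero when the cert will expire within the next 86400 seconds, and only when every check passes does StartCluster proceed below. A small sketch of interpreting that exit status:

package main

import (
	"fmt"
	"os/exec"
)

// expiresWithinDay reports whether the certificate expires in the next 24h;
// `openssl x509 -checkend 86400` signals that case with a non-zero exit.
func expiresWithinDay(certPath string) bool {
	return exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400").Run() != nil
}

func main() {
	for _, c := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		fmt.Println(c, "expires within 24h:", expiresWithinDay(c))
	}
}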
	I0416 01:00:26.789543   62747 kubeadm.go:391] StartCluster: {Name:embed-certs-617092 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-617092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.225 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 01:00:26.789654   62747 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0416 01:00:26.789717   62747 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 01:00:26.831148   62747 cri.go:89] found id: ""
	I0416 01:00:26.831219   62747 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0416 01:00:26.844372   62747 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0416 01:00:26.844398   62747 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0416 01:00:26.844403   62747 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0416 01:00:26.844454   62747 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0416 01:00:26.858173   62747 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0416 01:00:26.859210   62747 kubeconfig.go:125] found "embed-certs-617092" server: "https://192.168.61.225:8443"
	I0416 01:00:26.861233   62747 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0416 01:00:26.874068   62747 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.225
	I0416 01:00:26.874105   62747 kubeadm.go:1154] stopping kube-system containers ...
	I0416 01:00:26.874119   62747 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0416 01:00:26.874177   62747 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 01:00:26.926456   62747 cri.go:89] found id: ""
	I0416 01:00:26.926537   62747 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0416 01:00:26.945874   62747 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 01:00:26.960207   62747 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 01:00:26.960229   62747 kubeadm.go:156] found existing configuration files:
	
	I0416 01:00:26.960282   62747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 01:00:26.971895   62747 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 01:00:26.971958   62747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 01:00:26.982956   62747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 01:00:26.993935   62747 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 01:00:26.994000   62747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 01:00:27.005216   62747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 01:00:27.015624   62747 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 01:00:27.015680   62747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 01:00:27.026513   62747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 01:00:27.037062   62747 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 01:00:27.037118   62747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
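	The restart path above greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not reference it (here the files do not exist yet, so every grep exits with status 2 and the rm is effectively a no-op before kubeadm regenerates them). A rough Go sketch of that grep-then-remove loop, assuming the same endpoint string and file list; this is illustrative, not minikube's exact implementation:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// removeStaleKubeconfigs deletes any kubeconfig that does not mention the
	// expected API server endpoint, mirroring the grep/rm pattern in the log.
	func removeStaleKubeconfigs(endpoint string, files []string) {
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil || !strings.Contains(string(data), endpoint) {
				// Missing file or stale endpoint: remove so kubeadm regenerates it.
				_ = os.Remove(f)
				fmt.Printf("removed (stale or missing): %s\n", f)
			}
		}
	}

	func main() {
		removeStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		})
	}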
	I0416 01:00:27.048173   62747 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 01:00:27.061987   62747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:27.190243   62747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:27.545025   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:28.045752   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:28.545833   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:29.045264   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:29.545316   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:30.045594   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:30.545046   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:31.045139   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:31.545251   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:32.045710   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:27.714372   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:27.714822   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:27.714854   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:27.714767   63461 retry.go:31] will retry after 1.810511131s: waiting for machine to come up
	I0416 01:00:29.527497   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:29.528041   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:29.528072   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:29.527983   63461 retry.go:31] will retry after 2.163921338s: waiting for machine to come up
	I0416 01:00:31.694203   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:31.694741   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:31.694769   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:31.694714   63461 retry.go:31] will retry after 2.245150923s: waiting for machine to come up
	I0416 01:00:29.332159   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:31.332218   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:28.252295   62747 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.062013928s)
	I0416 01:00:28.252331   62747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:28.468110   62747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:28.553370   62747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:28.676185   62747 api_server.go:52] waiting for apiserver process to appear ...
	I0416 01:00:28.676273   62747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:29.176826   62747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:29.676498   62747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:29.702138   62747 api_server.go:72] duration metric: took 1.025950998s to wait for apiserver process to appear ...
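	The "waiting for apiserver process to appear" step is a pgrep poll over SSH roughly every 500ms until a kube-apiserver process matching the minikube config shows up (about 1.03s here). A minimal local sketch of that loop, assuming pgrep is available on the host; the interval matches the timestamps above but is otherwise an assumption:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForProcess polls pgrep until a process matching the pattern exists.
	func waitForProcess(pattern string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// -f matches the full command line, -n takes the newest match,
			// -x requires the whole command line to match the pattern.
			if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
				return nil // pgrep exits 0 once a matching process is found
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("no process matching %q appeared within %v", pattern, timeout)
	}

	func main() {
		start := time.Now()
		if err := waitForProcess("kube-apiserver.*minikube.*", time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Printf("apiserver process appeared after %v\n", time.Since(start))
	}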
	I0416 01:00:29.702170   62747 api_server.go:88] waiting for apiserver healthz status ...
	I0416 01:00:29.702192   62747 api_server.go:253] Checking apiserver healthz at https://192.168.61.225:8443/healthz ...
	I0416 01:00:29.702822   62747 api_server.go:269] stopped: https://192.168.61.225:8443/healthz: Get "https://192.168.61.225:8443/healthz": dial tcp 192.168.61.225:8443: connect: connection refused
	I0416 01:00:30.203298   62747 api_server.go:253] Checking apiserver healthz at https://192.168.61.225:8443/healthz ...
	I0416 01:00:32.951714   62747 api_server.go:279] https://192.168.61.225:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0416 01:00:32.951754   62747 api_server.go:103] status: https://192.168.61.225:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0416 01:00:32.951779   62747 api_server.go:253] Checking apiserver healthz at https://192.168.61.225:8443/healthz ...
	I0416 01:00:33.003631   62747 api_server.go:279] https://192.168.61.225:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0416 01:00:33.003672   62747 api_server.go:103] status: https://192.168.61.225:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0416 01:00:33.202825   62747 api_server.go:253] Checking apiserver healthz at https://192.168.61.225:8443/healthz ...
	I0416 01:00:33.208168   62747 api_server.go:279] https://192.168.61.225:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0416 01:00:33.208201   62747 api_server.go:103] status: https://192.168.61.225:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0416 01:00:33.702532   62747 api_server.go:253] Checking apiserver healthz at https://192.168.61.225:8443/healthz ...
	I0416 01:00:33.712501   62747 api_server.go:279] https://192.168.61.225:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0416 01:00:33.712542   62747 api_server.go:103] status: https://192.168.61.225:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0416 01:00:34.203157   62747 api_server.go:253] Checking apiserver healthz at https://192.168.61.225:8443/healthz ...
	I0416 01:00:34.210567   62747 api_server.go:279] https://192.168.61.225:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0416 01:00:34.210597   62747 api_server.go:103] status: https://192.168.61.225:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0416 01:00:34.702568   62747 api_server.go:253] Checking apiserver healthz at https://192.168.61.225:8443/healthz ...
	I0416 01:00:34.711690   62747 api_server.go:279] https://192.168.61.225:8443/healthz returned 200:
	ok
	I0416 01:00:34.723252   62747 api_server.go:141] control plane version: v1.29.3
	I0416 01:00:34.723279   62747 api_server.go:131] duration metric: took 5.021102658s to wait for apiserver health ...
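	Between 01:00:29 and 01:00:34 the runner polls https://192.168.61.225:8443/healthz about every 500ms, treating 403 (anonymous user rejected) and 500 (RBAC and priority-class bootstrap hooks still pending) as "not ready yet" until the endpoint finally returns 200 "ok". A condensed Go sketch of that polling loop, assuming a self-signed apiserver certificate (hence InsecureSkipVerify) and a fixed retry interval; the helper name is hypothetical:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
	// or the deadline passes. Non-200 responses simply trigger another retry.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The apiserver serving cert is not in the host trust store here.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}

	func main() {
		if err := waitForHealthz("https://192.168.61.225:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}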
	I0416 01:00:34.723287   62747 cni.go:84] Creating CNI manager for ""
	I0416 01:00:34.723293   62747 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 01:00:34.724989   62747 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0416 01:00:32.545963   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:33.045020   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:33.545657   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:34.045706   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:34.544972   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:35.045252   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:35.545087   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:36.045080   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:36.545787   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:37.045046   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:33.942412   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:33.942923   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:33.942952   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:33.942870   63461 retry.go:31] will retry after 3.750613392s: waiting for machine to come up
	I0416 01:00:33.829307   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:35.830613   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:34.726400   62747 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0416 01:00:34.746294   62747 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
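	Once the apiserver is healthy, the kvm2 + crio combination selects the bridge CNI and a 496-byte conflist is copied to /etc/cni/net.d/1-k8s.conflist. The log does not show the file's contents, so the sketch below writes a typical bridge+portmap conflist purely as an assumption of what such a file looks like, not the literal bytes minikube ships:

	package main

	import "os"

	// An illustrative bridge CNI conflist; the actual 1-k8s.conflist written by
	// minikube is not reproduced in the log, so these values are assumptions.
	const bridgeConflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	`

	func main() {
		// 0644 so the container runtime (running as root) can read it.
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
			panic(err)
		}
	}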
	I0416 01:00:34.767028   62747 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 01:00:34.778610   62747 system_pods.go:59] 8 kube-system pods found
	I0416 01:00:34.778653   62747 system_pods.go:61] "coredns-76f75df574-dxzhk" [a71b29ec-8602-47d6-825c-a1a54a1758d0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 01:00:34.778664   62747 system_pods.go:61] "etcd-embed-certs-617092" [8966501b-6a06-4e0b-acb6-77df5f53cd3d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0416 01:00:34.778674   62747 system_pods.go:61] "kube-apiserver-embed-certs-617092" [7ad29687-3964-4a5b-8939-bcf3dc71d578] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0416 01:00:34.778685   62747 system_pods.go:61] "kube-controller-manager-embed-certs-617092" [78b21361-f302-43f3-8356-ea15fad4edb3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0416 01:00:34.778695   62747 system_pods.go:61] "kube-proxy-xtdf4" [4e8fe1da-9a02-428e-94f1-595f2e9170e0] Running
	I0416 01:00:34.778703   62747 system_pods.go:61] "kube-scheduler-embed-certs-617092" [c03d87b4-26d3-4bff-8f53-8844260f1ed8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0416 01:00:34.778720   62747 system_pods.go:61] "metrics-server-57f55c9bc5-knnvn" [4607d12d-25db-4637-be17-e2665970c0a4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 01:00:34.778729   62747 system_pods.go:61] "storage-provisioner" [41362b6c-fde7-45fa-b6cf-1d7acef3d4ce] Running
	I0416 01:00:34.778741   62747 system_pods.go:74] duration metric: took 11.690083ms to wait for pod list to return data ...
	I0416 01:00:34.778755   62747 node_conditions.go:102] verifying NodePressure condition ...
	I0416 01:00:34.782283   62747 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 01:00:34.782319   62747 node_conditions.go:123] node cpu capacity is 2
	I0416 01:00:34.782329   62747 node_conditions.go:105] duration metric: took 3.566074ms to run NodePressure ...
	I0416 01:00:34.782344   62747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:35.056194   62747 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0416 01:00:35.068546   62747 kubeadm.go:733] kubelet initialised
	I0416 01:00:35.068571   62747 kubeadm.go:734] duration metric: took 12.345347ms waiting for restarted kubelet to initialise ...
	I0416 01:00:35.068581   62747 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:00:35.075013   62747 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-dxzhk" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:37.081976   62747 pod_ready.go:102] pod "coredns-76f75df574-dxzhk" in "kube-system" namespace has status "Ready":"False"
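	With the control plane reconfigured, the runner then waits up to 4 minutes for each system-critical pod to report the Ready condition; coredns-76f75df574-dxzhk is still "False" at this point. A client-go sketch of that readiness check, assuming a standard kubeconfig path (the path below is an assumption) and with hypothetical helper names:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the named pod has the Ready condition set to True.
	func isPodReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // path is an assumption
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			ready, err := isPodReady(cs, "kube-system", "coredns-76f75df574-dxzhk")
			if err == nil && ready {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to become Ready")
	}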
	I0416 01:00:37.697323   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:37.697830   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has current primary IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:37.697857   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Found IP for machine: 192.168.50.216
	I0416 01:00:37.697873   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Reserving static IP address...
	I0416 01:00:37.698323   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Reserved static IP address: 192.168.50.216
	I0416 01:00:37.698345   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for SSH to be available...
	I0416 01:00:37.698372   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-653942", mac: "52:54:00:4b:a2:47", ip: "192.168.50.216"} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:37.698418   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | skip adding static IP to network mk-default-k8s-diff-port-653942 - found existing host DHCP lease matching {name: "default-k8s-diff-port-653942", mac: "52:54:00:4b:a2:47", ip: "192.168.50.216"}
	I0416 01:00:37.698450   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | Getting to WaitForSSH function...
	I0416 01:00:37.700942   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:37.701312   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:37.701346   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:37.701520   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | Using SSH client type: external
	I0416 01:00:37.701567   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | Using SSH private key: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/default-k8s-diff-port-653942/id_rsa (-rw-------)
	I0416 01:00:37.701621   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.216 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18647-7542/.minikube/machines/default-k8s-diff-port-653942/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0416 01:00:37.701676   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | About to run SSH command:
	I0416 01:00:37.701712   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | exit 0
	I0416 01:00:37.829860   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | SSH cmd err, output: <nil>: 
	I0416 01:00:37.830254   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetConfigRaw
	I0416 01:00:37.830931   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetIP
	I0416 01:00:37.833361   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:37.833755   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:37.833788   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:37.834026   61267 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/config.json ...
	I0416 01:00:37.834198   61267 machine.go:94] provisionDockerMachine start ...
	I0416 01:00:37.834214   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 01:00:37.834426   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:00:37.836809   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:37.837221   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:37.837251   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:37.837377   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:00:37.837588   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:37.837737   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:37.837869   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:00:37.838023   61267 main.go:141] libmachine: Using SSH client type: native
	I0416 01:00:37.838208   61267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.216 22 <nil> <nil>}
	I0416 01:00:37.838219   61267 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 01:00:37.950999   61267 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 01:00:37.951031   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetMachineName
	I0416 01:00:37.951271   61267 buildroot.go:166] provisioning hostname "default-k8s-diff-port-653942"
	I0416 01:00:37.951303   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetMachineName
	I0416 01:00:37.951483   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:00:37.954395   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:37.954730   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:37.954755   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:37.954949   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:00:37.955165   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:37.955344   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:37.955549   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:00:37.955756   61267 main.go:141] libmachine: Using SSH client type: native
	I0416 01:00:37.955980   61267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.216 22 <nil> <nil>}
	I0416 01:00:37.956001   61267 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-653942 && echo "default-k8s-diff-port-653942" | sudo tee /etc/hostname
	I0416 01:00:38.085650   61267 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-653942
	
	I0416 01:00:38.085682   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:00:38.088689   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.089031   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:38.089060   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.089297   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:00:38.089474   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:38.089623   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:38.089780   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:00:38.089948   61267 main.go:141] libmachine: Using SSH client type: native
	I0416 01:00:38.090127   61267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.216 22 <nil> <nil>}
	I0416 01:00:38.090146   61267 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-653942' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-653942/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-653942' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 01:00:38.214653   61267 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 01:00:38.214734   61267 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18647-7542/.minikube CaCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18647-7542/.minikube}
	I0416 01:00:38.214760   61267 buildroot.go:174] setting up certificates
	I0416 01:00:38.214773   61267 provision.go:84] configureAuth start
	I0416 01:00:38.214785   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetMachineName
	I0416 01:00:38.215043   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetIP
	I0416 01:00:38.217744   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.218145   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:38.218174   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.218336   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:00:38.220861   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.221187   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:38.221216   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.221343   61267 provision.go:143] copyHostCerts
	I0416 01:00:38.221405   61267 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem, removing ...
	I0416 01:00:38.221426   61267 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem
	I0416 01:00:38.221492   61267 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem (1082 bytes)
	I0416 01:00:38.221638   61267 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem, removing ...
	I0416 01:00:38.221649   61267 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem
	I0416 01:00:38.221685   61267 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem (1123 bytes)
	I0416 01:00:38.221777   61267 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem, removing ...
	I0416 01:00:38.221787   61267 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem
	I0416 01:00:38.221815   61267 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem (1675 bytes)
	I0416 01:00:38.221887   61267 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-653942 san=[127.0.0.1 192.168.50.216 default-k8s-diff-port-653942 localhost minikube]
	I0416 01:00:38.266327   61267 provision.go:177] copyRemoteCerts
	I0416 01:00:38.266390   61267 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 01:00:38.266422   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:00:38.269080   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.269546   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:38.269583   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.269901   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:00:38.270115   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:38.270259   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:00:38.270444   61267 sshutil.go:53] new ssh client: &{IP:192.168.50.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/default-k8s-diff-port-653942/id_rsa Username:docker}
	I0416 01:00:38.352861   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0416 01:00:38.380995   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0416 01:00:38.405746   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0416 01:00:38.431467   61267 provision.go:87] duration metric: took 216.680985ms to configureAuth
	I0416 01:00:38.431502   61267 buildroot.go:189] setting minikube options for container-runtime
	I0416 01:00:38.431674   61267 config.go:182] Loaded profile config "default-k8s-diff-port-653942": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 01:00:38.431740   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:00:38.434444   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.434867   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:38.434909   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.435032   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:00:38.435245   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:38.435380   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:38.435568   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:00:38.435744   61267 main.go:141] libmachine: Using SSH client type: native
	I0416 01:00:38.435948   61267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.216 22 <nil> <nil>}
	I0416 01:00:38.435974   61267 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0416 01:00:38.729392   61267 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0416 01:00:38.729421   61267 machine.go:97] duration metric: took 895.211347ms to provisionDockerMachine
	I0416 01:00:38.729432   61267 start.go:293] postStartSetup for "default-k8s-diff-port-653942" (driver="kvm2")
	I0416 01:00:38.729442   61267 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 01:00:38.729463   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 01:00:38.729802   61267 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 01:00:38.729826   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:00:38.732755   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.733135   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:38.733181   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.733326   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:00:38.733490   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:38.733649   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:00:38.733784   61267 sshutil.go:53] new ssh client: &{IP:192.168.50.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/default-k8s-diff-port-653942/id_rsa Username:docker}
	I0416 01:00:38.819006   61267 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 01:00:38.823781   61267 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 01:00:38.823804   61267 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/addons for local assets ...
	I0416 01:00:38.823870   61267 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/files for local assets ...
	I0416 01:00:38.823967   61267 filesync.go:149] local asset: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem -> 148972.pem in /etc/ssl/certs
	I0416 01:00:38.824077   61267 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 01:00:38.833958   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /etc/ssl/certs/148972.pem (1708 bytes)
	I0416 01:00:38.859934   61267 start.go:296] duration metric: took 130.488205ms for postStartSetup
	I0416 01:00:38.859973   61267 fix.go:56] duration metric: took 18.845458863s for fixHost
	I0416 01:00:38.859992   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:00:38.862557   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.862889   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:38.862927   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.863016   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:00:38.863236   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:38.863426   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:38.863609   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:00:38.863786   61267 main.go:141] libmachine: Using SSH client type: native
	I0416 01:00:38.863951   61267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.216 22 <nil> <nil>}
	I0416 01:00:38.863961   61267 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 01:00:38.970405   61267 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713229238.936521840
	
	I0416 01:00:38.970431   61267 fix.go:216] guest clock: 1713229238.936521840
	I0416 01:00:38.970440   61267 fix.go:229] Guest: 2024-04-16 01:00:38.93652184 +0000 UTC Remote: 2024-04-16 01:00:38.859976379 +0000 UTC m=+356.490123424 (delta=76.545461ms)
	I0416 01:00:38.970489   61267 fix.go:200] guest clock delta is within tolerance: 76.545461ms
	I0416 01:00:38.970496   61267 start.go:83] releasing machines lock for "default-k8s-diff-port-653942", held for 18.956013216s
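Note: the fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the host when the drift is small (here 76.545461ms). A minimal, standalone Go sketch of that comparison follows; it is an illustration only, not minikube's actual fix.go. The two timestamps are copied from the log above and the 1s tolerance is an assumption.

// clockdelta: parse a guest "seconds.nanoseconds" timestamp, compare it to the
// host clock reading, and report whether the drift is inside a tolerance.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseEpoch turns "1713229238.936521840" (seconds.nanoseconds) into a time.Time.
func parseEpoch(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		frac := (parts[1] + "000000000")[:9] // pad fractional part to nanoseconds
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec).UTC(), nil
}

func main() {
	const tolerance = time.Second                  // assumed threshold, not minikube's value
	guest, _ := parseEpoch("1713229238.936521840") // guest: output of `date +%s.%N` (from log)
	host, _ := parseEpoch("1713229238.859976379")  // host clock at the same moment (from log)
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	if delta <= tolerance {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta) // prints ~76.545461ms
	} else {
		fmt.Printf("guest clock drift %v exceeds tolerance %v\n", delta, tolerance)
	}
}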
	I0416 01:00:38.970522   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 01:00:38.970806   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetIP
	I0416 01:00:38.973132   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.973440   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:38.973455   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.973646   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 01:00:38.974142   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 01:00:38.974332   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 01:00:38.974388   61267 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 01:00:38.974432   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:00:38.974532   61267 ssh_runner.go:195] Run: cat /version.json
	I0416 01:00:38.974556   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:00:38.977284   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.977459   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.977624   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:38.977653   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.977746   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:38.977774   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.977800   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:00:38.978002   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:00:38.978017   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:38.978163   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:38.978169   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:00:38.978296   61267 sshutil.go:53] new ssh client: &{IP:192.168.50.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/default-k8s-diff-port-653942/id_rsa Username:docker}
	I0416 01:00:38.978314   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:00:38.978440   61267 sshutil.go:53] new ssh client: &{IP:192.168.50.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/default-k8s-diff-port-653942/id_rsa Username:docker}
	I0416 01:00:39.090827   61267 ssh_runner.go:195] Run: systemctl --version
	I0416 01:00:39.097716   61267 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0416 01:00:39.249324   61267 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 01:00:39.256333   61267 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 01:00:39.256402   61267 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 01:00:39.272367   61267 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 01:00:39.272395   61267 start.go:494] detecting cgroup driver to use...
	I0416 01:00:39.272446   61267 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 01:00:39.291713   61267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 01:00:39.305645   61267 docker.go:217] disabling cri-docker service (if available) ...
	I0416 01:00:39.305708   61267 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 01:00:39.320731   61267 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 01:00:39.336917   61267 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 01:00:39.450840   61267 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 01:00:39.596905   61267 docker.go:233] disabling docker service ...
	I0416 01:00:39.596972   61267 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 01:00:39.612926   61267 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 01:00:39.627583   61267 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 01:00:39.778135   61267 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 01:00:39.900216   61267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0416 01:00:39.914697   61267 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 01:00:39.935875   61267 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0416 01:00:39.935930   61267 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:39.946510   61267 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 01:00:39.946569   61267 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:39.956794   61267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:39.966968   61267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:39.977207   61267 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 01:00:39.988817   61267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:40.001088   61267 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:40.018950   61267 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:40.030395   61267 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 01:00:40.039956   61267 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0416 01:00:40.040013   61267 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0416 01:00:40.053877   61267 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 01:00:40.065292   61267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 01:00:40.221527   61267 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0416 01:00:40.382800   61267 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0416 01:00:40.382880   61267 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0416 01:00:40.387842   61267 start.go:562] Will wait 60s for crictl version
	I0416 01:00:40.387897   61267 ssh_runner.go:195] Run: which crictl
	I0416 01:00:40.393774   61267 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 01:00:40.435784   61267 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0416 01:00:40.435864   61267 ssh_runner.go:195] Run: crio --version
	I0416 01:00:40.468702   61267 ssh_runner.go:195] Run: crio --version
	I0416 01:00:40.501355   61267 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0416 01:00:37.545192   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:38.045346   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:38.545599   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:39.045109   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:39.545360   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:40.045058   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:40.545745   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:41.045943   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:41.545900   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:42.045807   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:40.502716   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetIP
	I0416 01:00:40.505958   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:40.506353   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:40.506384   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:40.506597   61267 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0416 01:00:40.511238   61267 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 01:00:40.525378   61267 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-653942 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.29.3 ClusterName:default-k8s-diff-port-653942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.216 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 01:00:40.525519   61267 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 01:00:40.525586   61267 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 01:00:40.570378   61267 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0416 01:00:40.570451   61267 ssh_runner.go:195] Run: which lz4
	I0416 01:00:40.575413   61267 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0416 01:00:40.580583   61267 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0416 01:00:40.580640   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0416 01:00:42.194745   61267 crio.go:462] duration metric: took 1.619375861s to copy over tarball
	I0416 01:00:42.194821   61267 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0416 01:00:37.830710   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:39.831822   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:42.330821   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:39.086761   62747 pod_ready.go:102] pod "coredns-76f75df574-dxzhk" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:40.082847   62747 pod_ready.go:92] pod "coredns-76f75df574-dxzhk" in "kube-system" namespace has status "Ready":"True"
	I0416 01:00:40.082868   62747 pod_ready.go:81] duration metric: took 5.007825454s for pod "coredns-76f75df574-dxzhk" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:40.082877   62747 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:42.092402   62747 pod_ready.go:92] pod "etcd-embed-certs-617092" in "kube-system" namespace has status "Ready":"True"
	I0416 01:00:42.092425   62747 pod_ready.go:81] duration metric: took 2.009541778s for pod "etcd-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:42.092438   62747 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:42.545278   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:43.045894   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:43.545886   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:44.044964   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:44.544997   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:45.045340   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:45.545257   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:46.045108   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:46.544994   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:47.045987   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:44.671272   61267 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.476407392s)
	I0416 01:00:44.671304   61267 crio.go:469] duration metric: took 2.476532286s to extract the tarball
	I0416 01:00:44.671315   61267 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0416 01:00:44.709451   61267 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 01:00:44.754382   61267 crio.go:514] all images are preloaded for cri-o runtime.
	I0416 01:00:44.754412   61267 cache_images.go:84] Images are preloaded, skipping loading
	I0416 01:00:44.754424   61267 kubeadm.go:928] updating node { 192.168.50.216 8444 v1.29.3 crio true true} ...
	I0416 01:00:44.754543   61267 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-653942 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.216
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-653942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 01:00:44.754613   61267 ssh_runner.go:195] Run: crio config
	I0416 01:00:44.806896   61267 cni.go:84] Creating CNI manager for ""
	I0416 01:00:44.806918   61267 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 01:00:44.806926   61267 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 01:00:44.806957   61267 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.216 APIServerPort:8444 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-653942 NodeName:default-k8s-diff-port-653942 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.216"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.216 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 01:00:44.807089   61267 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.216
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-653942"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.216
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.216"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0416 01:00:44.807144   61267 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0416 01:00:44.821347   61267 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 01:00:44.821425   61267 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0416 01:00:44.835415   61267 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0416 01:00:44.855797   61267 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 01:00:44.873694   61267 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0416 01:00:44.892535   61267 ssh_runner.go:195] Run: grep 192.168.50.216	control-plane.minikube.internal$ /etc/hosts
	I0416 01:00:44.896538   61267 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.216	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 01:00:44.909516   61267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 01:00:45.024588   61267 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 01:00:45.055414   61267 certs.go:68] Setting up /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942 for IP: 192.168.50.216
	I0416 01:00:45.055440   61267 certs.go:194] generating shared ca certs ...
	I0416 01:00:45.055460   61267 certs.go:226] acquiring lock for ca certs: {Name:mkcfa1570e683d94647c63485e1bbb8cf0788316 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:00:45.055622   61267 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key
	I0416 01:00:45.055680   61267 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key
	I0416 01:00:45.055695   61267 certs.go:256] generating profile certs ...
	I0416 01:00:45.055815   61267 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/client.key
	I0416 01:00:45.055905   61267 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/apiserver.key.6620f6bf
	I0416 01:00:45.055975   61267 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/proxy-client.key
	I0416 01:00:45.056139   61267 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem (1338 bytes)
	W0416 01:00:45.056185   61267 certs.go:480] ignoring /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897_empty.pem, impossibly tiny 0 bytes
	I0416 01:00:45.056195   61267 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem (1679 bytes)
	I0416 01:00:45.056234   61267 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem (1082 bytes)
	I0416 01:00:45.056268   61267 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem (1123 bytes)
	I0416 01:00:45.056295   61267 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem (1675 bytes)
	I0416 01:00:45.056355   61267 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem (1708 bytes)
	I0416 01:00:45.057033   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 01:00:45.091704   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 01:00:45.154257   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 01:00:45.181077   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0416 01:00:45.222401   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0416 01:00:45.248568   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0416 01:00:45.277927   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 01:00:45.310417   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0416 01:00:45.341109   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 01:00:45.367056   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem --> /usr/share/ca-certificates/14897.pem (1338 bytes)
	I0416 01:00:45.395117   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /usr/share/ca-certificates/148972.pem (1708 bytes)
	I0416 01:00:45.421921   61267 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 01:00:45.440978   61267 ssh_runner.go:195] Run: openssl version
	I0416 01:00:45.447132   61267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148972.pem && ln -fs /usr/share/ca-certificates/148972.pem /etc/ssl/certs/148972.pem"
	I0416 01:00:45.460008   61267 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148972.pem
	I0416 01:00:45.464820   61267 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 23:49 /usr/share/ca-certificates/148972.pem
	I0416 01:00:45.464884   61267 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148972.pem
	I0416 01:00:45.471232   61267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148972.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 01:00:45.482567   61267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 01:00:45.493541   61267 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:00:45.498792   61267 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:00:45.498849   61267 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:00:45.505511   61267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 01:00:45.517533   61267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14897.pem && ln -fs /usr/share/ca-certificates/14897.pem /etc/ssl/certs/14897.pem"
	I0416 01:00:45.529908   61267 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14897.pem
	I0416 01:00:45.535120   61267 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 23:49 /usr/share/ca-certificates/14897.pem
	I0416 01:00:45.535181   61267 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14897.pem
	I0416 01:00:45.541232   61267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14897.pem /etc/ssl/certs/51391683.0"
	I0416 01:00:45.552946   61267 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 01:00:45.559947   61267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0416 01:00:45.567567   61267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0416 01:00:45.575204   61267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0416 01:00:45.582057   61267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0416 01:00:45.588418   61267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0416 01:00:45.595517   61267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0416 01:00:45.602108   61267 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-653942 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.29.3 ClusterName:default-k8s-diff-port-653942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.216 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 01:00:45.602213   61267 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0416 01:00:45.602256   61267 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 01:00:45.639538   61267 cri.go:89] found id: ""
	I0416 01:00:45.639621   61267 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0416 01:00:45.651216   61267 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0416 01:00:45.651245   61267 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0416 01:00:45.651252   61267 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0416 01:00:45.651307   61267 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0416 01:00:45.662522   61267 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0416 01:00:45.663697   61267 kubeconfig.go:125] found "default-k8s-diff-port-653942" server: "https://192.168.50.216:8444"
	I0416 01:00:45.666034   61267 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0416 01:00:45.675864   61267 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.216
	I0416 01:00:45.675900   61267 kubeadm.go:1154] stopping kube-system containers ...
	I0416 01:00:45.675927   61267 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0416 01:00:45.675992   61267 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 01:00:45.718679   61267 cri.go:89] found id: ""
	I0416 01:00:45.718744   61267 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0416 01:00:45.737326   61267 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 01:00:45.748122   61267 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 01:00:45.748146   61267 kubeadm.go:156] found existing configuration files:
	
	I0416 01:00:45.748200   61267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0416 01:00:45.758556   61267 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 01:00:45.758618   61267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 01:00:45.769601   61267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0416 01:00:45.779361   61267 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 01:00:45.779424   61267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 01:00:45.789283   61267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0416 01:00:45.798712   61267 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 01:00:45.798805   61267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 01:00:45.808489   61267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0416 01:00:45.817400   61267 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 01:00:45.817469   61267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 01:00:45.827902   61267 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 01:00:45.838031   61267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:45.962948   61267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:46.862340   61267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:47.092144   61267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:47.170078   61267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:47.284634   61267 api_server.go:52] waiting for apiserver process to appear ...
	I0416 01:00:47.284719   61267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:44.830534   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:47.474148   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:44.100441   62747 pod_ready.go:102] pod "kube-apiserver-embed-certs-617092" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:47.472666   62747 pod_ready.go:102] pod "kube-apiserver-embed-certs-617092" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:47.599694   62747 pod_ready.go:92] pod "kube-apiserver-embed-certs-617092" in "kube-system" namespace has status "Ready":"True"
	I0416 01:00:47.599722   62747 pod_ready.go:81] duration metric: took 5.507276982s for pod "kube-apiserver-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:47.599734   62747 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:47.604479   62747 pod_ready.go:92] pod "kube-controller-manager-embed-certs-617092" in "kube-system" namespace has status "Ready":"True"
	I0416 01:00:47.604496   62747 pod_ready.go:81] duration metric: took 4.755735ms for pod "kube-controller-manager-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:47.604504   62747 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-xtdf4" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:47.608936   62747 pod_ready.go:92] pod "kube-proxy-xtdf4" in "kube-system" namespace has status "Ready":"True"
	I0416 01:00:47.608951   62747 pod_ready.go:81] duration metric: took 4.441482ms for pod "kube-proxy-xtdf4" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:47.608959   62747 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:47.613108   62747 pod_ready.go:92] pod "kube-scheduler-embed-certs-617092" in "kube-system" namespace has status "Ready":"True"
	I0416 01:00:47.613123   62747 pod_ready.go:81] duration metric: took 4.157722ms for pod "kube-scheduler-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:47.613130   62747 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:47.545567   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:48.045898   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:48.545631   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:49.045678   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:49.545274   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:50.045281   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:50.545926   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:51.045076   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:51.545303   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:52.045271   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:47.785698   61267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:48.284828   61267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:48.315894   61267 api_server.go:72] duration metric: took 1.031258915s to wait for apiserver process to appear ...
	I0416 01:00:48.315925   61267 api_server.go:88] waiting for apiserver healthz status ...
	I0416 01:00:48.315950   61267 api_server.go:253] Checking apiserver healthz at https://192.168.50.216:8444/healthz ...
	I0416 01:00:51.781922   61267 api_server.go:279] https://192.168.50.216:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0416 01:00:51.781957   61267 api_server.go:103] status: https://192.168.50.216:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0416 01:00:51.781976   61267 api_server.go:253] Checking apiserver healthz at https://192.168.50.216:8444/healthz ...
	I0416 01:00:51.830460   61267 api_server.go:279] https://192.168.50.216:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0416 01:00:51.830491   61267 api_server.go:103] status: https://192.168.50.216:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0416 01:00:51.830505   61267 api_server.go:253] Checking apiserver healthz at https://192.168.50.216:8444/healthz ...
	I0416 01:00:51.858205   61267 api_server.go:279] https://192.168.50.216:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0416 01:00:51.858240   61267 api_server.go:103] status: https://192.168.50.216:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0416 01:00:52.316376   61267 api_server.go:253] Checking apiserver healthz at https://192.168.50.216:8444/healthz ...
	I0416 01:00:52.332667   61267 api_server.go:279] https://192.168.50.216:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0416 01:00:52.332700   61267 api_server.go:103] status: https://192.168.50.216:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0416 01:00:49.829236   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:52.329805   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:49.620626   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:51.620730   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:52.816565   61267 api_server.go:253] Checking apiserver healthz at https://192.168.50.216:8444/healthz ...
	I0416 01:00:52.827158   61267 api_server.go:279] https://192.168.50.216:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0416 01:00:52.827191   61267 api_server.go:103] status: https://192.168.50.216:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0416 01:00:53.316864   61267 api_server.go:253] Checking apiserver healthz at https://192.168.50.216:8444/healthz ...
	I0416 01:00:53.321112   61267 api_server.go:279] https://192.168.50.216:8444/healthz returned 200:
	ok
	I0416 01:00:53.329289   61267 api_server.go:141] control plane version: v1.29.3
	I0416 01:00:53.329320   61267 api_server.go:131] duration metric: took 5.013387579s to wait for apiserver health ...
	I0416 01:00:53.329331   61267 cni.go:84] Creating CNI manager for ""
	I0416 01:00:53.329340   61267 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 01:00:53.331125   61267 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0416 01:00:52.545407   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:53.044961   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:53.545290   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:54.044994   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:54.545292   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:55.045285   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:55.545909   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:56.045029   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:56.545343   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:57.044988   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:53.332626   61267 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0416 01:00:53.366364   61267 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0416 01:00:53.401881   61267 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 01:00:53.413478   61267 system_pods.go:59] 8 kube-system pods found
	I0416 01:00:53.413512   61267 system_pods.go:61] "coredns-76f75df574-cvlpq" [c200d470-26dd-40ea-a79b-29d9104122bb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 01:00:53.413527   61267 system_pods.go:61] "etcd-default-k8s-diff-port-653942" [24e85fc2-fb57-4ef6-9817-846207109e61] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0416 01:00:53.413537   61267 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-653942" [bd473e94-72a6-4391-b787-49e16e8a213f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0416 01:00:53.413547   61267 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-653942" [31ed7183-a12b-422c-9e67-bba91147347a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0416 01:00:53.413555   61267 system_pods.go:61] "kube-proxy-6q9k7" [ba6d9cf9-37a5-4e01-9489-ce7395fd2a38] Running
	I0416 01:00:53.413563   61267 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-653942" [4b481275-4ded-4251-963f-910954f10d15] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0416 01:00:53.413579   61267 system_pods.go:61] "metrics-server-57f55c9bc5-9cnv2" [24905ded-5bf8-4b34-8069-2e65c5ad8f8d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 01:00:53.413592   61267 system_pods.go:61] "storage-provisioner" [16ba28d0-2031-4c21-9c22-1b9289517449] Running
	I0416 01:00:53.413601   61267 system_pods.go:74] duration metric: took 11.695334ms to wait for pod list to return data ...
	I0416 01:00:53.413613   61267 node_conditions.go:102] verifying NodePressure condition ...
	I0416 01:00:53.417579   61267 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 01:00:53.417609   61267 node_conditions.go:123] node cpu capacity is 2
	I0416 01:00:53.417623   61267 node_conditions.go:105] duration metric: took 4.002735ms to run NodePressure ...
	I0416 01:00:53.417642   61267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:53.688389   61267 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0416 01:00:53.692755   61267 kubeadm.go:733] kubelet initialised
	I0416 01:00:53.692777   61267 kubeadm.go:734] duration metric: took 4.359298ms waiting for restarted kubelet to initialise ...
	I0416 01:00:53.692784   61267 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:00:53.698521   61267 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-cvlpq" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:53.704496   61267 pod_ready.go:97] node "default-k8s-diff-port-653942" hosting pod "coredns-76f75df574-cvlpq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653942" has status "Ready":"False"
	I0416 01:00:53.704532   61267 pod_ready.go:81] duration metric: took 5.98382ms for pod "coredns-76f75df574-cvlpq" in "kube-system" namespace to be "Ready" ...
	E0416 01:00:53.704543   61267 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-653942" hosting pod "coredns-76f75df574-cvlpq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653942" has status "Ready":"False"
	I0416 01:00:53.704550   61267 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:53.713110   61267 pod_ready.go:97] node "default-k8s-diff-port-653942" hosting pod "etcd-default-k8s-diff-port-653942" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653942" has status "Ready":"False"
	I0416 01:00:53.713144   61267 pod_ready.go:81] duration metric: took 8.58568ms for pod "etcd-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	E0416 01:00:53.713188   61267 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-653942" hosting pod "etcd-default-k8s-diff-port-653942" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653942" has status "Ready":"False"
	I0416 01:00:53.713201   61267 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:53.718190   61267 pod_ready.go:97] node "default-k8s-diff-port-653942" hosting pod "kube-apiserver-default-k8s-diff-port-653942" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653942" has status "Ready":"False"
	I0416 01:00:53.718210   61267 pod_ready.go:81] duration metric: took 4.997527ms for pod "kube-apiserver-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	E0416 01:00:53.718219   61267 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-653942" hosting pod "kube-apiserver-default-k8s-diff-port-653942" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653942" has status "Ready":"False"
	I0416 01:00:53.718224   61267 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:53.805697   61267 pod_ready.go:97] node "default-k8s-diff-port-653942" hosting pod "kube-controller-manager-default-k8s-diff-port-653942" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653942" has status "Ready":"False"
	I0416 01:00:53.805727   61267 pod_ready.go:81] duration metric: took 87.493805ms for pod "kube-controller-manager-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	E0416 01:00:53.805738   61267 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-653942" hosting pod "kube-controller-manager-default-k8s-diff-port-653942" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653942" has status "Ready":"False"
	I0416 01:00:53.805743   61267 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6q9k7" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:54.205884   61267 pod_ready.go:92] pod "kube-proxy-6q9k7" in "kube-system" namespace has status "Ready":"True"
	I0416 01:00:54.205911   61267 pod_ready.go:81] duration metric: took 400.161115ms for pod "kube-proxy-6q9k7" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:54.205921   61267 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:56.213276   61267 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:54.829391   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:57.330218   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:54.119995   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:56.121220   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:57.545333   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:58.045305   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:58.545871   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:59.045432   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:59.545000   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:00.045001   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:00.545855   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:01.045812   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:01.545477   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:02.045635   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:58.215064   61267 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:00.215192   61267 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:59.330599   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:01.831017   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:58.620594   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:01.120516   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:02.545690   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:03.045754   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:03.544965   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:04.045062   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:04.545196   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:05.045986   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:05.545246   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:06.045853   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:06.545863   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:07.045209   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:02.712971   61267 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:04.713437   61267 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:07.212886   61267 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:04.328673   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:06.329726   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:03.124343   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:05.619912   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:07.622044   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:07.544952   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:08.045290   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:08.545296   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:09.045795   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:09.545932   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:10.045124   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:10.045209   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:10.087200   62139 cri.go:89] found id: ""
	I0416 01:01:10.087229   62139 logs.go:276] 0 containers: []
	W0416 01:01:10.087237   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:10.087243   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:10.087300   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:10.126194   62139 cri.go:89] found id: ""
	I0416 01:01:10.126218   62139 logs.go:276] 0 containers: []
	W0416 01:01:10.126225   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:10.126230   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:10.126275   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:10.165238   62139 cri.go:89] found id: ""
	I0416 01:01:10.165271   62139 logs.go:276] 0 containers: []
	W0416 01:01:10.165282   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:10.165290   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:10.165357   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:10.202896   62139 cri.go:89] found id: ""
	I0416 01:01:10.202934   62139 logs.go:276] 0 containers: []
	W0416 01:01:10.202945   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:10.202952   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:10.203015   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:10.243576   62139 cri.go:89] found id: ""
	I0416 01:01:10.243605   62139 logs.go:276] 0 containers: []
	W0416 01:01:10.243613   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:10.243619   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:10.243667   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:10.278637   62139 cri.go:89] found id: ""
	I0416 01:01:10.278661   62139 logs.go:276] 0 containers: []
	W0416 01:01:10.278669   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:10.278674   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:10.278726   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:10.316811   62139 cri.go:89] found id: ""
	I0416 01:01:10.316844   62139 logs.go:276] 0 containers: []
	W0416 01:01:10.316852   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:10.316857   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:10.316914   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:10.359934   62139 cri.go:89] found id: ""
	I0416 01:01:10.359960   62139 logs.go:276] 0 containers: []
	W0416 01:01:10.359967   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:10.359975   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:10.359987   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:10.413082   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:10.413119   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:10.428605   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:10.428632   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:10.552536   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:10.552561   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:10.552578   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:10.615054   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:10.615091   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:08.213557   61267 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"True"
	I0416 01:01:08.213584   61267 pod_ready.go:81] duration metric: took 14.007657025s for pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:01:08.213594   61267 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace to be "Ready" ...
	I0416 01:01:10.224984   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:08.831515   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:11.330529   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:10.122213   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:12.621939   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:13.160749   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:13.178449   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:13.178505   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:13.224192   62139 cri.go:89] found id: ""
	I0416 01:01:13.224215   62139 logs.go:276] 0 containers: []
	W0416 01:01:13.224222   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:13.224228   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:13.224287   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:13.261441   62139 cri.go:89] found id: ""
	I0416 01:01:13.261469   62139 logs.go:276] 0 containers: []
	W0416 01:01:13.261476   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:13.261481   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:13.261545   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:13.296602   62139 cri.go:89] found id: ""
	I0416 01:01:13.296636   62139 logs.go:276] 0 containers: []
	W0416 01:01:13.296647   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:13.296654   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:13.296720   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:13.333944   62139 cri.go:89] found id: ""
	I0416 01:01:13.333968   62139 logs.go:276] 0 containers: []
	W0416 01:01:13.333977   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:13.333984   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:13.334049   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:13.372919   62139 cri.go:89] found id: ""
	I0416 01:01:13.372944   62139 logs.go:276] 0 containers: []
	W0416 01:01:13.372957   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:13.372965   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:13.373022   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:13.413257   62139 cri.go:89] found id: ""
	I0416 01:01:13.413287   62139 logs.go:276] 0 containers: []
	W0416 01:01:13.413299   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:13.413306   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:13.413373   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:13.451705   62139 cri.go:89] found id: ""
	I0416 01:01:13.451737   62139 logs.go:276] 0 containers: []
	W0416 01:01:13.451748   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:13.451755   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:13.451836   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:13.492549   62139 cri.go:89] found id: ""
	I0416 01:01:13.492576   62139 logs.go:276] 0 containers: []
	W0416 01:01:13.492586   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:13.492597   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:13.492613   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:13.547267   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:13.547303   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:13.568975   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:13.569002   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:13.674444   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:13.674469   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:13.674482   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:13.745111   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:13.745145   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:16.286955   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:16.301151   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:16.301257   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:16.337516   62139 cri.go:89] found id: ""
	I0416 01:01:16.337544   62139 logs.go:276] 0 containers: []
	W0416 01:01:16.337554   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:16.337561   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:16.337623   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:16.372674   62139 cri.go:89] found id: ""
	I0416 01:01:16.372702   62139 logs.go:276] 0 containers: []
	W0416 01:01:16.372712   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:16.372720   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:16.372783   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:16.411181   62139 cri.go:89] found id: ""
	I0416 01:01:16.411208   62139 logs.go:276] 0 containers: []
	W0416 01:01:16.411224   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:16.411230   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:16.411283   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:16.449063   62139 cri.go:89] found id: ""
	I0416 01:01:16.449102   62139 logs.go:276] 0 containers: []
	W0416 01:01:16.449109   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:16.449114   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:16.449183   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:16.491877   62139 cri.go:89] found id: ""
	I0416 01:01:16.491909   62139 logs.go:276] 0 containers: []
	W0416 01:01:16.491918   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:16.491924   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:16.491981   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:16.532522   62139 cri.go:89] found id: ""
	I0416 01:01:16.532553   62139 logs.go:276] 0 containers: []
	W0416 01:01:16.532564   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:16.532572   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:16.532633   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:16.572194   62139 cri.go:89] found id: ""
	I0416 01:01:16.572222   62139 logs.go:276] 0 containers: []
	W0416 01:01:16.572233   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:16.572240   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:16.572302   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:16.614671   62139 cri.go:89] found id: ""
	I0416 01:01:16.614697   62139 logs.go:276] 0 containers: []
	W0416 01:01:16.614704   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:16.614712   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:16.614726   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:16.632146   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:16.632179   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:16.707597   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:16.707621   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:16.707633   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:16.783604   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:16.783640   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:16.828937   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:16.828977   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:12.721088   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:15.220256   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:17.222263   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:13.830983   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:16.329120   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:15.119386   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:17.120038   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:19.385008   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:19.400949   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:19.401035   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:19.463792   62139 cri.go:89] found id: ""
	I0416 01:01:19.463825   62139 logs.go:276] 0 containers: []
	W0416 01:01:19.463836   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:19.463843   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:19.463910   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:19.523289   62139 cri.go:89] found id: ""
	I0416 01:01:19.523322   62139 logs.go:276] 0 containers: []
	W0416 01:01:19.523332   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:19.523340   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:19.523392   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:19.558891   62139 cri.go:89] found id: ""
	I0416 01:01:19.558928   62139 logs.go:276] 0 containers: []
	W0416 01:01:19.558939   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:19.558946   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:19.559009   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:19.597876   62139 cri.go:89] found id: ""
	I0416 01:01:19.597905   62139 logs.go:276] 0 containers: []
	W0416 01:01:19.597917   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:19.597925   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:19.597980   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:19.637536   62139 cri.go:89] found id: ""
	I0416 01:01:19.637563   62139 logs.go:276] 0 containers: []
	W0416 01:01:19.637571   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:19.637576   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:19.637623   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:19.674414   62139 cri.go:89] found id: ""
	I0416 01:01:19.674447   62139 logs.go:276] 0 containers: []
	W0416 01:01:19.674458   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:19.674465   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:19.674525   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:19.709717   62139 cri.go:89] found id: ""
	I0416 01:01:19.709751   62139 logs.go:276] 0 containers: []
	W0416 01:01:19.709761   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:19.709769   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:19.709837   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:19.747458   62139 cri.go:89] found id: ""
	I0416 01:01:19.747482   62139 logs.go:276] 0 containers: []
	W0416 01:01:19.747489   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:19.747505   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:19.747523   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:19.834811   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:19.834846   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:19.876398   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:19.876428   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:19.931596   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:19.931632   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:19.947074   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:19.947103   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:20.023434   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:19.720883   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:21.721969   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:18.829276   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:20.829405   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:19.120254   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:21.120520   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:22.524036   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:22.539399   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:22.539488   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:22.574696   62139 cri.go:89] found id: ""
	I0416 01:01:22.574723   62139 logs.go:276] 0 containers: []
	W0416 01:01:22.574733   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:22.574741   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:22.574805   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:22.617474   62139 cri.go:89] found id: ""
	I0416 01:01:22.617503   62139 logs.go:276] 0 containers: []
	W0416 01:01:22.617514   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:22.617521   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:22.617579   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:22.657744   62139 cri.go:89] found id: ""
	I0416 01:01:22.657773   62139 logs.go:276] 0 containers: []
	W0416 01:01:22.657781   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:22.657786   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:22.657842   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:22.695513   62139 cri.go:89] found id: ""
	I0416 01:01:22.695544   62139 logs.go:276] 0 containers: []
	W0416 01:01:22.695552   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:22.695557   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:22.695606   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:22.732943   62139 cri.go:89] found id: ""
	I0416 01:01:22.732973   62139 logs.go:276] 0 containers: []
	W0416 01:01:22.732983   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:22.732990   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:22.733051   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:22.768735   62139 cri.go:89] found id: ""
	I0416 01:01:22.768767   62139 logs.go:276] 0 containers: []
	W0416 01:01:22.768775   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:22.768782   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:22.768842   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:22.804330   62139 cri.go:89] found id: ""
	I0416 01:01:22.804352   62139 logs.go:276] 0 containers: []
	W0416 01:01:22.804361   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:22.804367   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:22.804425   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:22.842165   62139 cri.go:89] found id: ""
	I0416 01:01:22.842192   62139 logs.go:276] 0 containers: []
	W0416 01:01:22.842199   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:22.842207   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:22.842219   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:22.921859   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:22.921880   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:22.921893   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:23.003432   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:23.003468   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:23.045446   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:23.045476   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:23.097327   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:23.097358   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:25.612297   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:25.627489   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:25.627565   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:25.664040   62139 cri.go:89] found id: ""
	I0416 01:01:25.664072   62139 logs.go:276] 0 containers: []
	W0416 01:01:25.664083   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:25.664091   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:25.664149   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:25.701004   62139 cri.go:89] found id: ""
	I0416 01:01:25.701029   62139 logs.go:276] 0 containers: []
	W0416 01:01:25.701036   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:25.701042   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:25.701087   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:25.740108   62139 cri.go:89] found id: ""
	I0416 01:01:25.740136   62139 logs.go:276] 0 containers: []
	W0416 01:01:25.740144   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:25.740150   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:25.740194   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:25.778413   62139 cri.go:89] found id: ""
	I0416 01:01:25.778447   62139 logs.go:276] 0 containers: []
	W0416 01:01:25.778458   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:25.778465   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:25.778530   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:25.815188   62139 cri.go:89] found id: ""
	I0416 01:01:25.815215   62139 logs.go:276] 0 containers: []
	W0416 01:01:25.815223   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:25.815230   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:25.815277   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:25.856370   62139 cri.go:89] found id: ""
	I0416 01:01:25.856402   62139 logs.go:276] 0 containers: []
	W0416 01:01:25.856410   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:25.856416   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:25.856476   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:25.895363   62139 cri.go:89] found id: ""
	I0416 01:01:25.895388   62139 logs.go:276] 0 containers: []
	W0416 01:01:25.895396   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:25.895402   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:25.895455   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:25.931854   62139 cri.go:89] found id: ""
	I0416 01:01:25.931881   62139 logs.go:276] 0 containers: []
	W0416 01:01:25.931889   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:25.931897   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:25.931923   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:26.008395   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:26.008419   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:26.008436   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:26.087946   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:26.087983   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:26.134693   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:26.134725   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:26.189618   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:26.189652   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:24.220798   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:26.221193   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:22.833917   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:25.331147   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:27.331702   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:23.620819   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:25.621119   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:28.705010   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:28.719575   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:28.719644   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:28.759011   62139 cri.go:89] found id: ""
	I0416 01:01:28.759037   62139 logs.go:276] 0 containers: []
	W0416 01:01:28.759044   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:28.759050   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:28.759112   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:28.794640   62139 cri.go:89] found id: ""
	I0416 01:01:28.794675   62139 logs.go:276] 0 containers: []
	W0416 01:01:28.794687   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:28.794695   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:28.794807   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:28.835634   62139 cri.go:89] found id: ""
	I0416 01:01:28.835663   62139 logs.go:276] 0 containers: []
	W0416 01:01:28.835674   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:28.835681   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:28.835747   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:28.875384   62139 cri.go:89] found id: ""
	I0416 01:01:28.875408   62139 logs.go:276] 0 containers: []
	W0416 01:01:28.875426   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:28.875433   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:28.875484   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:28.921202   62139 cri.go:89] found id: ""
	I0416 01:01:28.921234   62139 logs.go:276] 0 containers: []
	W0416 01:01:28.921244   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:28.921252   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:28.921314   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:28.958791   62139 cri.go:89] found id: ""
	I0416 01:01:28.958820   62139 logs.go:276] 0 containers: []
	W0416 01:01:28.958828   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:28.958834   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:28.958923   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:28.996136   62139 cri.go:89] found id: ""
	I0416 01:01:28.996168   62139 logs.go:276] 0 containers: []
	W0416 01:01:28.996179   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:28.996185   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:28.996259   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:29.033912   62139 cri.go:89] found id: ""
	I0416 01:01:29.033939   62139 logs.go:276] 0 containers: []
	W0416 01:01:29.033946   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:29.033954   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:29.033969   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:29.114162   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:29.114209   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:29.153934   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:29.153965   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:29.207548   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:29.207584   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:29.222158   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:29.222184   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:29.297414   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:31.798026   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:31.812740   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:31.812815   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:31.855058   62139 cri.go:89] found id: ""
	I0416 01:01:31.855087   62139 logs.go:276] 0 containers: []
	W0416 01:01:31.855098   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:31.855105   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:31.855172   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:31.897128   62139 cri.go:89] found id: ""
	I0416 01:01:31.897170   62139 logs.go:276] 0 containers: []
	W0416 01:01:31.897192   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:31.897200   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:31.897259   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:31.934497   62139 cri.go:89] found id: ""
	I0416 01:01:31.934520   62139 logs.go:276] 0 containers: []
	W0416 01:01:31.934532   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:31.934541   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:31.934588   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:31.974020   62139 cri.go:89] found id: ""
	I0416 01:01:31.974051   62139 logs.go:276] 0 containers: []
	W0416 01:01:31.974062   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:31.974093   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:31.974163   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:32.015433   62139 cri.go:89] found id: ""
	I0416 01:01:32.015460   62139 logs.go:276] 0 containers: []
	W0416 01:01:32.015471   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:32.015477   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:32.015540   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:32.058286   62139 cri.go:89] found id: ""
	I0416 01:01:32.058336   62139 logs.go:276] 0 containers: []
	W0416 01:01:32.058345   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:32.058351   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:32.058408   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:28.720596   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:30.720732   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:29.828996   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:31.830765   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:28.121038   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:30.619604   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:32.620210   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
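	The interleaved pod_ready lines (PIDs 61267, 61500 and 62747) come from the other profiles in this run, each polling its metrics-server pod until the pod's Ready condition turns True. A hedged one-liner that inspects the same condition for one of the pods named above (pod name taken verbatim from the log; the jsonpath expression is an assumed equivalent, not the test's own code):

		kubectl -n kube-system get pod metrics-server-57f55c9bc5-knnvn \
		  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # prints False until the pod is ready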
	I0416 01:01:32.100331   62139 cri.go:89] found id: ""
	I0416 01:01:32.102041   62139 logs.go:276] 0 containers: []
	W0416 01:01:32.102054   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:32.102061   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:32.102115   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:32.141420   62139 cri.go:89] found id: ""
	I0416 01:01:32.141446   62139 logs.go:276] 0 containers: []
	W0416 01:01:32.141454   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:32.141462   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:32.141473   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:32.195323   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:32.195364   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:32.210180   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:32.210206   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:32.282548   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:32.282570   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:32.282585   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:32.360627   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:32.360663   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:34.901239   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:34.917097   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:34.917205   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:34.959297   62139 cri.go:89] found id: ""
	I0416 01:01:34.959327   62139 logs.go:276] 0 containers: []
	W0416 01:01:34.959337   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:34.959344   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:34.959422   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:35.000927   62139 cri.go:89] found id: ""
	I0416 01:01:35.000974   62139 logs.go:276] 0 containers: []
	W0416 01:01:35.000984   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:35.001000   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:35.001064   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:35.038049   62139 cri.go:89] found id: ""
	I0416 01:01:35.038073   62139 logs.go:276] 0 containers: []
	W0416 01:01:35.038082   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:35.038090   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:35.038143   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:35.075396   62139 cri.go:89] found id: ""
	I0416 01:01:35.075467   62139 logs.go:276] 0 containers: []
	W0416 01:01:35.075481   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:35.075490   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:35.075591   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:35.114297   62139 cri.go:89] found id: ""
	I0416 01:01:35.114325   62139 logs.go:276] 0 containers: []
	W0416 01:01:35.114335   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:35.114343   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:35.114405   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:35.152075   62139 cri.go:89] found id: ""
	I0416 01:01:35.152099   62139 logs.go:276] 0 containers: []
	W0416 01:01:35.152106   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:35.152112   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:35.152161   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:35.187945   62139 cri.go:89] found id: ""
	I0416 01:01:35.187974   62139 logs.go:276] 0 containers: []
	W0416 01:01:35.187984   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:35.187991   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:35.188057   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:35.225225   62139 cri.go:89] found id: ""
	I0416 01:01:35.225253   62139 logs.go:276] 0 containers: []
	W0416 01:01:35.225262   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:35.225272   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:35.225287   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:35.279584   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:35.279628   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:35.293416   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:35.293456   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:35.370122   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:35.370147   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:35.370159   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:35.451482   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:35.451517   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:32.723226   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:35.221390   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:34.329009   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:36.329761   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:34.620492   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:36.620527   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:37.994358   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:38.008209   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:38.008277   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:38.047905   62139 cri.go:89] found id: ""
	I0416 01:01:38.047943   62139 logs.go:276] 0 containers: []
	W0416 01:01:38.047955   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:38.047962   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:38.048016   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:38.085749   62139 cri.go:89] found id: ""
	I0416 01:01:38.085780   62139 logs.go:276] 0 containers: []
	W0416 01:01:38.085790   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:38.085797   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:38.085864   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:38.122396   62139 cri.go:89] found id: ""
	I0416 01:01:38.122419   62139 logs.go:276] 0 containers: []
	W0416 01:01:38.122427   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:38.122432   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:38.122479   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:38.159284   62139 cri.go:89] found id: ""
	I0416 01:01:38.159313   62139 logs.go:276] 0 containers: []
	W0416 01:01:38.159322   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:38.159329   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:38.159390   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:38.193245   62139 cri.go:89] found id: ""
	I0416 01:01:38.193280   62139 logs.go:276] 0 containers: []
	W0416 01:01:38.193291   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:38.193298   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:38.193362   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:38.229147   62139 cri.go:89] found id: ""
	I0416 01:01:38.229179   62139 logs.go:276] 0 containers: []
	W0416 01:01:38.229188   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:38.229194   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:38.229251   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:38.267285   62139 cri.go:89] found id: ""
	I0416 01:01:38.267309   62139 logs.go:276] 0 containers: []
	W0416 01:01:38.267317   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:38.267321   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:38.267389   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:38.305181   62139 cri.go:89] found id: ""
	I0416 01:01:38.305207   62139 logs.go:276] 0 containers: []
	W0416 01:01:38.305215   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:38.305222   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:38.305237   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:38.321714   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:38.321742   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:38.398352   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:38.398372   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:38.398382   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:38.474095   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:38.474129   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:38.520540   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:38.520581   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:41.072083   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:41.086767   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:41.086860   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:41.125119   62139 cri.go:89] found id: ""
	I0416 01:01:41.125149   62139 logs.go:276] 0 containers: []
	W0416 01:01:41.125175   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:41.125182   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:41.125253   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:41.159885   62139 cri.go:89] found id: ""
	I0416 01:01:41.159915   62139 logs.go:276] 0 containers: []
	W0416 01:01:41.159925   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:41.159931   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:41.160012   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:41.196334   62139 cri.go:89] found id: ""
	I0416 01:01:41.196366   62139 logs.go:276] 0 containers: []
	W0416 01:01:41.196377   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:41.196385   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:41.196447   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:41.234254   62139 cri.go:89] found id: ""
	I0416 01:01:41.234282   62139 logs.go:276] 0 containers: []
	W0416 01:01:41.234300   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:41.234319   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:41.234413   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:41.271499   62139 cri.go:89] found id: ""
	I0416 01:01:41.271523   62139 logs.go:276] 0 containers: []
	W0416 01:01:41.271531   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:41.271536   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:41.271604   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:41.311064   62139 cri.go:89] found id: ""
	I0416 01:01:41.311096   62139 logs.go:276] 0 containers: []
	W0416 01:01:41.311107   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:41.311114   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:41.311179   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:41.349012   62139 cri.go:89] found id: ""
	I0416 01:01:41.349043   62139 logs.go:276] 0 containers: []
	W0416 01:01:41.349053   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:41.349060   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:41.349117   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:41.385258   62139 cri.go:89] found id: ""
	I0416 01:01:41.385298   62139 logs.go:276] 0 containers: []
	W0416 01:01:41.385305   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:41.385315   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:41.385330   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:41.470086   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:41.470130   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:41.513835   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:41.513870   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:41.565980   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:41.566013   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:41.582647   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:41.582678   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:41.658928   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:37.724628   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:40.222025   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:38.329899   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:40.330143   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:39.120850   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:41.121383   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:44.159107   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:44.173015   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:44.173088   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:44.214310   62139 cri.go:89] found id: ""
	I0416 01:01:44.214345   62139 logs.go:276] 0 containers: []
	W0416 01:01:44.214363   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:44.214374   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:44.214462   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:44.256476   62139 cri.go:89] found id: ""
	I0416 01:01:44.256503   62139 logs.go:276] 0 containers: []
	W0416 01:01:44.256511   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:44.256516   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:44.256577   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:44.298047   62139 cri.go:89] found id: ""
	I0416 01:01:44.298079   62139 logs.go:276] 0 containers: []
	W0416 01:01:44.298089   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:44.298097   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:44.298158   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:44.339165   62139 cri.go:89] found id: ""
	I0416 01:01:44.339196   62139 logs.go:276] 0 containers: []
	W0416 01:01:44.339206   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:44.339213   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:44.339280   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:44.378078   62139 cri.go:89] found id: ""
	I0416 01:01:44.378108   62139 logs.go:276] 0 containers: []
	W0416 01:01:44.378116   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:44.378122   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:44.378170   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:44.421494   62139 cri.go:89] found id: ""
	I0416 01:01:44.421525   62139 logs.go:276] 0 containers: []
	W0416 01:01:44.421536   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:44.421543   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:44.421609   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:44.459919   62139 cri.go:89] found id: ""
	I0416 01:01:44.459948   62139 logs.go:276] 0 containers: []
	W0416 01:01:44.459958   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:44.459965   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:44.460025   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:44.499448   62139 cri.go:89] found id: ""
	I0416 01:01:44.499479   62139 logs.go:276] 0 containers: []
	W0416 01:01:44.499489   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:44.499500   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:44.499516   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:44.555122   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:44.555159   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:44.572048   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:44.572075   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:44.646252   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:44.646283   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:44.646299   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:44.730593   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:44.730620   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:42.720855   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:44.723141   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:46.723452   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:42.831045   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:45.329039   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:47.331355   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:43.619897   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:45.620068   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:47.620162   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:47.276658   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:47.291354   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:47.291431   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:47.334998   62139 cri.go:89] found id: ""
	I0416 01:01:47.335036   62139 logs.go:276] 0 containers: []
	W0416 01:01:47.335055   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:47.335062   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:47.335121   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:47.376546   62139 cri.go:89] found id: ""
	I0416 01:01:47.376575   62139 logs.go:276] 0 containers: []
	W0416 01:01:47.376582   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:47.376587   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:47.376647   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:47.418609   62139 cri.go:89] found id: ""
	I0416 01:01:47.418642   62139 logs.go:276] 0 containers: []
	W0416 01:01:47.418654   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:47.418661   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:47.418721   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:47.459432   62139 cri.go:89] found id: ""
	I0416 01:01:47.459458   62139 logs.go:276] 0 containers: []
	W0416 01:01:47.459465   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:47.459470   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:47.459518   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:47.497776   62139 cri.go:89] found id: ""
	I0416 01:01:47.497800   62139 logs.go:276] 0 containers: []
	W0416 01:01:47.497808   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:47.497813   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:47.497866   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:47.536803   62139 cri.go:89] found id: ""
	I0416 01:01:47.536835   62139 logs.go:276] 0 containers: []
	W0416 01:01:47.536842   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:47.536849   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:47.536916   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:47.575883   62139 cri.go:89] found id: ""
	I0416 01:01:47.575916   62139 logs.go:276] 0 containers: []
	W0416 01:01:47.575923   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:47.575931   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:47.575976   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:47.627676   62139 cri.go:89] found id: ""
	I0416 01:01:47.627697   62139 logs.go:276] 0 containers: []
	W0416 01:01:47.627703   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:47.627711   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:47.627725   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:47.669714   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:47.669745   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:47.721349   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:47.721389   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:47.735833   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:47.735859   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:47.806890   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:47.806913   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:47.806925   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:50.386960   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:50.400832   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:50.400901   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:50.443042   62139 cri.go:89] found id: ""
	I0416 01:01:50.443076   62139 logs.go:276] 0 containers: []
	W0416 01:01:50.443086   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:50.443094   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:50.443157   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:50.480495   62139 cri.go:89] found id: ""
	I0416 01:01:50.480526   62139 logs.go:276] 0 containers: []
	W0416 01:01:50.480536   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:50.480544   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:50.480602   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:50.516578   62139 cri.go:89] found id: ""
	I0416 01:01:50.516605   62139 logs.go:276] 0 containers: []
	W0416 01:01:50.516613   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:50.516618   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:50.516676   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:50.555302   62139 cri.go:89] found id: ""
	I0416 01:01:50.555330   62139 logs.go:276] 0 containers: []
	W0416 01:01:50.555337   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:50.555344   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:50.555388   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:50.594647   62139 cri.go:89] found id: ""
	I0416 01:01:50.594674   62139 logs.go:276] 0 containers: []
	W0416 01:01:50.594682   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:50.594688   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:50.594737   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:50.633401   62139 cri.go:89] found id: ""
	I0416 01:01:50.633428   62139 logs.go:276] 0 containers: []
	W0416 01:01:50.633436   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:50.633442   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:50.633501   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:50.673714   62139 cri.go:89] found id: ""
	I0416 01:01:50.673744   62139 logs.go:276] 0 containers: []
	W0416 01:01:50.673755   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:50.673763   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:50.673811   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:50.710103   62139 cri.go:89] found id: ""
	I0416 01:01:50.710127   62139 logs.go:276] 0 containers: []
	W0416 01:01:50.710134   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:50.710142   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:50.710153   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:50.765121   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:50.765168   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:50.780407   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:50.780436   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:50.855602   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:50.855635   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:50.855663   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:50.937249   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:50.937283   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:49.220483   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:51.724129   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:49.829742   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:52.330579   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:49.621383   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:52.120841   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:53.481261   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:53.495872   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:53.495931   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:53.532710   62139 cri.go:89] found id: ""
	I0416 01:01:53.532738   62139 logs.go:276] 0 containers: []
	W0416 01:01:53.532748   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:53.532756   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:53.532815   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:53.568734   62139 cri.go:89] found id: ""
	I0416 01:01:53.568763   62139 logs.go:276] 0 containers: []
	W0416 01:01:53.568770   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:53.568776   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:53.568841   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:53.608937   62139 cri.go:89] found id: ""
	I0416 01:01:53.608965   62139 logs.go:276] 0 containers: []
	W0416 01:01:53.608976   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:53.608984   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:53.609042   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:53.646538   62139 cri.go:89] found id: ""
	I0416 01:01:53.646573   62139 logs.go:276] 0 containers: []
	W0416 01:01:53.646585   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:53.646592   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:53.646657   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:53.687761   62139 cri.go:89] found id: ""
	I0416 01:01:53.687792   62139 logs.go:276] 0 containers: []
	W0416 01:01:53.687801   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:53.687809   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:53.687872   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:53.726126   62139 cri.go:89] found id: ""
	I0416 01:01:53.726161   62139 logs.go:276] 0 containers: []
	W0416 01:01:53.726169   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:53.726174   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:53.726224   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:53.762583   62139 cri.go:89] found id: ""
	I0416 01:01:53.762609   62139 logs.go:276] 0 containers: []
	W0416 01:01:53.762618   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:53.762625   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:53.762695   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:53.803685   62139 cri.go:89] found id: ""
	I0416 01:01:53.803715   62139 logs.go:276] 0 containers: []
	W0416 01:01:53.803726   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:53.803737   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:53.803751   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:53.862215   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:53.862255   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:53.877713   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:53.877743   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:53.953394   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:53.953422   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:53.953438   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:54.044657   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:54.044698   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:56.602100   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:56.616548   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:56.616632   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:56.653765   62139 cri.go:89] found id: ""
	I0416 01:01:56.653794   62139 logs.go:276] 0 containers: []
	W0416 01:01:56.653810   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:56.653817   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:56.653879   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:56.691394   62139 cri.go:89] found id: ""
	I0416 01:01:56.691416   62139 logs.go:276] 0 containers: []
	W0416 01:01:56.691422   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:56.691428   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:56.691475   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:56.728995   62139 cri.go:89] found id: ""
	I0416 01:01:56.729017   62139 logs.go:276] 0 containers: []
	W0416 01:01:56.729024   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:56.729029   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:56.729078   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:56.769119   62139 cri.go:89] found id: ""
	I0416 01:01:56.769184   62139 logs.go:276] 0 containers: []
	W0416 01:01:56.769196   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:56.769204   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:56.769270   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:56.810562   62139 cri.go:89] found id: ""
	I0416 01:01:56.810589   62139 logs.go:276] 0 containers: []
	W0416 01:01:56.810597   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:56.810608   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:56.810669   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:56.849367   62139 cri.go:89] found id: ""
	I0416 01:01:56.849392   62139 logs.go:276] 0 containers: []
	W0416 01:01:56.849399   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:56.849405   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:56.849464   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:56.887330   62139 cri.go:89] found id: ""
	I0416 01:01:56.887359   62139 logs.go:276] 0 containers: []
	W0416 01:01:56.887370   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:56.887378   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:56.887461   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:56.926636   62139 cri.go:89] found id: ""
	I0416 01:01:56.926664   62139 logs.go:276] 0 containers: []
	W0416 01:01:56.926672   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:56.926682   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:56.926697   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:56.981836   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:56.981875   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:56.996385   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:56.996411   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:57.071026   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:57.071054   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:57.071070   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:54.219668   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:56.221212   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:54.829549   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:56.831452   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:54.619864   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:56.620968   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:57.155430   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:57.155466   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:59.701547   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:59.714465   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:59.714526   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:59.759791   62139 cri.go:89] found id: ""
	I0416 01:01:59.759830   62139 logs.go:276] 0 containers: []
	W0416 01:01:59.759841   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:59.759849   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:59.759914   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:59.813303   62139 cri.go:89] found id: ""
	I0416 01:01:59.813334   62139 logs.go:276] 0 containers: []
	W0416 01:01:59.813343   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:59.813353   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:59.813406   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:59.872291   62139 cri.go:89] found id: ""
	I0416 01:01:59.872328   62139 logs.go:276] 0 containers: []
	W0416 01:01:59.872338   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:59.872347   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:59.872423   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:59.910397   62139 cri.go:89] found id: ""
	I0416 01:01:59.910425   62139 logs.go:276] 0 containers: []
	W0416 01:01:59.910437   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:59.910444   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:59.910512   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:59.953656   62139 cri.go:89] found id: ""
	I0416 01:01:59.953685   62139 logs.go:276] 0 containers: []
	W0416 01:01:59.953695   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:59.953703   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:59.953779   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:59.993193   62139 cri.go:89] found id: ""
	I0416 01:01:59.993220   62139 logs.go:276] 0 containers: []
	W0416 01:01:59.993229   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:59.993239   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:59.993298   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:00.030205   62139 cri.go:89] found id: ""
	I0416 01:02:00.030229   62139 logs.go:276] 0 containers: []
	W0416 01:02:00.030237   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:00.030242   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:00.030302   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:00.068160   62139 cri.go:89] found id: ""
	I0416 01:02:00.068189   62139 logs.go:276] 0 containers: []
	W0416 01:02:00.068199   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:00.068211   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:00.068226   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:00.149383   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:00.149416   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:00.188000   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:00.188025   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:00.240522   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:00.240550   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:00.254189   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:00.254215   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:00.331483   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:58.721272   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:01.220698   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:59.329440   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:01.830408   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:59.122269   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:01.619839   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:02.832656   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:02.846826   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:02.846907   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:02.883397   62139 cri.go:89] found id: ""
	I0416 01:02:02.883428   62139 logs.go:276] 0 containers: []
	W0416 01:02:02.883439   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:02.883446   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:02.883499   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:02.923686   62139 cri.go:89] found id: ""
	I0416 01:02:02.923708   62139 logs.go:276] 0 containers: []
	W0416 01:02:02.923715   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:02.923719   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:02.923770   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:02.964155   62139 cri.go:89] found id: ""
	I0416 01:02:02.964180   62139 logs.go:276] 0 containers: []
	W0416 01:02:02.964188   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:02.964193   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:02.964247   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:03.005357   62139 cri.go:89] found id: ""
	I0416 01:02:03.005386   62139 logs.go:276] 0 containers: []
	W0416 01:02:03.005396   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:03.005403   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:03.005464   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:03.047221   62139 cri.go:89] found id: ""
	I0416 01:02:03.047246   62139 logs.go:276] 0 containers: []
	W0416 01:02:03.047257   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:03.047264   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:03.047326   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:03.088737   62139 cri.go:89] found id: ""
	I0416 01:02:03.088767   62139 logs.go:276] 0 containers: []
	W0416 01:02:03.088776   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:03.088784   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:03.088846   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:03.129756   62139 cri.go:89] found id: ""
	I0416 01:02:03.129778   62139 logs.go:276] 0 containers: []
	W0416 01:02:03.129785   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:03.129790   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:03.129837   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:03.169422   62139 cri.go:89] found id: ""
	I0416 01:02:03.169447   62139 logs.go:276] 0 containers: []
	W0416 01:02:03.169459   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:03.169468   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:03.169478   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:03.246485   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:03.246503   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:03.246514   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:03.326498   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:03.326533   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:03.372788   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:03.372817   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:03.428561   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:03.428603   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:05.944274   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:05.957744   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:05.957813   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:05.993348   62139 cri.go:89] found id: ""
	I0416 01:02:05.993400   62139 logs.go:276] 0 containers: []
	W0416 01:02:05.993411   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:05.993430   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:05.993497   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:06.034811   62139 cri.go:89] found id: ""
	I0416 01:02:06.034848   62139 logs.go:276] 0 containers: []
	W0416 01:02:06.034859   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:06.034866   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:06.034953   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:06.079047   62139 cri.go:89] found id: ""
	I0416 01:02:06.079070   62139 logs.go:276] 0 containers: []
	W0416 01:02:06.079078   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:06.079082   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:06.079127   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:06.122494   62139 cri.go:89] found id: ""
	I0416 01:02:06.122513   62139 logs.go:276] 0 containers: []
	W0416 01:02:06.122520   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:06.122525   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:06.122589   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:06.163436   62139 cri.go:89] found id: ""
	I0416 01:02:06.163461   62139 logs.go:276] 0 containers: []
	W0416 01:02:06.163468   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:06.163473   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:06.163534   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:06.205036   62139 cri.go:89] found id: ""
	I0416 01:02:06.205064   62139 logs.go:276] 0 containers: []
	W0416 01:02:06.205072   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:06.205077   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:06.205134   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:06.242056   62139 cri.go:89] found id: ""
	I0416 01:02:06.242084   62139 logs.go:276] 0 containers: []
	W0416 01:02:06.242094   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:06.242107   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:06.242166   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:06.278604   62139 cri.go:89] found id: ""
	I0416 01:02:06.278636   62139 logs.go:276] 0 containers: []
	W0416 01:02:06.278646   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:06.278656   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:06.278671   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:06.334631   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:06.334658   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:06.348199   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:06.348227   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:06.424774   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:06.424793   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:06.424804   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:06.503509   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:06.503542   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:03.221238   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:05.721006   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:04.329267   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:06.329476   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:03.620957   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:06.121348   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:09.046665   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:09.061072   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:09.061173   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:09.097482   62139 cri.go:89] found id: ""
	I0416 01:02:09.097514   62139 logs.go:276] 0 containers: []
	W0416 01:02:09.097524   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:09.097543   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:09.097613   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:09.135124   62139 cri.go:89] found id: ""
	I0416 01:02:09.135157   62139 logs.go:276] 0 containers: []
	W0416 01:02:09.135168   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:09.135175   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:09.135236   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:09.173887   62139 cri.go:89] found id: ""
	I0416 01:02:09.173912   62139 logs.go:276] 0 containers: []
	W0416 01:02:09.173920   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:09.173925   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:09.173983   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:09.209658   62139 cri.go:89] found id: ""
	I0416 01:02:09.209683   62139 logs.go:276] 0 containers: []
	W0416 01:02:09.209691   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:09.209702   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:09.209763   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:09.249149   62139 cri.go:89] found id: ""
	I0416 01:02:09.249200   62139 logs.go:276] 0 containers: []
	W0416 01:02:09.249209   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:09.249214   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:09.249292   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:09.291447   62139 cri.go:89] found id: ""
	I0416 01:02:09.291477   62139 logs.go:276] 0 containers: []
	W0416 01:02:09.291487   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:09.291494   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:09.291553   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:09.329248   62139 cri.go:89] found id: ""
	I0416 01:02:09.329271   62139 logs.go:276] 0 containers: []
	W0416 01:02:09.329281   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:09.329288   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:09.329345   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:09.365585   62139 cri.go:89] found id: ""
	I0416 01:02:09.365613   62139 logs.go:276] 0 containers: []
	W0416 01:02:09.365622   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:09.365632   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:09.365645   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:09.418998   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:09.419031   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:09.433531   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:09.433558   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:09.508543   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:09.508573   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:09.508588   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:09.593889   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:09.593930   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:08.220704   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:10.221232   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:12.224680   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:08.330281   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:10.828856   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:08.619632   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:10.619780   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:12.621319   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:12.139020   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:12.154268   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:12.154349   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:12.192717   62139 cri.go:89] found id: ""
	I0416 01:02:12.192746   62139 logs.go:276] 0 containers: []
	W0416 01:02:12.192758   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:12.192765   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:12.192832   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:12.230633   62139 cri.go:89] found id: ""
	I0416 01:02:12.230662   62139 logs.go:276] 0 containers: []
	W0416 01:02:12.230674   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:12.230681   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:12.230729   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:12.271108   62139 cri.go:89] found id: ""
	I0416 01:02:12.271150   62139 logs.go:276] 0 containers: []
	W0416 01:02:12.271161   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:12.271168   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:12.271233   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:12.310161   62139 cri.go:89] found id: ""
	I0416 01:02:12.310186   62139 logs.go:276] 0 containers: []
	W0416 01:02:12.310194   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:12.310201   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:12.310272   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:12.349638   62139 cri.go:89] found id: ""
	I0416 01:02:12.349668   62139 logs.go:276] 0 containers: []
	W0416 01:02:12.349678   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:12.349686   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:12.349766   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:12.391565   62139 cri.go:89] found id: ""
	I0416 01:02:12.391597   62139 logs.go:276] 0 containers: []
	W0416 01:02:12.391607   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:12.391620   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:12.391681   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:12.429142   62139 cri.go:89] found id: ""
	I0416 01:02:12.429186   62139 logs.go:276] 0 containers: []
	W0416 01:02:12.429195   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:12.429200   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:12.429249   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:12.466209   62139 cri.go:89] found id: ""
	I0416 01:02:12.466238   62139 logs.go:276] 0 containers: []
	W0416 01:02:12.466249   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:12.466260   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:12.466277   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:12.551333   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:12.551355   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:12.551367   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:12.634465   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:12.634496   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:12.675198   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:12.675231   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:12.728933   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:12.728962   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:15.243521   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:15.258589   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:15.258657   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:15.301901   62139 cri.go:89] found id: ""
	I0416 01:02:15.301931   62139 logs.go:276] 0 containers: []
	W0416 01:02:15.301943   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:15.301951   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:15.302006   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:15.345932   62139 cri.go:89] found id: ""
	I0416 01:02:15.346011   62139 logs.go:276] 0 containers: []
	W0416 01:02:15.346032   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:15.346043   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:15.346113   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:15.387957   62139 cri.go:89] found id: ""
	I0416 01:02:15.387983   62139 logs.go:276] 0 containers: []
	W0416 01:02:15.387991   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:15.387996   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:15.388044   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:15.424887   62139 cri.go:89] found id: ""
	I0416 01:02:15.424916   62139 logs.go:276] 0 containers: []
	W0416 01:02:15.424927   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:15.424934   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:15.424996   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:15.460088   62139 cri.go:89] found id: ""
	I0416 01:02:15.460113   62139 logs.go:276] 0 containers: []
	W0416 01:02:15.460120   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:15.460125   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:15.460172   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:15.495567   62139 cri.go:89] found id: ""
	I0416 01:02:15.495597   62139 logs.go:276] 0 containers: []
	W0416 01:02:15.495607   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:15.495615   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:15.495692   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:15.533901   62139 cri.go:89] found id: ""
	I0416 01:02:15.533931   62139 logs.go:276] 0 containers: []
	W0416 01:02:15.533940   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:15.533946   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:15.533996   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:15.576665   62139 cri.go:89] found id: ""
	I0416 01:02:15.576692   62139 logs.go:276] 0 containers: []
	W0416 01:02:15.576702   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:15.576712   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:15.576728   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:15.626933   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:15.626961   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:15.681627   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:15.681656   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:15.695572   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:15.695608   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:15.768910   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:15.768934   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:15.768945   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:14.720472   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:16.722418   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:12.830086   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:14.830540   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:17.329838   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:15.120394   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:17.120523   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:18.349776   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:18.363499   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:18.363568   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:18.404210   62139 cri.go:89] found id: ""
	I0416 01:02:18.404234   62139 logs.go:276] 0 containers: []
	W0416 01:02:18.404241   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:18.404246   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:18.404304   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:18.444610   62139 cri.go:89] found id: ""
	I0416 01:02:18.444641   62139 logs.go:276] 0 containers: []
	W0416 01:02:18.444651   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:18.444658   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:18.444722   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:18.483134   62139 cri.go:89] found id: ""
	I0416 01:02:18.483160   62139 logs.go:276] 0 containers: []
	W0416 01:02:18.483168   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:18.483173   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:18.483220   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:18.522120   62139 cri.go:89] found id: ""
	I0416 01:02:18.522144   62139 logs.go:276] 0 containers: []
	W0416 01:02:18.522156   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:18.522161   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:18.522205   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:18.566293   62139 cri.go:89] found id: ""
	I0416 01:02:18.566319   62139 logs.go:276] 0 containers: []
	W0416 01:02:18.566327   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:18.566332   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:18.566391   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:18.604000   62139 cri.go:89] found id: ""
	I0416 01:02:18.604028   62139 logs.go:276] 0 containers: []
	W0416 01:02:18.604036   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:18.604042   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:18.604089   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:18.641967   62139 cri.go:89] found id: ""
	I0416 01:02:18.641999   62139 logs.go:276] 0 containers: []
	W0416 01:02:18.642009   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:18.642016   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:18.642080   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:18.683494   62139 cri.go:89] found id: ""
	I0416 01:02:18.683533   62139 logs.go:276] 0 containers: []
	W0416 01:02:18.683544   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:18.683555   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:18.683570   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:18.761674   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:18.761699   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:18.761714   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:18.849959   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:18.849995   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:18.895534   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:18.895570   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:18.949287   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:18.949320   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:21.464393   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:21.479019   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:21.479087   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:21.516262   62139 cri.go:89] found id: ""
	I0416 01:02:21.516303   62139 logs.go:276] 0 containers: []
	W0416 01:02:21.516313   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:21.516323   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:21.516385   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:21.554279   62139 cri.go:89] found id: ""
	I0416 01:02:21.554315   62139 logs.go:276] 0 containers: []
	W0416 01:02:21.554327   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:21.554334   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:21.554393   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:21.590889   62139 cri.go:89] found id: ""
	I0416 01:02:21.590918   62139 logs.go:276] 0 containers: []
	W0416 01:02:21.590928   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:21.590935   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:21.590996   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:21.629925   62139 cri.go:89] found id: ""
	I0416 01:02:21.629955   62139 logs.go:276] 0 containers: []
	W0416 01:02:21.629965   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:21.629972   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:21.630032   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:21.667947   62139 cri.go:89] found id: ""
	I0416 01:02:21.667975   62139 logs.go:276] 0 containers: []
	W0416 01:02:21.667983   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:21.667988   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:21.668045   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:21.706275   62139 cri.go:89] found id: ""
	I0416 01:02:21.706308   62139 logs.go:276] 0 containers: []
	W0416 01:02:21.706318   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:21.706326   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:21.706392   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:21.748077   62139 cri.go:89] found id: ""
	I0416 01:02:21.748106   62139 logs.go:276] 0 containers: []
	W0416 01:02:21.748117   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:21.748123   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:21.748170   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:21.785441   62139 cri.go:89] found id: ""
	I0416 01:02:21.785467   62139 logs.go:276] 0 containers: []
	W0416 01:02:21.785477   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:21.785488   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:21.785510   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:21.824702   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:21.824735   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:21.882780   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:21.882810   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:21.897211   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:21.897236   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:21.971882   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:21.971903   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:21.971915   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:19.220913   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:21.721219   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:19.330086   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:21.836759   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:19.620521   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:21.621229   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:24.550749   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:24.564951   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:24.565024   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:24.605025   62139 cri.go:89] found id: ""
	I0416 01:02:24.605055   62139 logs.go:276] 0 containers: []
	W0416 01:02:24.605063   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:24.605068   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:24.605142   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:24.640727   62139 cri.go:89] found id: ""
	I0416 01:02:24.640757   62139 logs.go:276] 0 containers: []
	W0416 01:02:24.640764   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:24.640769   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:24.640822   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:24.678031   62139 cri.go:89] found id: ""
	I0416 01:02:24.678060   62139 logs.go:276] 0 containers: []
	W0416 01:02:24.678068   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:24.678074   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:24.678125   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:24.714854   62139 cri.go:89] found id: ""
	I0416 01:02:24.714896   62139 logs.go:276] 0 containers: []
	W0416 01:02:24.714907   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:24.714914   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:24.714981   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:24.752129   62139 cri.go:89] found id: ""
	I0416 01:02:24.752158   62139 logs.go:276] 0 containers: []
	W0416 01:02:24.752168   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:24.752177   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:24.752243   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:24.788507   62139 cri.go:89] found id: ""
	I0416 01:02:24.788541   62139 logs.go:276] 0 containers: []
	W0416 01:02:24.788551   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:24.788557   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:24.788617   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:24.828379   62139 cri.go:89] found id: ""
	I0416 01:02:24.828409   62139 logs.go:276] 0 containers: []
	W0416 01:02:24.828419   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:24.828427   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:24.828486   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:24.865676   62139 cri.go:89] found id: ""
	I0416 01:02:24.865707   62139 logs.go:276] 0 containers: []
	W0416 01:02:24.865717   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:24.865725   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:24.865736   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:24.941057   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:24.941079   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:24.941091   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:25.025937   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:25.025979   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:25.065828   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:25.065871   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:25.128004   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:25.128039   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:24.221435   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:26.720181   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:24.329677   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:26.329901   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:24.119781   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:26.120316   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:27.643201   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:27.658601   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:27.658660   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:27.700627   62139 cri.go:89] found id: ""
	I0416 01:02:27.700650   62139 logs.go:276] 0 containers: []
	W0416 01:02:27.700657   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:27.700662   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:27.700718   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:27.734929   62139 cri.go:89] found id: ""
	I0416 01:02:27.734957   62139 logs.go:276] 0 containers: []
	W0416 01:02:27.734966   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:27.734975   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:27.735046   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:27.772412   62139 cri.go:89] found id: ""
	I0416 01:02:27.772440   62139 logs.go:276] 0 containers: []
	W0416 01:02:27.772448   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:27.772454   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:27.772514   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:27.809436   62139 cri.go:89] found id: ""
	I0416 01:02:27.809459   62139 logs.go:276] 0 containers: []
	W0416 01:02:27.809466   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:27.809471   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:27.809518   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:27.845717   62139 cri.go:89] found id: ""
	I0416 01:02:27.845746   62139 logs.go:276] 0 containers: []
	W0416 01:02:27.845756   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:27.845764   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:27.845825   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:27.887224   62139 cri.go:89] found id: ""
	I0416 01:02:27.887250   62139 logs.go:276] 0 containers: []
	W0416 01:02:27.887260   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:27.887267   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:27.887334   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:27.920945   62139 cri.go:89] found id: ""
	I0416 01:02:27.920974   62139 logs.go:276] 0 containers: []
	W0416 01:02:27.920984   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:27.920992   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:27.921066   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:27.960933   62139 cri.go:89] found id: ""
	I0416 01:02:27.960959   62139 logs.go:276] 0 containers: []
	W0416 01:02:27.960966   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:27.960974   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:27.960985   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:28.013003   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:28.013033   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:28.026599   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:28.026626   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:28.117200   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:28.117226   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:28.117240   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:28.198003   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:28.198036   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:30.741379   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:30.757102   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:30.757199   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:30.798038   62139 cri.go:89] found id: ""
	I0416 01:02:30.798068   62139 logs.go:276] 0 containers: []
	W0416 01:02:30.798075   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:30.798080   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:30.798137   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:30.844840   62139 cri.go:89] found id: ""
	I0416 01:02:30.844862   62139 logs.go:276] 0 containers: []
	W0416 01:02:30.844871   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:30.844877   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:30.844944   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:30.883816   62139 cri.go:89] found id: ""
	I0416 01:02:30.883841   62139 logs.go:276] 0 containers: []
	W0416 01:02:30.883849   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:30.883855   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:30.883903   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:30.919353   62139 cri.go:89] found id: ""
	I0416 01:02:30.919380   62139 logs.go:276] 0 containers: []
	W0416 01:02:30.919389   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:30.919396   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:30.919457   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:30.957036   62139 cri.go:89] found id: ""
	I0416 01:02:30.957061   62139 logs.go:276] 0 containers: []
	W0416 01:02:30.957069   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:30.957084   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:30.957143   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:30.993179   62139 cri.go:89] found id: ""
	I0416 01:02:30.993211   62139 logs.go:276] 0 containers: []
	W0416 01:02:30.993220   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:30.993228   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:30.993315   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:31.032634   62139 cri.go:89] found id: ""
	I0416 01:02:31.032661   62139 logs.go:276] 0 containers: []
	W0416 01:02:31.032670   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:31.032684   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:31.032753   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:31.069345   62139 cri.go:89] found id: ""
	I0416 01:02:31.069373   62139 logs.go:276] 0 containers: []
	W0416 01:02:31.069382   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:31.069392   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:31.069408   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:31.123989   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:31.124017   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:31.140998   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:31.141032   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:31.217496   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:31.218063   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:31.218098   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:31.296811   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:31.296858   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:28.720502   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:30.720709   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:28.329978   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:30.829406   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:28.121200   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:30.620659   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:33.842516   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:33.872440   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:33.872518   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:33.909287   62139 cri.go:89] found id: ""
	I0416 01:02:33.909314   62139 logs.go:276] 0 containers: []
	W0416 01:02:33.909324   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:33.909329   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:33.909388   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:33.947531   62139 cri.go:89] found id: ""
	I0416 01:02:33.947566   62139 logs.go:276] 0 containers: []
	W0416 01:02:33.947576   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:33.947584   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:33.947642   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:33.990084   62139 cri.go:89] found id: ""
	I0416 01:02:33.990118   62139 logs.go:276] 0 containers: []
	W0416 01:02:33.990129   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:33.990136   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:33.990200   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:34.024121   62139 cri.go:89] found id: ""
	I0416 01:02:34.024151   62139 logs.go:276] 0 containers: []
	W0416 01:02:34.024159   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:34.024165   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:34.024218   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:34.061075   62139 cri.go:89] found id: ""
	I0416 01:02:34.061104   62139 logs.go:276] 0 containers: []
	W0416 01:02:34.061111   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:34.061116   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:34.061179   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:34.097887   62139 cri.go:89] found id: ""
	I0416 01:02:34.097928   62139 logs.go:276] 0 containers: []
	W0416 01:02:34.097938   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:34.097946   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:34.098007   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:34.135541   62139 cri.go:89] found id: ""
	I0416 01:02:34.135567   62139 logs.go:276] 0 containers: []
	W0416 01:02:34.135577   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:34.135585   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:34.135637   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:34.170884   62139 cri.go:89] found id: ""
	I0416 01:02:34.170910   62139 logs.go:276] 0 containers: []
	W0416 01:02:34.170920   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:34.170931   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:34.170946   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:34.223465   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:34.223494   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:34.238898   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:34.238929   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:34.316916   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:34.316946   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:34.316962   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:34.401564   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:34.401600   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:36.945789   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:36.959707   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:36.959774   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:36.994463   62139 cri.go:89] found id: ""
	I0416 01:02:36.994497   62139 logs.go:276] 0 containers: []
	W0416 01:02:36.994508   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:36.994515   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:36.994579   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:37.028847   62139 cri.go:89] found id: ""
	I0416 01:02:37.028877   62139 logs.go:276] 0 containers: []
	W0416 01:02:37.028887   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:37.028893   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:37.028954   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:37.061841   62139 cri.go:89] found id: ""
	I0416 01:02:37.061872   62139 logs.go:276] 0 containers: []
	W0416 01:02:37.061882   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:37.061889   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:37.061954   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:37.098460   62139 cri.go:89] found id: ""
	I0416 01:02:37.098485   62139 logs.go:276] 0 containers: []
	W0416 01:02:37.098495   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:37.098502   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:37.098569   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:33.220794   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:35.221650   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:37.222563   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:32.829517   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:34.829762   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:36.831773   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:33.121842   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:35.620647   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:37.620795   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:37.133016   62139 cri.go:89] found id: ""
	I0416 01:02:37.133044   62139 logs.go:276] 0 containers: []
	W0416 01:02:37.133053   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:37.133059   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:37.133122   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:37.170252   62139 cri.go:89] found id: ""
	I0416 01:02:37.170276   62139 logs.go:276] 0 containers: []
	W0416 01:02:37.170286   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:37.170293   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:37.170354   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:37.206114   62139 cri.go:89] found id: ""
	I0416 01:02:37.206141   62139 logs.go:276] 0 containers: []
	W0416 01:02:37.206148   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:37.206153   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:37.206208   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:37.241353   62139 cri.go:89] found id: ""
	I0416 01:02:37.241383   62139 logs.go:276] 0 containers: []
	W0416 01:02:37.241395   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:37.241405   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:37.241429   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:37.293452   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:37.293483   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:37.309885   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:37.309926   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:37.385455   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:37.385481   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:37.385496   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:37.463064   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:37.463101   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:40.008717   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:40.022249   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:40.022327   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:40.064444   62139 cri.go:89] found id: ""
	I0416 01:02:40.064479   62139 logs.go:276] 0 containers: []
	W0416 01:02:40.064490   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:40.064497   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:40.064545   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:40.100326   62139 cri.go:89] found id: ""
	I0416 01:02:40.100353   62139 logs.go:276] 0 containers: []
	W0416 01:02:40.100361   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:40.100366   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:40.100413   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:40.138818   62139 cri.go:89] found id: ""
	I0416 01:02:40.138857   62139 logs.go:276] 0 containers: []
	W0416 01:02:40.138869   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:40.138878   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:40.138928   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:40.184203   62139 cri.go:89] found id: ""
	I0416 01:02:40.184234   62139 logs.go:276] 0 containers: []
	W0416 01:02:40.184244   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:40.184252   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:40.184311   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:40.221968   62139 cri.go:89] found id: ""
	I0416 01:02:40.221991   62139 logs.go:276] 0 containers: []
	W0416 01:02:40.221998   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:40.222007   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:40.222088   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:40.265621   62139 cri.go:89] found id: ""
	I0416 01:02:40.265643   62139 logs.go:276] 0 containers: []
	W0416 01:02:40.265650   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:40.265657   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:40.265723   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:40.314121   62139 cri.go:89] found id: ""
	I0416 01:02:40.314152   62139 logs.go:276] 0 containers: []
	W0416 01:02:40.314163   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:40.314170   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:40.314229   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:40.359788   62139 cri.go:89] found id: ""
	I0416 01:02:40.359825   62139 logs.go:276] 0 containers: []
	W0416 01:02:40.359836   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:40.359849   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:40.359863   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:40.431678   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:40.431718   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:40.449847   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:40.449877   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:40.524271   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:40.524297   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:40.524309   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:40.601398   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:40.601433   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:39.720606   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:41.721437   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:39.330974   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:41.830050   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:40.120785   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:42.123996   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:43.145431   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:43.160269   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:43.160338   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:43.196603   62139 cri.go:89] found id: ""
	I0416 01:02:43.196637   62139 logs.go:276] 0 containers: []
	W0416 01:02:43.196648   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:43.196655   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:43.196716   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:43.235863   62139 cri.go:89] found id: ""
	I0416 01:02:43.235893   62139 logs.go:276] 0 containers: []
	W0416 01:02:43.235905   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:43.235911   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:43.235971   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:43.271408   62139 cri.go:89] found id: ""
	I0416 01:02:43.271437   62139 logs.go:276] 0 containers: []
	W0416 01:02:43.271444   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:43.271450   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:43.271512   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:43.310931   62139 cri.go:89] found id: ""
	I0416 01:02:43.310958   62139 logs.go:276] 0 containers: []
	W0416 01:02:43.310965   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:43.310971   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:43.311032   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:43.347472   62139 cri.go:89] found id: ""
	I0416 01:02:43.347502   62139 logs.go:276] 0 containers: []
	W0416 01:02:43.347512   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:43.347520   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:43.347581   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:43.387326   62139 cri.go:89] found id: ""
	I0416 01:02:43.387361   62139 logs.go:276] 0 containers: []
	W0416 01:02:43.387372   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:43.387429   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:43.387506   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:43.425099   62139 cri.go:89] found id: ""
	I0416 01:02:43.425122   62139 logs.go:276] 0 containers: []
	W0416 01:02:43.425130   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:43.425141   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:43.425208   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:43.461364   62139 cri.go:89] found id: ""
	I0416 01:02:43.461397   62139 logs.go:276] 0 containers: []
	W0416 01:02:43.461408   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:43.461419   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:43.461434   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:43.514520   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:43.514556   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:43.528740   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:43.528777   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:43.599010   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:43.599035   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:43.599051   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:43.682913   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:43.682959   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:46.231398   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:46.260247   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:46.260338   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:46.304498   62139 cri.go:89] found id: ""
	I0416 01:02:46.304521   62139 logs.go:276] 0 containers: []
	W0416 01:02:46.304528   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:46.304534   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:46.304600   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:46.364055   62139 cri.go:89] found id: ""
	I0416 01:02:46.364081   62139 logs.go:276] 0 containers: []
	W0416 01:02:46.364090   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:46.364098   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:46.364167   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:46.412395   62139 cri.go:89] found id: ""
	I0416 01:02:46.412437   62139 logs.go:276] 0 containers: []
	W0416 01:02:46.412475   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:46.412510   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:46.412584   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:46.453669   62139 cri.go:89] found id: ""
	I0416 01:02:46.453698   62139 logs.go:276] 0 containers: []
	W0416 01:02:46.453709   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:46.453716   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:46.453766   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:46.490667   62139 cri.go:89] found id: ""
	I0416 01:02:46.490699   62139 logs.go:276] 0 containers: []
	W0416 01:02:46.490709   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:46.490715   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:46.490766   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:46.529405   62139 cri.go:89] found id: ""
	I0416 01:02:46.529443   62139 logs.go:276] 0 containers: []
	W0416 01:02:46.529460   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:46.529467   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:46.529527   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:46.565359   62139 cri.go:89] found id: ""
	I0416 01:02:46.565384   62139 logs.go:276] 0 containers: []
	W0416 01:02:46.565391   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:46.565396   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:46.565451   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:46.609381   62139 cri.go:89] found id: ""
	I0416 01:02:46.609406   62139 logs.go:276] 0 containers: []
	W0416 01:02:46.609413   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:46.609421   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:46.609432   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:46.663080   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:46.663112   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:46.677303   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:46.677338   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:46.750134   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:46.750163   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:46.750175   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:46.829395   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:46.829434   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:43.721477   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:46.220462   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:43.831829   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:46.329333   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:44.619712   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:46.621271   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:49.374356   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:49.390674   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:49.390753   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:49.427968   62139 cri.go:89] found id: ""
	I0416 01:02:49.427993   62139 logs.go:276] 0 containers: []
	W0416 01:02:49.428000   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:49.428005   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:49.428058   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:49.461821   62139 cri.go:89] found id: ""
	I0416 01:02:49.461850   62139 logs.go:276] 0 containers: []
	W0416 01:02:49.461857   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:49.461863   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:49.461918   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:49.496305   62139 cri.go:89] found id: ""
	I0416 01:02:49.496356   62139 logs.go:276] 0 containers: []
	W0416 01:02:49.496364   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:49.496369   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:49.496429   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:49.536096   62139 cri.go:89] found id: ""
	I0416 01:02:49.536122   62139 logs.go:276] 0 containers: []
	W0416 01:02:49.536129   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:49.536134   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:49.536194   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:49.572078   62139 cri.go:89] found id: ""
	I0416 01:02:49.572106   62139 logs.go:276] 0 containers: []
	W0416 01:02:49.572115   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:49.572122   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:49.572181   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:49.607803   62139 cri.go:89] found id: ""
	I0416 01:02:49.607835   62139 logs.go:276] 0 containers: []
	W0416 01:02:49.607847   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:49.607861   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:49.607915   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:49.651245   62139 cri.go:89] found id: ""
	I0416 01:02:49.651272   62139 logs.go:276] 0 containers: []
	W0416 01:02:49.651280   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:49.651285   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:49.651332   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:49.693587   62139 cri.go:89] found id: ""
	I0416 01:02:49.693612   62139 logs.go:276] 0 containers: []
	W0416 01:02:49.693622   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:49.693632   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:49.693646   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:49.750003   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:49.750032   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:49.764447   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:49.764472   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:49.844739   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:49.844764   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:49.844780   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:49.924260   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:49.924294   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:48.220753   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:50.220986   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:48.330946   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:50.829409   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:49.120516   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:51.619516   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:52.467399   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:52.481656   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:52.481729   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:52.518506   62139 cri.go:89] found id: ""
	I0416 01:02:52.518531   62139 logs.go:276] 0 containers: []
	W0416 01:02:52.518537   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:52.518544   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:52.518599   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:52.554799   62139 cri.go:89] found id: ""
	I0416 01:02:52.554820   62139 logs.go:276] 0 containers: []
	W0416 01:02:52.554827   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:52.554832   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:52.554888   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:52.597236   62139 cri.go:89] found id: ""
	I0416 01:02:52.597265   62139 logs.go:276] 0 containers: []
	W0416 01:02:52.597272   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:52.597278   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:52.597335   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:52.635544   62139 cri.go:89] found id: ""
	I0416 01:02:52.635567   62139 logs.go:276] 0 containers: []
	W0416 01:02:52.635578   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:52.635585   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:52.635639   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:52.672715   62139 cri.go:89] found id: ""
	I0416 01:02:52.672739   62139 logs.go:276] 0 containers: []
	W0416 01:02:52.672746   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:52.672751   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:52.672808   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:52.711600   62139 cri.go:89] found id: ""
	I0416 01:02:52.711631   62139 logs.go:276] 0 containers: []
	W0416 01:02:52.711640   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:52.711648   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:52.711718   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:52.750372   62139 cri.go:89] found id: ""
	I0416 01:02:52.750405   62139 logs.go:276] 0 containers: []
	W0416 01:02:52.750416   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:52.750423   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:52.750486   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:52.786651   62139 cri.go:89] found id: ""
	I0416 01:02:52.786678   62139 logs.go:276] 0 containers: []
	W0416 01:02:52.786688   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:52.786698   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:52.786712   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:52.840262   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:52.840296   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:52.854734   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:52.854762   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:52.931182   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:52.931211   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:52.931226   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:53.007023   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:53.007061   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:55.548305   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:55.562483   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:55.562562   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:55.599480   62139 cri.go:89] found id: ""
	I0416 01:02:55.599504   62139 logs.go:276] 0 containers: []
	W0416 01:02:55.599511   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:55.599517   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:55.599573   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:55.636832   62139 cri.go:89] found id: ""
	I0416 01:02:55.636862   62139 logs.go:276] 0 containers: []
	W0416 01:02:55.636873   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:55.636879   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:55.636940   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:55.676211   62139 cri.go:89] found id: ""
	I0416 01:02:55.676240   62139 logs.go:276] 0 containers: []
	W0416 01:02:55.676250   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:55.676256   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:55.676318   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:55.713498   62139 cri.go:89] found id: ""
	I0416 01:02:55.713527   62139 logs.go:276] 0 containers: []
	W0416 01:02:55.713537   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:55.713544   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:55.713604   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:55.754239   62139 cri.go:89] found id: ""
	I0416 01:02:55.754276   62139 logs.go:276] 0 containers: []
	W0416 01:02:55.754284   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:55.754301   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:55.754355   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:55.792073   62139 cri.go:89] found id: ""
	I0416 01:02:55.792106   62139 logs.go:276] 0 containers: []
	W0416 01:02:55.792117   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:55.792125   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:55.792191   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:55.829635   62139 cri.go:89] found id: ""
	I0416 01:02:55.829665   62139 logs.go:276] 0 containers: []
	W0416 01:02:55.829676   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:55.829683   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:55.829742   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:55.876417   62139 cri.go:89] found id: ""
	I0416 01:02:55.876443   62139 logs.go:276] 0 containers: []
	W0416 01:02:55.876450   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:55.876458   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:55.876471   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:55.926670   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:55.926707   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:55.941660   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:55.941696   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:56.018776   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:56.018806   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:56.018820   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:56.097335   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:56.097378   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:52.720703   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:55.221614   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:52.830970   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:55.329886   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:53.620969   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:56.122135   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:58.642188   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:58.655537   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:58.655605   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:58.692091   62139 cri.go:89] found id: ""
	I0416 01:02:58.692116   62139 logs.go:276] 0 containers: []
	W0416 01:02:58.692124   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:58.692129   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:58.692191   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:58.729434   62139 cri.go:89] found id: ""
	I0416 01:02:58.729461   62139 logs.go:276] 0 containers: []
	W0416 01:02:58.729472   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:58.729491   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:58.729568   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:58.765879   62139 cri.go:89] found id: ""
	I0416 01:02:58.765907   62139 logs.go:276] 0 containers: []
	W0416 01:02:58.765916   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:58.765924   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:58.765987   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:58.802285   62139 cri.go:89] found id: ""
	I0416 01:02:58.802323   62139 logs.go:276] 0 containers: []
	W0416 01:02:58.802334   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:58.802342   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:58.802399   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:58.841357   62139 cri.go:89] found id: ""
	I0416 01:02:58.841385   62139 logs.go:276] 0 containers: []
	W0416 01:02:58.841396   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:58.841403   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:58.841464   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:58.876982   62139 cri.go:89] found id: ""
	I0416 01:02:58.877022   62139 logs.go:276] 0 containers: []
	W0416 01:02:58.877032   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:58.877040   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:58.877108   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:58.915563   62139 cri.go:89] found id: ""
	I0416 01:02:58.915596   62139 logs.go:276] 0 containers: []
	W0416 01:02:58.915607   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:58.915614   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:58.915683   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:58.951268   62139 cri.go:89] found id: ""
	I0416 01:02:58.951303   62139 logs.go:276] 0 containers: []
	W0416 01:02:58.951313   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:58.951324   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:58.951341   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:59.004673   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:59.004710   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:59.019393   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:59.019423   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:59.091587   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:59.091612   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:59.091632   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:59.169623   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:59.169655   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:01.710597   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:01.724394   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:01.724463   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:01.761577   62139 cri.go:89] found id: ""
	I0416 01:03:01.761605   62139 logs.go:276] 0 containers: []
	W0416 01:03:01.761616   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:01.761624   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:01.761684   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:01.797467   62139 cri.go:89] found id: ""
	I0416 01:03:01.797498   62139 logs.go:276] 0 containers: []
	W0416 01:03:01.797508   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:01.797515   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:01.797582   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:01.839910   62139 cri.go:89] found id: ""
	I0416 01:03:01.839940   62139 logs.go:276] 0 containers: []
	W0416 01:03:01.839950   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:01.839958   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:01.840019   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:01.879572   62139 cri.go:89] found id: ""
	I0416 01:03:01.879599   62139 logs.go:276] 0 containers: []
	W0416 01:03:01.879611   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:01.879617   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:01.879664   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:01.920190   62139 cri.go:89] found id: ""
	I0416 01:03:01.920222   62139 logs.go:276] 0 containers: []
	W0416 01:03:01.920234   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:01.920242   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:01.920300   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:01.957389   62139 cri.go:89] found id: ""
	I0416 01:03:01.957418   62139 logs.go:276] 0 containers: []
	W0416 01:03:01.957428   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:01.957436   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:01.957507   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:01.998730   62139 cri.go:89] found id: ""
	I0416 01:03:01.998754   62139 logs.go:276] 0 containers: []
	W0416 01:03:01.998762   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:01.998767   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:01.998812   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:02.036062   62139 cri.go:89] found id: ""
	I0416 01:03:02.036094   62139 logs.go:276] 0 containers: []
	W0416 01:03:02.036103   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:02.036112   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:02.036125   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:02.089109   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:02.089149   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:57.720792   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:00.219899   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:02.220048   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:57.832016   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:00.328867   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:02.330238   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:58.620416   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:01.121496   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:02.103312   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:02.103342   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:02.174034   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:02.174056   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:02.174069   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:02.249526   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:02.249555   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:04.795314   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:04.808294   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:04.808367   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:04.848795   62139 cri.go:89] found id: ""
	I0416 01:03:04.848825   62139 logs.go:276] 0 containers: []
	W0416 01:03:04.848849   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:04.848857   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:04.848928   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:04.886442   62139 cri.go:89] found id: ""
	I0416 01:03:04.886477   62139 logs.go:276] 0 containers: []
	W0416 01:03:04.886488   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:04.886502   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:04.886572   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:04.929183   62139 cri.go:89] found id: ""
	I0416 01:03:04.929215   62139 logs.go:276] 0 containers: []
	W0416 01:03:04.929226   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:04.929234   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:04.929297   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:04.965134   62139 cri.go:89] found id: ""
	I0416 01:03:04.965172   62139 logs.go:276] 0 containers: []
	W0416 01:03:04.965184   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:04.965191   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:04.965247   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:05.001346   62139 cri.go:89] found id: ""
	I0416 01:03:05.001373   62139 logs.go:276] 0 containers: []
	W0416 01:03:05.001381   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:05.001387   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:05.001434   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:05.039181   62139 cri.go:89] found id: ""
	I0416 01:03:05.039210   62139 logs.go:276] 0 containers: []
	W0416 01:03:05.039219   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:05.039224   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:05.039289   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:05.073451   62139 cri.go:89] found id: ""
	I0416 01:03:05.073479   62139 logs.go:276] 0 containers: []
	W0416 01:03:05.073487   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:05.073494   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:05.073555   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:05.108466   62139 cri.go:89] found id: ""
	I0416 01:03:05.108495   62139 logs.go:276] 0 containers: []
	W0416 01:03:05.108510   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:05.108520   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:05.108537   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:05.162725   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:05.162765   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:05.178152   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:05.178183   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:05.255122   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:05.255147   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:05.255161   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:05.331274   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:05.331309   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:04.220320   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:06.220475   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:04.331381   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:06.830143   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:03.620275   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:06.121293   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:07.882980   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:07.896311   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:07.896372   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:07.934632   62139 cri.go:89] found id: ""
	I0416 01:03:07.934661   62139 logs.go:276] 0 containers: []
	W0416 01:03:07.934671   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:07.934677   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:07.934745   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:07.971463   62139 cri.go:89] found id: ""
	I0416 01:03:07.971495   62139 logs.go:276] 0 containers: []
	W0416 01:03:07.971511   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:07.971518   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:07.971581   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:08.006808   62139 cri.go:89] found id: ""
	I0416 01:03:08.006839   62139 logs.go:276] 0 containers: []
	W0416 01:03:08.006847   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:08.006852   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:08.006912   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:08.043051   62139 cri.go:89] found id: ""
	I0416 01:03:08.043082   62139 logs.go:276] 0 containers: []
	W0416 01:03:08.043089   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:08.043095   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:08.043155   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:08.078602   62139 cri.go:89] found id: ""
	I0416 01:03:08.078638   62139 logs.go:276] 0 containers: []
	W0416 01:03:08.078647   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:08.078655   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:08.078724   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:08.115264   62139 cri.go:89] found id: ""
	I0416 01:03:08.115293   62139 logs.go:276] 0 containers: []
	W0416 01:03:08.115303   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:08.115311   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:08.115378   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:08.152782   62139 cri.go:89] found id: ""
	I0416 01:03:08.152814   62139 logs.go:276] 0 containers: []
	W0416 01:03:08.152821   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:08.152826   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:08.152875   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:08.193484   62139 cri.go:89] found id: ""
	I0416 01:03:08.193506   62139 logs.go:276] 0 containers: []
	W0416 01:03:08.193513   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:08.193522   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:08.193532   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:08.248796   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:08.248831   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:08.266054   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:08.266083   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:08.343470   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:08.343501   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:08.343515   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:08.430335   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:08.430383   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:10.972540   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:10.986911   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:10.986984   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:11.024905   62139 cri.go:89] found id: ""
	I0416 01:03:11.024939   62139 logs.go:276] 0 containers: []
	W0416 01:03:11.024951   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:11.024958   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:11.025011   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:11.058629   62139 cri.go:89] found id: ""
	I0416 01:03:11.058654   62139 logs.go:276] 0 containers: []
	W0416 01:03:11.058662   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:11.058667   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:11.058721   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:11.093277   62139 cri.go:89] found id: ""
	I0416 01:03:11.093308   62139 logs.go:276] 0 containers: []
	W0416 01:03:11.093317   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:11.093325   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:11.093386   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:11.131883   62139 cri.go:89] found id: ""
	I0416 01:03:11.131912   62139 logs.go:276] 0 containers: []
	W0416 01:03:11.131924   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:11.131934   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:11.132004   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:11.175142   62139 cri.go:89] found id: ""
	I0416 01:03:11.175169   62139 logs.go:276] 0 containers: []
	W0416 01:03:11.175179   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:11.175186   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:11.175236   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:11.209985   62139 cri.go:89] found id: ""
	I0416 01:03:11.210020   62139 logs.go:276] 0 containers: []
	W0416 01:03:11.210031   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:11.210039   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:11.210110   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:11.246086   62139 cri.go:89] found id: ""
	I0416 01:03:11.246119   62139 logs.go:276] 0 containers: []
	W0416 01:03:11.246129   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:11.246137   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:11.246199   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:11.286979   62139 cri.go:89] found id: ""
	I0416 01:03:11.287007   62139 logs.go:276] 0 containers: []
	W0416 01:03:11.287019   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:11.287037   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:11.287051   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:11.364522   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:11.364557   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:11.410343   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:11.410375   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:11.459671   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:11.459703   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:11.476163   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:11.476193   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:11.549544   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:08.220881   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:10.720607   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:09.329882   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:11.330570   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:08.620817   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:11.120789   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:14.050433   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:14.065375   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:14.065431   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:14.105548   62139 cri.go:89] found id: ""
	I0416 01:03:14.105571   62139 logs.go:276] 0 containers: []
	W0416 01:03:14.105579   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:14.105583   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:14.105644   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:14.146891   62139 cri.go:89] found id: ""
	I0416 01:03:14.146915   62139 logs.go:276] 0 containers: []
	W0416 01:03:14.146922   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:14.146927   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:14.146972   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:14.183905   62139 cri.go:89] found id: ""
	I0416 01:03:14.183937   62139 logs.go:276] 0 containers: []
	W0416 01:03:14.183948   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:14.183954   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:14.184002   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:14.219878   62139 cri.go:89] found id: ""
	I0416 01:03:14.219905   62139 logs.go:276] 0 containers: []
	W0416 01:03:14.219915   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:14.219922   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:14.219978   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:14.256284   62139 cri.go:89] found id: ""
	I0416 01:03:14.256310   62139 logs.go:276] 0 containers: []
	W0416 01:03:14.256317   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:14.256323   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:14.256381   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:14.295932   62139 cri.go:89] found id: ""
	I0416 01:03:14.295958   62139 logs.go:276] 0 containers: []
	W0416 01:03:14.295966   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:14.295971   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:14.296025   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:14.333202   62139 cri.go:89] found id: ""
	I0416 01:03:14.333226   62139 logs.go:276] 0 containers: []
	W0416 01:03:14.333235   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:14.333242   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:14.333302   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:14.370034   62139 cri.go:89] found id: ""
	I0416 01:03:14.370059   62139 logs.go:276] 0 containers: []
	W0416 01:03:14.370066   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:14.370074   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:14.370092   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:14.424626   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:14.424669   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:14.441842   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:14.441872   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:14.515899   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:14.515926   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:14.515944   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:14.599956   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:14.599991   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:12.720896   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:15.220260   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:13.829944   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:16.328971   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:13.621084   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:16.120767   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:17.157610   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:17.171737   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:17.171800   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:17.214327   62139 cri.go:89] found id: ""
	I0416 01:03:17.214354   62139 logs.go:276] 0 containers: []
	W0416 01:03:17.214364   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:17.214371   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:17.214433   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:17.255896   62139 cri.go:89] found id: ""
	I0416 01:03:17.255924   62139 logs.go:276] 0 containers: []
	W0416 01:03:17.255939   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:17.255946   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:17.256005   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:17.298470   62139 cri.go:89] found id: ""
	I0416 01:03:17.298498   62139 logs.go:276] 0 containers: []
	W0416 01:03:17.298512   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:17.298520   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:17.298580   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:17.338810   62139 cri.go:89] found id: ""
	I0416 01:03:17.338834   62139 logs.go:276] 0 containers: []
	W0416 01:03:17.338842   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:17.338847   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:17.338899   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:17.375980   62139 cri.go:89] found id: ""
	I0416 01:03:17.376012   62139 logs.go:276] 0 containers: []
	W0416 01:03:17.376019   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:17.376024   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:17.376076   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:17.411374   62139 cri.go:89] found id: ""
	I0416 01:03:17.411400   62139 logs.go:276] 0 containers: []
	W0416 01:03:17.411408   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:17.411413   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:17.411463   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:17.452916   62139 cri.go:89] found id: ""
	I0416 01:03:17.452951   62139 logs.go:276] 0 containers: []
	W0416 01:03:17.452962   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:17.452969   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:17.453037   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:17.492459   62139 cri.go:89] found id: ""
	I0416 01:03:17.492489   62139 logs.go:276] 0 containers: []
	W0416 01:03:17.492500   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:17.492512   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:17.492527   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:17.541780   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:17.541814   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:17.558831   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:17.558867   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:17.635332   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:17.635351   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:17.635362   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:17.715778   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:17.715809   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:20.260621   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:20.274721   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:20.274791   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:20.311965   62139 cri.go:89] found id: ""
	I0416 01:03:20.311991   62139 logs.go:276] 0 containers: []
	W0416 01:03:20.312002   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:20.312009   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:20.312069   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:20.350316   62139 cri.go:89] found id: ""
	I0416 01:03:20.350346   62139 logs.go:276] 0 containers: []
	W0416 01:03:20.350356   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:20.350363   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:20.350414   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:20.404666   62139 cri.go:89] found id: ""
	I0416 01:03:20.404692   62139 logs.go:276] 0 containers: []
	W0416 01:03:20.404700   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:20.404705   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:20.404753   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:20.441223   62139 cri.go:89] found id: ""
	I0416 01:03:20.441254   62139 logs.go:276] 0 containers: []
	W0416 01:03:20.441267   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:20.441275   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:20.441340   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:20.480535   62139 cri.go:89] found id: ""
	I0416 01:03:20.480596   62139 logs.go:276] 0 containers: []
	W0416 01:03:20.480606   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:20.480613   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:20.480680   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:20.517520   62139 cri.go:89] found id: ""
	I0416 01:03:20.517543   62139 logs.go:276] 0 containers: []
	W0416 01:03:20.517550   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:20.517556   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:20.517614   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:20.556067   62139 cri.go:89] found id: ""
	I0416 01:03:20.556097   62139 logs.go:276] 0 containers: []
	W0416 01:03:20.556107   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:20.556114   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:20.556177   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:20.594901   62139 cri.go:89] found id: ""
	I0416 01:03:20.594932   62139 logs.go:276] 0 containers: []
	W0416 01:03:20.594939   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:20.594947   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:20.594958   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:20.673759   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:20.673795   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:20.721407   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:20.721443   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:20.772957   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:20.772989   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:20.787902   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:20.787932   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:20.863445   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:17.721415   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:20.221042   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:18.329421   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:20.329949   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:22.330009   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:18.122678   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:20.621127   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:22.621692   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:23.363637   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:23.377916   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:23.377991   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:23.415642   62139 cri.go:89] found id: ""
	I0416 01:03:23.415671   62139 logs.go:276] 0 containers: []
	W0416 01:03:23.415679   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:23.415685   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:23.415732   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:23.452788   62139 cri.go:89] found id: ""
	I0416 01:03:23.452812   62139 logs.go:276] 0 containers: []
	W0416 01:03:23.452819   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:23.452829   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:23.452878   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:23.488758   62139 cri.go:89] found id: ""
	I0416 01:03:23.488785   62139 logs.go:276] 0 containers: []
	W0416 01:03:23.488794   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:23.488801   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:23.488862   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:23.526542   62139 cri.go:89] found id: ""
	I0416 01:03:23.526574   62139 logs.go:276] 0 containers: []
	W0416 01:03:23.526584   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:23.526592   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:23.526661   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:23.562481   62139 cri.go:89] found id: ""
	I0416 01:03:23.562505   62139 logs.go:276] 0 containers: []
	W0416 01:03:23.562512   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:23.562518   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:23.562579   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:23.599119   62139 cri.go:89] found id: ""
	I0416 01:03:23.599145   62139 logs.go:276] 0 containers: []
	W0416 01:03:23.599155   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:23.599162   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:23.599241   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:23.642445   62139 cri.go:89] found id: ""
	I0416 01:03:23.642474   62139 logs.go:276] 0 containers: []
	W0416 01:03:23.642485   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:23.642492   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:23.642557   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:23.678091   62139 cri.go:89] found id: ""
	I0416 01:03:23.678113   62139 logs.go:276] 0 containers: []
	W0416 01:03:23.678121   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:23.678129   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:23.678140   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:23.731668   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:23.731703   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:23.746413   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:23.746444   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:23.821885   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:23.821908   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:23.821923   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:23.901836   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:23.901872   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:26.444935   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:26.459240   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:26.459308   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:26.499208   62139 cri.go:89] found id: ""
	I0416 01:03:26.499237   62139 logs.go:276] 0 containers: []
	W0416 01:03:26.499249   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:26.499256   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:26.499318   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:26.536220   62139 cri.go:89] found id: ""
	I0416 01:03:26.536258   62139 logs.go:276] 0 containers: []
	W0416 01:03:26.536270   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:26.536277   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:26.536342   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:26.576217   62139 cri.go:89] found id: ""
	I0416 01:03:26.576241   62139 logs.go:276] 0 containers: []
	W0416 01:03:26.576249   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:26.576254   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:26.576314   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:26.612343   62139 cri.go:89] found id: ""
	I0416 01:03:26.612369   62139 logs.go:276] 0 containers: []
	W0416 01:03:26.612378   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:26.612385   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:26.612448   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:26.651323   62139 cri.go:89] found id: ""
	I0416 01:03:26.651353   62139 logs.go:276] 0 containers: []
	W0416 01:03:26.651365   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:26.651384   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:26.651453   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:26.688844   62139 cri.go:89] found id: ""
	I0416 01:03:26.688874   62139 logs.go:276] 0 containers: []
	W0416 01:03:26.688885   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:26.688891   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:26.688969   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:26.724362   62139 cri.go:89] found id: ""
	I0416 01:03:26.724387   62139 logs.go:276] 0 containers: []
	W0416 01:03:26.724395   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:26.724401   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:26.724455   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:26.767766   62139 cri.go:89] found id: ""
	I0416 01:03:26.767795   62139 logs.go:276] 0 containers: []
	W0416 01:03:26.767806   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:26.767816   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:26.767837   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:26.788269   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:26.788297   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:26.884802   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:26.884822   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:26.884834   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:26.964007   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:26.964044   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:27.003719   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:27.003745   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:22.720420   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:24.720865   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:26.721369   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:24.828766   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:26.830222   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:25.119674   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:27.620689   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:29.563218   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:29.579014   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:29.579078   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:29.620739   62139 cri.go:89] found id: ""
	I0416 01:03:29.620769   62139 logs.go:276] 0 containers: []
	W0416 01:03:29.620780   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:29.620787   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:29.620850   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:29.658165   62139 cri.go:89] found id: ""
	I0416 01:03:29.658192   62139 logs.go:276] 0 containers: []
	W0416 01:03:29.658199   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:29.658205   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:29.658252   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:29.693893   62139 cri.go:89] found id: ""
	I0416 01:03:29.693921   62139 logs.go:276] 0 containers: []
	W0416 01:03:29.693929   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:29.693935   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:29.693985   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:29.737808   62139 cri.go:89] found id: ""
	I0416 01:03:29.737836   62139 logs.go:276] 0 containers: []
	W0416 01:03:29.737846   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:29.737851   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:29.737910   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:29.777382   62139 cri.go:89] found id: ""
	I0416 01:03:29.777408   62139 logs.go:276] 0 containers: []
	W0416 01:03:29.777416   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:29.777422   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:29.777473   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:29.815633   62139 cri.go:89] found id: ""
	I0416 01:03:29.815659   62139 logs.go:276] 0 containers: []
	W0416 01:03:29.815668   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:29.815682   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:29.815743   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:29.858790   62139 cri.go:89] found id: ""
	I0416 01:03:29.858820   62139 logs.go:276] 0 containers: []
	W0416 01:03:29.858831   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:29.858839   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:29.858899   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:29.897085   62139 cri.go:89] found id: ""
	I0416 01:03:29.897120   62139 logs.go:276] 0 containers: []
	W0416 01:03:29.897131   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:29.897142   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:29.897169   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:29.951231   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:29.951266   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:29.965539   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:29.965565   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:30.045138   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:30.045170   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:30.045186   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:30.120575   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:30.120606   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:29.220073   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:31.221145   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:29.328625   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:31.329903   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:29.621401   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:32.120604   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:32.662210   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:32.675833   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:32.675903   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:32.712104   62139 cri.go:89] found id: ""
	I0416 01:03:32.712129   62139 logs.go:276] 0 containers: []
	W0416 01:03:32.712136   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:32.712141   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:32.712198   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:32.749617   62139 cri.go:89] found id: ""
	I0416 01:03:32.749644   62139 logs.go:276] 0 containers: []
	W0416 01:03:32.749652   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:32.749658   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:32.749723   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:32.785069   62139 cri.go:89] found id: ""
	I0416 01:03:32.785100   62139 logs.go:276] 0 containers: []
	W0416 01:03:32.785110   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:32.785116   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:32.785191   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:32.825871   62139 cri.go:89] found id: ""
	I0416 01:03:32.825912   62139 logs.go:276] 0 containers: []
	W0416 01:03:32.825922   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:32.825928   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:32.826008   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:32.868294   62139 cri.go:89] found id: ""
	I0416 01:03:32.868321   62139 logs.go:276] 0 containers: []
	W0416 01:03:32.868328   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:32.868334   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:32.868401   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:32.907764   62139 cri.go:89] found id: ""
	I0416 01:03:32.907789   62139 logs.go:276] 0 containers: []
	W0416 01:03:32.907796   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:32.907802   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:32.907870   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:32.946112   62139 cri.go:89] found id: ""
	I0416 01:03:32.946137   62139 logs.go:276] 0 containers: []
	W0416 01:03:32.946144   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:32.946155   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:32.946215   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:32.985343   62139 cri.go:89] found id: ""
	I0416 01:03:32.985374   62139 logs.go:276] 0 containers: []
	W0416 01:03:32.985385   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:32.985395   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:32.985415   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:33.063117   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:33.063154   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:33.113739   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:33.113773   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:33.163466   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:33.163508   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:33.178368   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:33.178397   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:33.259509   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:35.760004   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:35.774161   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:35.774237   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:35.812551   62139 cri.go:89] found id: ""
	I0416 01:03:35.812580   62139 logs.go:276] 0 containers: []
	W0416 01:03:35.812589   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:35.812594   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:35.812642   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:35.853134   62139 cri.go:89] found id: ""
	I0416 01:03:35.853177   62139 logs.go:276] 0 containers: []
	W0416 01:03:35.853187   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:35.853195   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:35.853255   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:35.894210   62139 cri.go:89] found id: ""
	I0416 01:03:35.894246   62139 logs.go:276] 0 containers: []
	W0416 01:03:35.894254   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:35.894259   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:35.894330   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:35.928986   62139 cri.go:89] found id: ""
	I0416 01:03:35.929010   62139 logs.go:276] 0 containers: []
	W0416 01:03:35.929019   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:35.929027   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:35.929090   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:35.970688   62139 cri.go:89] found id: ""
	I0416 01:03:35.970712   62139 logs.go:276] 0 containers: []
	W0416 01:03:35.970719   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:35.970725   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:35.970783   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:36.005744   62139 cri.go:89] found id: ""
	I0416 01:03:36.005771   62139 logs.go:276] 0 containers: []
	W0416 01:03:36.005778   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:36.005783   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:36.005829   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:36.044932   62139 cri.go:89] found id: ""
	I0416 01:03:36.044966   62139 logs.go:276] 0 containers: []
	W0416 01:03:36.044977   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:36.044984   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:36.045051   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:36.080488   62139 cri.go:89] found id: ""
	I0416 01:03:36.080516   62139 logs.go:276] 0 containers: []
	W0416 01:03:36.080527   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:36.080538   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:36.080552   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:36.132956   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:36.133000   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:36.147070   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:36.147097   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:36.226640   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:36.226670   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:36.226684   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:36.307205   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:36.307249   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:33.221952   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:35.720745   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:33.828768   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:35.830452   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:34.120695   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:36.619511   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:38.849685   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:38.863817   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:38.863897   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:38.902418   62139 cri.go:89] found id: ""
	I0416 01:03:38.902445   62139 logs.go:276] 0 containers: []
	W0416 01:03:38.902455   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:38.902462   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:38.902533   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:38.937811   62139 cri.go:89] found id: ""
	I0416 01:03:38.937838   62139 logs.go:276] 0 containers: []
	W0416 01:03:38.937845   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:38.937850   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:38.937900   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:38.972380   62139 cri.go:89] found id: ""
	I0416 01:03:38.972403   62139 logs.go:276] 0 containers: []
	W0416 01:03:38.972411   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:38.972416   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:38.972466   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:39.007572   62139 cri.go:89] found id: ""
	I0416 01:03:39.007595   62139 logs.go:276] 0 containers: []
	W0416 01:03:39.007603   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:39.007608   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:39.007651   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:39.049355   62139 cri.go:89] found id: ""
	I0416 01:03:39.049382   62139 logs.go:276] 0 containers: []
	W0416 01:03:39.049391   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:39.049398   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:39.049459   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:39.084535   62139 cri.go:89] found id: ""
	I0416 01:03:39.084565   62139 logs.go:276] 0 containers: []
	W0416 01:03:39.084574   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:39.084581   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:39.084645   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:39.125027   62139 cri.go:89] found id: ""
	I0416 01:03:39.125055   62139 logs.go:276] 0 containers: []
	W0416 01:03:39.125073   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:39.125080   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:39.125136   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:39.164506   62139 cri.go:89] found id: ""
	I0416 01:03:39.164537   62139 logs.go:276] 0 containers: []
	W0416 01:03:39.164547   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:39.164557   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:39.164573   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:39.203447   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:39.203483   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:39.259087   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:39.259122   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:39.273611   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:39.273637   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:39.352372   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:39.352392   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:39.352407   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:41.938575   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:41.952937   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:41.953019   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:41.990771   62139 cri.go:89] found id: ""
	I0416 01:03:41.990802   62139 logs.go:276] 0 containers: []
	W0416 01:03:41.990811   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:41.990819   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:41.990881   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:42.027338   62139 cri.go:89] found id: ""
	I0416 01:03:42.027367   62139 logs.go:276] 0 containers: []
	W0416 01:03:42.027374   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:42.027379   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:42.027431   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:42.068348   62139 cri.go:89] found id: ""
	I0416 01:03:42.068377   62139 logs.go:276] 0 containers: []
	W0416 01:03:42.068387   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:42.068394   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:42.068457   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:38.220198   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:40.220481   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:42.221383   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:38.330729   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:40.831615   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:38.620021   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:40.620641   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:42.620702   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:42.108157   62139 cri.go:89] found id: ""
	I0416 01:03:42.108181   62139 logs.go:276] 0 containers: []
	W0416 01:03:42.108187   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:42.108193   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:42.108244   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:42.149749   62139 cri.go:89] found id: ""
	I0416 01:03:42.149770   62139 logs.go:276] 0 containers: []
	W0416 01:03:42.149777   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:42.149784   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:42.149848   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:42.185322   62139 cri.go:89] found id: ""
	I0416 01:03:42.185349   62139 logs.go:276] 0 containers: []
	W0416 01:03:42.185360   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:42.185368   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:42.185435   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:42.224334   62139 cri.go:89] found id: ""
	I0416 01:03:42.224359   62139 logs.go:276] 0 containers: []
	W0416 01:03:42.224370   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:42.224376   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:42.224435   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:42.263466   62139 cri.go:89] found id: ""
	I0416 01:03:42.263494   62139 logs.go:276] 0 containers: []
	W0416 01:03:42.263502   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:42.263509   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:42.263522   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:42.315106   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:42.315139   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:42.329394   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:42.329425   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:42.405267   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:42.405305   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:42.405321   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:42.486126   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:42.486168   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:45.027718   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:45.042387   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:45.042453   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:45.080790   62139 cri.go:89] found id: ""
	I0416 01:03:45.080814   62139 logs.go:276] 0 containers: []
	W0416 01:03:45.080823   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:45.080829   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:45.080875   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:45.121278   62139 cri.go:89] found id: ""
	I0416 01:03:45.121306   62139 logs.go:276] 0 containers: []
	W0416 01:03:45.121317   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:45.121324   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:45.121383   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:45.158076   62139 cri.go:89] found id: ""
	I0416 01:03:45.158099   62139 logs.go:276] 0 containers: []
	W0416 01:03:45.158107   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:45.158116   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:45.158162   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:45.195577   62139 cri.go:89] found id: ""
	I0416 01:03:45.195608   62139 logs.go:276] 0 containers: []
	W0416 01:03:45.195619   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:45.195627   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:45.195685   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:45.239230   62139 cri.go:89] found id: ""
	I0416 01:03:45.239257   62139 logs.go:276] 0 containers: []
	W0416 01:03:45.239267   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:45.239275   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:45.239326   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:45.279193   62139 cri.go:89] found id: ""
	I0416 01:03:45.279220   62139 logs.go:276] 0 containers: []
	W0416 01:03:45.279227   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:45.279232   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:45.279280   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:45.314876   62139 cri.go:89] found id: ""
	I0416 01:03:45.314908   62139 logs.go:276] 0 containers: []
	W0416 01:03:45.314916   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:45.314922   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:45.314970   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:45.351699   62139 cri.go:89] found id: ""
	I0416 01:03:45.351723   62139 logs.go:276] 0 containers: []
	W0416 01:03:45.351730   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:45.351738   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:45.351750   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:45.392681   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:45.392708   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:45.446564   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:45.446605   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:45.460541   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:45.460564   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:45.535287   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:45.535319   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:45.535334   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:44.720088   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:46.721511   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:43.329413   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:45.330644   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:45.123357   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:47.621806   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:48.117476   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:48.133341   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:48.133402   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:48.171230   62139 cri.go:89] found id: ""
	I0416 01:03:48.171263   62139 logs.go:276] 0 containers: []
	W0416 01:03:48.171273   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:48.171280   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:48.171337   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:48.206188   62139 cri.go:89] found id: ""
	I0416 01:03:48.206218   62139 logs.go:276] 0 containers: []
	W0416 01:03:48.206229   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:48.206236   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:48.206294   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:48.242349   62139 cri.go:89] found id: ""
	I0416 01:03:48.242377   62139 logs.go:276] 0 containers: []
	W0416 01:03:48.242384   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:48.242389   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:48.242437   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:48.278324   62139 cri.go:89] found id: ""
	I0416 01:03:48.278347   62139 logs.go:276] 0 containers: []
	W0416 01:03:48.278355   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:48.278360   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:48.278406   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:48.315727   62139 cri.go:89] found id: ""
	I0416 01:03:48.315753   62139 logs.go:276] 0 containers: []
	W0416 01:03:48.315763   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:48.315770   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:48.315828   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:48.354146   62139 cri.go:89] found id: ""
	I0416 01:03:48.354169   62139 logs.go:276] 0 containers: []
	W0416 01:03:48.354176   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:48.354182   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:48.354242   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:48.393951   62139 cri.go:89] found id: ""
	I0416 01:03:48.393989   62139 logs.go:276] 0 containers: []
	W0416 01:03:48.394000   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:48.394007   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:48.394081   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:48.431849   62139 cri.go:89] found id: ""
	I0416 01:03:48.431887   62139 logs.go:276] 0 containers: []
	W0416 01:03:48.431895   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:48.431903   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:48.431917   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:48.446210   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:48.446242   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:48.517459   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:48.517485   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:48.517500   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:48.596320   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:48.596356   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:48.639700   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:48.639733   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:51.197396   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:51.211803   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:51.211889   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:51.250768   62139 cri.go:89] found id: ""
	I0416 01:03:51.250793   62139 logs.go:276] 0 containers: []
	W0416 01:03:51.250802   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:51.250810   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:51.250872   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:51.291389   62139 cri.go:89] found id: ""
	I0416 01:03:51.291415   62139 logs.go:276] 0 containers: []
	W0416 01:03:51.291421   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:51.291429   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:51.291478   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:51.332466   62139 cri.go:89] found id: ""
	I0416 01:03:51.332490   62139 logs.go:276] 0 containers: []
	W0416 01:03:51.332499   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:51.332504   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:51.332549   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:51.367731   62139 cri.go:89] found id: ""
	I0416 01:03:51.367759   62139 logs.go:276] 0 containers: []
	W0416 01:03:51.367767   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:51.367773   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:51.367829   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:51.400567   62139 cri.go:89] found id: ""
	I0416 01:03:51.400599   62139 logs.go:276] 0 containers: []
	W0416 01:03:51.400609   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:51.400616   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:51.400679   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:51.433561   62139 cri.go:89] found id: ""
	I0416 01:03:51.433590   62139 logs.go:276] 0 containers: []
	W0416 01:03:51.433598   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:51.433608   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:51.433666   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:51.469136   62139 cri.go:89] found id: ""
	I0416 01:03:51.469179   62139 logs.go:276] 0 containers: []
	W0416 01:03:51.469189   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:51.469196   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:51.469255   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:51.504410   62139 cri.go:89] found id: ""
	I0416 01:03:51.504442   62139 logs.go:276] 0 containers: []
	W0416 01:03:51.504452   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:51.504462   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:51.504480   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:51.557420   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:51.557449   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:51.571481   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:51.571506   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:51.648722   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:51.648744   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:51.648755   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:51.728945   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:51.728978   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:49.221614   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:51.721798   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:47.829985   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:50.329419   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:52.329909   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:49.622776   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:52.120080   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:54.272503   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:54.286573   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:54.286646   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:54.321084   62139 cri.go:89] found id: ""
	I0416 01:03:54.321115   62139 logs.go:276] 0 containers: []
	W0416 01:03:54.321125   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:54.321133   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:54.321208   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:54.366333   62139 cri.go:89] found id: ""
	I0416 01:03:54.366364   62139 logs.go:276] 0 containers: []
	W0416 01:03:54.366374   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:54.366380   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:54.366437   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:54.406267   62139 cri.go:89] found id: ""
	I0416 01:03:54.406317   62139 logs.go:276] 0 containers: []
	W0416 01:03:54.406328   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:54.406336   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:54.406405   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:54.446853   62139 cri.go:89] found id: ""
	I0416 01:03:54.446883   62139 logs.go:276] 0 containers: []
	W0416 01:03:54.446894   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:54.446901   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:54.446956   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:54.487658   62139 cri.go:89] found id: ""
	I0416 01:03:54.487683   62139 logs.go:276] 0 containers: []
	W0416 01:03:54.487690   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:54.487696   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:54.487753   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:54.530189   62139 cri.go:89] found id: ""
	I0416 01:03:54.530216   62139 logs.go:276] 0 containers: []
	W0416 01:03:54.530226   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:54.530232   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:54.530289   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:54.571317   62139 cri.go:89] found id: ""
	I0416 01:03:54.571341   62139 logs.go:276] 0 containers: []
	W0416 01:03:54.571349   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:54.571354   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:54.571416   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:54.612432   62139 cri.go:89] found id: ""
	I0416 01:03:54.612458   62139 logs.go:276] 0 containers: []
	W0416 01:03:54.612467   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:54.612478   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:54.612493   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:54.666599   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:54.666629   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:54.680880   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:54.680915   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:54.757365   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:54.757386   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:54.757398   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:54.834436   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:54.834468   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:54.219690   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:56.220753   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:54.332950   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:56.830167   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:54.621002   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:56.622452   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:57.405516   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:57.420694   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:57.420773   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:57.460338   62139 cri.go:89] found id: ""
	I0416 01:03:57.460367   62139 logs.go:276] 0 containers: []
	W0416 01:03:57.460374   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:57.460381   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:57.460442   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:57.498121   62139 cri.go:89] found id: ""
	I0416 01:03:57.498150   62139 logs.go:276] 0 containers: []
	W0416 01:03:57.498160   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:57.498167   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:57.498228   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:57.536959   62139 cri.go:89] found id: ""
	I0416 01:03:57.536989   62139 logs.go:276] 0 containers: []
	W0416 01:03:57.537005   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:57.537014   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:57.537077   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:57.575633   62139 cri.go:89] found id: ""
	I0416 01:03:57.575662   62139 logs.go:276] 0 containers: []
	W0416 01:03:57.575673   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:57.575680   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:57.575743   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:57.614459   62139 cri.go:89] found id: ""
	I0416 01:03:57.614491   62139 logs.go:276] 0 containers: []
	W0416 01:03:57.614501   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:57.614509   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:57.614568   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:57.657078   62139 cri.go:89] found id: ""
	I0416 01:03:57.657109   62139 logs.go:276] 0 containers: []
	W0416 01:03:57.657120   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:57.657127   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:57.657204   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:57.693882   62139 cri.go:89] found id: ""
	I0416 01:03:57.693904   62139 logs.go:276] 0 containers: []
	W0416 01:03:57.693911   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:57.693922   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:57.693969   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:57.731283   62139 cri.go:89] found id: ""
	I0416 01:03:57.731312   62139 logs.go:276] 0 containers: []
	W0416 01:03:57.731320   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:57.731327   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:57.731338   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:57.782618   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:57.782656   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:57.796763   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:57.796794   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:57.869629   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:57.869652   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:57.869665   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:57.948859   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:57.948892   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:04:00.487682   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:04:00.501095   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:04:00.501182   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:04:00.537902   62139 cri.go:89] found id: ""
	I0416 01:04:00.537931   62139 logs.go:276] 0 containers: []
	W0416 01:04:00.537939   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:04:00.537945   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:04:00.537994   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:04:00.574164   62139 cri.go:89] found id: ""
	I0416 01:04:00.574203   62139 logs.go:276] 0 containers: []
	W0416 01:04:00.574214   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:04:00.574222   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:04:00.574287   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:04:00.629592   62139 cri.go:89] found id: ""
	I0416 01:04:00.629615   62139 logs.go:276] 0 containers: []
	W0416 01:04:00.629622   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:04:00.629627   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:04:00.629679   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:04:00.672102   62139 cri.go:89] found id: ""
	I0416 01:04:00.672127   62139 logs.go:276] 0 containers: []
	W0416 01:04:00.672134   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:04:00.672141   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:04:00.672201   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:04:00.715040   62139 cri.go:89] found id: ""
	I0416 01:04:00.715064   62139 logs.go:276] 0 containers: []
	W0416 01:04:00.715072   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:04:00.715078   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:04:00.715139   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:04:00.751113   62139 cri.go:89] found id: ""
	I0416 01:04:00.751137   62139 logs.go:276] 0 containers: []
	W0416 01:04:00.751146   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:04:00.751152   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:04:00.751204   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:04:00.787613   62139 cri.go:89] found id: ""
	I0416 01:04:00.787644   62139 logs.go:276] 0 containers: []
	W0416 01:04:00.787653   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:04:00.787660   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:04:00.787721   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:04:00.824244   62139 cri.go:89] found id: ""
	I0416 01:04:00.824271   62139 logs.go:276] 0 containers: []
	W0416 01:04:00.824280   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:04:00.824291   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:04:00.824304   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:04:00.899977   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:04:00.900014   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:04:00.900029   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:04:00.982317   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:04:00.982350   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:04:01.026354   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:04:01.026393   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:04:01.080393   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:04:01.080441   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:58.720894   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:00.720961   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:59.329460   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:01.330171   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:59.119259   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:01.619026   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:03.595966   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:04:03.609190   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:04:03.609253   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:04:03.647151   62139 cri.go:89] found id: ""
	I0416 01:04:03.647183   62139 logs.go:276] 0 containers: []
	W0416 01:04:03.647197   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:04:03.647203   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:04:03.647250   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:04:03.685211   62139 cri.go:89] found id: ""
	I0416 01:04:03.685239   62139 logs.go:276] 0 containers: []
	W0416 01:04:03.685248   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:04:03.685254   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:04:03.685303   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:04:03.720928   62139 cri.go:89] found id: ""
	I0416 01:04:03.720949   62139 logs.go:276] 0 containers: []
	W0416 01:04:03.720956   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:04:03.720961   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:04:03.721035   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:04:03.759179   62139 cri.go:89] found id: ""
	I0416 01:04:03.759210   62139 logs.go:276] 0 containers: []
	W0416 01:04:03.759220   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:04:03.759228   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:04:03.759290   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:04:03.795670   62139 cri.go:89] found id: ""
	I0416 01:04:03.795700   62139 logs.go:276] 0 containers: []
	W0416 01:04:03.795710   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:04:03.795717   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:04:03.795785   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:04:03.832944   62139 cri.go:89] found id: ""
	I0416 01:04:03.832971   62139 logs.go:276] 0 containers: []
	W0416 01:04:03.832980   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:04:03.832988   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:04:03.833053   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:04:03.869211   62139 cri.go:89] found id: ""
	I0416 01:04:03.869238   62139 logs.go:276] 0 containers: []
	W0416 01:04:03.869248   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:04:03.869256   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:04:03.869317   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:04:03.905859   62139 cri.go:89] found id: ""
	I0416 01:04:03.905888   62139 logs.go:276] 0 containers: []
	W0416 01:04:03.905896   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:04:03.905904   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:04:03.905915   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:04:03.957057   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:04:03.957088   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:04:03.972309   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:04:03.972344   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:04:04.049927   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:04:04.049950   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:04:04.049965   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:04:04.136395   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:04:04.136435   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:04:06.676667   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:04:06.690062   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:04:06.690125   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:04:06.733734   62139 cri.go:89] found id: ""
	I0416 01:04:06.733758   62139 logs.go:276] 0 containers: []
	W0416 01:04:06.733773   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:04:06.733782   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:04:06.733835   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:04:06.773112   62139 cri.go:89] found id: ""
	I0416 01:04:06.773140   62139 logs.go:276] 0 containers: []
	W0416 01:04:06.773147   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:04:06.773152   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:04:06.773231   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:04:06.812786   62139 cri.go:89] found id: ""
	I0416 01:04:06.812809   62139 logs.go:276] 0 containers: []
	W0416 01:04:06.812817   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:04:06.812822   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:04:06.812870   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:04:06.853995   62139 cri.go:89] found id: ""
	I0416 01:04:06.854022   62139 logs.go:276] 0 containers: []
	W0416 01:04:06.854029   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:04:06.854034   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:04:06.854088   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:04:06.893809   62139 cri.go:89] found id: ""
	I0416 01:04:06.893841   62139 logs.go:276] 0 containers: []
	W0416 01:04:06.893848   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:04:06.893853   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:04:06.893909   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:04:06.929389   62139 cri.go:89] found id: ""
	I0416 01:04:06.929419   62139 logs.go:276] 0 containers: []
	W0416 01:04:06.929430   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:04:06.929437   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:04:06.929518   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:04:06.968278   62139 cri.go:89] found id: ""
	I0416 01:04:06.968303   62139 logs.go:276] 0 containers: []
	W0416 01:04:06.968311   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:04:06.968316   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:04:06.968364   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:04:07.018932   62139 cri.go:89] found id: ""
	I0416 01:04:07.018965   62139 logs.go:276] 0 containers: []
	W0416 01:04:07.018976   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:04:07.018989   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:04:07.019003   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:04:07.083611   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:04:07.083645   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:04:03.220314   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:05.720941   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:03.830050   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:06.329416   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:03.619482   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:05.620393   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:07.110126   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:04:07.110152   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:04:07.186262   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:04:07.186290   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:04:07.186305   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:04:07.263139   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:04:07.263170   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:04:09.807489   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:04:09.822045   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:04:09.822110   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:04:09.867444   62139 cri.go:89] found id: ""
	I0416 01:04:09.867469   62139 logs.go:276] 0 containers: []
	W0416 01:04:09.867480   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:04:09.867487   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:04:09.867538   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:04:09.904280   62139 cri.go:89] found id: ""
	I0416 01:04:09.904312   62139 logs.go:276] 0 containers: []
	W0416 01:04:09.904323   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:04:09.904330   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:04:09.904389   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:04:09.941066   62139 cri.go:89] found id: ""
	I0416 01:04:09.941091   62139 logs.go:276] 0 containers: []
	W0416 01:04:09.941099   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:04:09.941107   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:04:09.941189   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:04:09.975739   62139 cri.go:89] found id: ""
	I0416 01:04:09.975767   62139 logs.go:276] 0 containers: []
	W0416 01:04:09.975777   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:04:09.975785   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:04:09.975844   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:04:10.011414   62139 cri.go:89] found id: ""
	I0416 01:04:10.011444   62139 logs.go:276] 0 containers: []
	W0416 01:04:10.011454   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:04:10.011461   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:04:10.011528   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:04:10.045670   62139 cri.go:89] found id: ""
	I0416 01:04:10.045695   62139 logs.go:276] 0 containers: []
	W0416 01:04:10.045704   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:04:10.045711   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:04:10.045777   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:04:10.082320   62139 cri.go:89] found id: ""
	I0416 01:04:10.082352   62139 logs.go:276] 0 containers: []
	W0416 01:04:10.082361   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:04:10.082368   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:04:10.082428   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:04:10.120453   62139 cri.go:89] found id: ""
	I0416 01:04:10.120482   62139 logs.go:276] 0 containers: []
	W0416 01:04:10.120492   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:04:10.120501   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:04:10.120515   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:04:10.200213   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:04:10.200251   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:04:10.251709   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:04:10.251742   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:04:10.307348   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:04:10.307382   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:04:10.321293   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:04:10.321319   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:04:10.401361   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:04:08.220488   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:10.221408   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:08.331985   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:10.829244   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:08.119800   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:10.121093   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:12.126420   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:12.901763   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:04:12.916308   62139 kubeadm.go:591] duration metric: took 4m4.703830076s to restartPrimaryControlPlane
	W0416 01:04:12.916384   62139 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0416 01:04:12.916416   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0416 01:04:12.720462   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:14.721516   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:17.220364   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:12.830409   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:15.330184   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:14.620714   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:16.622203   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:17.897436   62139 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.980993606s)
	I0416 01:04:17.897592   62139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 01:04:17.914655   62139 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 01:04:17.927482   62139 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 01:04:17.940210   62139 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 01:04:17.940233   62139 kubeadm.go:156] found existing configuration files:
	
	I0416 01:04:17.940274   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 01:04:17.951037   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 01:04:17.951106   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 01:04:17.962341   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 01:04:17.972436   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 01:04:17.972500   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 01:04:17.983198   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 01:04:17.992856   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 01:04:17.992912   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 01:04:18.003122   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 01:04:18.014064   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 01:04:18.014117   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 01:04:18.024854   62139 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 01:04:18.101381   62139 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0416 01:04:18.101436   62139 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 01:04:18.246529   62139 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 01:04:18.246687   62139 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 01:04:18.246802   62139 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 01:04:18.456847   62139 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 01:04:18.458980   62139 out.go:204]   - Generating certificates and keys ...
	I0416 01:04:18.459096   62139 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 01:04:18.459190   62139 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 01:04:18.459294   62139 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0416 01:04:18.459381   62139 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0416 01:04:18.459473   62139 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0416 01:04:18.459548   62139 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0416 01:04:18.459631   62139 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0416 01:04:18.459721   62139 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0416 01:04:18.459822   62139 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0416 01:04:18.460281   62139 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0416 01:04:18.460387   62139 kubeadm.go:309] [certs] Using the existing "sa" key
	I0416 01:04:18.460475   62139 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 01:04:18.564910   62139 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 01:04:18.806406   62139 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 01:04:18.890124   62139 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 01:04:19.046415   62139 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 01:04:19.063159   62139 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 01:04:19.063301   62139 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 01:04:19.063415   62139 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 01:04:19.229066   62139 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 01:04:19.231110   62139 out.go:204]   - Booting up control plane ...
	I0416 01:04:19.231246   62139 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 01:04:19.248833   62139 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 01:04:19.250340   62139 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 01:04:19.251664   62139 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 01:04:19.254678   62139 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 01:04:19.221976   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:21.720239   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:17.830011   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:18.323271   61500 pod_ready.go:81] duration metric: took 4m0.000449424s for pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace to be "Ready" ...
	E0416 01:04:18.323300   61500 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace to be "Ready" (will not retry!)
	I0416 01:04:18.323318   61500 pod_ready.go:38] duration metric: took 4m9.009725319s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:04:18.323357   61500 kubeadm.go:591] duration metric: took 4m19.656264138s to restartPrimaryControlPlane
	W0416 01:04:18.323420   61500 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0416 01:04:18.323449   61500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0416 01:04:19.122802   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:21.621389   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:24.227649   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:26.720896   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:24.119577   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:26.620166   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:29.219937   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:31.220697   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:28.622399   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:31.119279   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:33.221240   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:35.221536   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:33.124909   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:35.620718   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:37.720528   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:40.220531   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:38.120415   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:40.121126   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:42.620161   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:42.719946   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:44.720203   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:47.219782   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:44.620806   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:47.119479   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:47.613243   62747 pod_ready.go:81] duration metric: took 4m0.000098534s for pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace to be "Ready" ...
	E0416 01:04:47.613279   62747 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace to be "Ready" (will not retry!)
	I0416 01:04:47.613297   62747 pod_ready.go:38] duration metric: took 4m12.544704519s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:04:47.613327   62747 kubeadm.go:591] duration metric: took 4m20.76891948s to restartPrimaryControlPlane
	W0416 01:04:47.613387   62747 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0416 01:04:47.613410   62747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0416 01:04:50.224993   61500 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.901526458s)
	I0416 01:04:50.225057   61500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 01:04:50.241083   61500 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 01:04:50.252468   61500 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 01:04:50.263721   61500 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 01:04:50.263744   61500 kubeadm.go:156] found existing configuration files:
	
	I0416 01:04:50.263786   61500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 01:04:50.274550   61500 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 01:04:50.274620   61500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 01:04:50.285019   61500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 01:04:50.295079   61500 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 01:04:50.295151   61500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 01:04:50.306424   61500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 01:04:50.317221   61500 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 01:04:50.317286   61500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 01:04:50.327783   61500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 01:04:50.338144   61500 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 01:04:50.338213   61500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 01:04:50.349262   61500 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 01:04:50.410467   61500 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0-rc.2
	I0416 01:04:50.410597   61500 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 01:04:50.565288   61500 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 01:04:50.565442   61500 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 01:04:50.565580   61500 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 01:04:50.783173   61500 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 01:04:50.785219   61500 out.go:204]   - Generating certificates and keys ...
	I0416 01:04:50.785339   61500 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 01:04:50.785427   61500 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 01:04:50.785526   61500 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0416 01:04:50.785620   61500 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0416 01:04:50.785745   61500 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0416 01:04:50.785847   61500 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0416 01:04:50.785951   61500 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0416 01:04:50.786037   61500 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0416 01:04:50.786156   61500 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0416 01:04:50.786279   61500 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0416 01:04:50.786341   61500 kubeadm.go:309] [certs] Using the existing "sa" key
	I0416 01:04:50.786425   61500 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 01:04:50.868738   61500 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 01:04:51.024628   61500 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0416 01:04:51.304801   61500 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 01:04:51.485803   61500 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 01:04:51.614330   61500 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 01:04:51.615043   61500 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 01:04:51.617465   61500 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 01:04:49.720594   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:51.721464   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:51.619398   61500 out.go:204]   - Booting up control plane ...
	I0416 01:04:51.619519   61500 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 01:04:51.619637   61500 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 01:04:51.619717   61500 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 01:04:51.640756   61500 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 01:04:51.643264   61500 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 01:04:51.643617   61500 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 01:04:51.796506   61500 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0416 01:04:51.796640   61500 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0416 01:04:54.220965   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:56.222571   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:52.798698   61500 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002359416s
	I0416 01:04:52.798798   61500 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0416 01:04:57.802689   61500 kubeadm.go:309] [api-check] The API server is healthy after 5.003967397s
	I0416 01:04:57.816580   61500 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0416 01:04:57.840465   61500 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0416 01:04:57.879611   61500 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0416 01:04:57.879906   61500 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-572602 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0416 01:04:57.895211   61500 kubeadm.go:309] [bootstrap-token] Using token: w1qt2t.vu77oqcsegb1grvk
	I0416 01:04:57.896829   61500 out.go:204]   - Configuring RBAC rules ...
	I0416 01:04:57.896958   61500 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0416 01:04:57.905289   61500 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0416 01:04:57.916967   61500 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0416 01:04:57.922660   61500 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0416 01:04:57.926143   61500 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0416 01:04:57.935222   61500 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0416 01:04:58.215180   61500 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0416 01:04:58.656120   61500 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0416 01:04:59.209811   61500 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0416 01:04:59.211274   61500 kubeadm.go:309] 
	I0416 01:04:59.211354   61500 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0416 01:04:59.211390   61500 kubeadm.go:309] 
	I0416 01:04:59.211489   61500 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0416 01:04:59.211512   61500 kubeadm.go:309] 
	I0416 01:04:59.211556   61500 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0416 01:04:59.211626   61500 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0416 01:04:59.211695   61500 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0416 01:04:59.211707   61500 kubeadm.go:309] 
	I0416 01:04:59.211779   61500 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0416 01:04:59.211789   61500 kubeadm.go:309] 
	I0416 01:04:59.211853   61500 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0416 01:04:59.211921   61500 kubeadm.go:309] 
	I0416 01:04:59.212030   61500 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0416 01:04:59.212165   61500 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0416 01:04:59.212269   61500 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0416 01:04:59.212280   61500 kubeadm.go:309] 
	I0416 01:04:59.212407   61500 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0416 01:04:59.212516   61500 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0416 01:04:59.212525   61500 kubeadm.go:309] 
	I0416 01:04:59.212656   61500 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token w1qt2t.vu77oqcsegb1grvk \
	I0416 01:04:59.212835   61500 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b25013486716312e69a135916990864bd549ec06035b8322dd39250241b50fde \
	I0416 01:04:59.212880   61500 kubeadm.go:309] 	--control-plane 
	I0416 01:04:59.212894   61500 kubeadm.go:309] 
	I0416 01:04:59.212996   61500 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0416 01:04:59.213007   61500 kubeadm.go:309] 
	I0416 01:04:59.213111   61500 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token w1qt2t.vu77oqcsegb1grvk \
	I0416 01:04:59.213278   61500 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b25013486716312e69a135916990864bd549ec06035b8322dd39250241b50fde 
	I0416 01:04:59.213435   61500 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 01:04:59.213460   61500 cni.go:84] Creating CNI manager for ""
	I0416 01:04:59.213477   61500 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 01:04:59.215397   61500 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0416 01:04:59.255478   62139 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0416 01:04:59.256524   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:04:59.256807   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:04:58.720339   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:05:01.220968   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:59.216764   61500 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0416 01:04:59.230134   61500 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0416 01:04:59.250739   61500 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0416 01:04:59.250773   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:04:59.250775   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-572602 minikube.k8s.io/updated_at=2024_04_16T01_04_59_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=e98a202b00bb48daab8bbee2f0c7bcf1556f8388 minikube.k8s.io/name=no-preload-572602 minikube.k8s.io/primary=true
	I0416 01:04:59.462907   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:04:59.462915   61500 ops.go:34] apiserver oom_adj: -16
	I0416 01:04:59.962977   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:00.463142   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:00.963871   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:01.463866   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:01.963356   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:02.463729   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:04.257472   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:05:04.257756   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:05:03.720762   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:05:05.721421   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:05:02.963816   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:03.463370   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:03.963655   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:04.463681   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:04.963387   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:05.462926   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:05.963659   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:06.463091   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:06.963504   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:07.463783   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:07.963037   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:08.463212   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:08.963443   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:09.463179   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:09.963188   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:10.463264   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:10.963863   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:11.463051   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:11.591367   61500 kubeadm.go:1107] duration metric: took 12.340665724s to wait for elevateKubeSystemPrivileges
	W0416 01:05:11.591410   61500 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0416 01:05:11.591425   61500 kubeadm.go:393] duration metric: took 5m12.980123227s to StartCluster
	I0416 01:05:11.591451   61500 settings.go:142] acquiring lock: {Name:mk6e42a297b4f7bfb79727f203ae36d752cbb6a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:05:11.591559   61500 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0416 01:05:11.593498   61500 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/kubeconfig: {Name:mkbb3b028de7d57df8335e83f6dfa1b0eacb2fb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:05:11.593838   61500 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.121 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0416 01:05:11.595572   61500 out.go:177] * Verifying Kubernetes components...
	I0416 01:05:11.593961   61500 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0416 01:05:11.594060   61500 config.go:182] Loaded profile config "no-preload-572602": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0416 01:05:11.597038   61500 addons.go:69] Setting default-storageclass=true in profile "no-preload-572602"
	I0416 01:05:11.597047   61500 addons.go:69] Setting metrics-server=true in profile "no-preload-572602"
	I0416 01:05:11.597077   61500 addons.go:234] Setting addon metrics-server=true in "no-preload-572602"
	I0416 01:05:11.597081   61500 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-572602"
	W0416 01:05:11.597084   61500 addons.go:243] addon metrics-server should already be in state true
	I0416 01:05:11.597168   61500 host.go:66] Checking if "no-preload-572602" exists ...
	I0416 01:05:11.597042   61500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 01:05:11.597038   61500 addons.go:69] Setting storage-provisioner=true in profile "no-preload-572602"
	I0416 01:05:11.597274   61500 addons.go:234] Setting addon storage-provisioner=true in "no-preload-572602"
	W0416 01:05:11.597281   61500 addons.go:243] addon storage-provisioner should already be in state true
	I0416 01:05:11.597300   61500 host.go:66] Checking if "no-preload-572602" exists ...
	I0416 01:05:11.597516   61500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:11.597563   61500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:11.597590   61500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:11.597621   61500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:11.597621   61500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:11.597684   61500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:11.617344   61500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46065
	I0416 01:05:11.617833   61500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46345
	I0416 01:05:11.617853   61500 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:11.618040   61500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32847
	I0416 01:05:11.618170   61500 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:11.618385   61500 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:11.618539   61500 main.go:141] libmachine: Using API Version  1
	I0416 01:05:11.618564   61500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:11.618682   61500 main.go:141] libmachine: Using API Version  1
	I0416 01:05:11.618708   61500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:11.618786   61500 main.go:141] libmachine: Using API Version  1
	I0416 01:05:11.618806   61500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:11.619020   61500 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:11.619035   61500 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:11.619145   61500 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:11.619371   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetState
	I0416 01:05:11.619629   61500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:11.619663   61500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:11.619683   61500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:11.619715   61500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:11.622758   61500 addons.go:234] Setting addon default-storageclass=true in "no-preload-572602"
	W0416 01:05:11.622784   61500 addons.go:243] addon default-storageclass should already be in state true
	I0416 01:05:11.622814   61500 host.go:66] Checking if "no-preload-572602" exists ...
	I0416 01:05:11.623148   61500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:11.623182   61500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:11.640851   61500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44015
	I0416 01:05:11.641427   61500 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:11.642008   61500 main.go:141] libmachine: Using API Version  1
	I0416 01:05:11.642028   61500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:11.642429   61500 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:11.642635   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetState
	I0416 01:05:11.643204   61500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41753
	I0416 01:05:11.643239   61500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38953
	I0416 01:05:11.643578   61500 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:11.643673   61500 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:11.644133   61500 main.go:141] libmachine: Using API Version  1
	I0416 01:05:11.644150   61500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:11.644398   61500 main.go:141] libmachine: Using API Version  1
	I0416 01:05:11.644409   61500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:11.644508   61500 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:11.644786   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetState
	I0416 01:05:11.644823   61500 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:11.645630   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 01:05:11.645797   61500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:11.645824   61500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:11.648522   61500 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0416 01:05:11.646649   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 01:05:11.650173   61500 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0416 01:05:11.650185   61500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0416 01:05:11.650206   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 01:05:11.652524   61500 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 01:05:07.721798   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:05:08.214615   61267 pod_ready.go:81] duration metric: took 4m0.001005317s for pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace to be "Ready" ...
	E0416 01:05:08.214650   61267 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace to be "Ready" (will not retry!)
	I0416 01:05:08.214688   61267 pod_ready.go:38] duration metric: took 4m14.521894608s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:05:08.214750   61267 kubeadm.go:591] duration metric: took 4m22.563492336s to restartPrimaryControlPlane
	W0416 01:05:08.214821   61267 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0416 01:05:08.214857   61267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0416 01:05:11.654173   61500 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 01:05:11.654189   61500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0416 01:05:11.654207   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 01:05:11.654021   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 01:05:11.654488   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 01:05:11.654524   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 01:05:11.654823   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 01:05:11.655016   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 01:05:11.655159   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 01:05:11.655331   61500 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/no-preload-572602/id_rsa Username:docker}
	I0416 01:05:11.657706   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 01:05:11.658193   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 01:05:11.658214   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 01:05:11.658388   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 01:05:11.658585   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 01:05:11.658761   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 01:05:11.658937   61500 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/no-preload-572602/id_rsa Username:docker}
	I0416 01:05:11.669485   61500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34717
	I0416 01:05:11.669878   61500 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:11.670340   61500 main.go:141] libmachine: Using API Version  1
	I0416 01:05:11.670352   61500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:11.670714   61500 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:11.670887   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetState
	I0416 01:05:11.672571   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 01:05:11.672888   61500 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0416 01:05:11.672900   61500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0416 01:05:11.672912   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 01:05:11.675816   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 01:05:11.676163   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 01:05:11.676182   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 01:05:11.676335   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 01:05:11.676513   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 01:05:11.676657   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 01:05:11.676799   61500 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/no-preload-572602/id_rsa Username:docker}
	I0416 01:05:11.822229   61500 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 01:05:11.850495   61500 node_ready.go:35] waiting up to 6m0s for node "no-preload-572602" to be "Ready" ...
	I0416 01:05:11.868828   61500 node_ready.go:49] node "no-preload-572602" has status "Ready":"True"
	I0416 01:05:11.868852   61500 node_ready.go:38] duration metric: took 18.327813ms for node "no-preload-572602" to be "Ready" ...
	I0416 01:05:11.868860   61500 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:05:11.877018   61500 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:11.884190   61500 pod_ready.go:92] pod "etcd-no-preload-572602" in "kube-system" namespace has status "Ready":"True"
	I0416 01:05:11.884221   61500 pod_ready.go:81] duration metric: took 7.173699ms for pod "etcd-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:11.884234   61500 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:11.901639   61500 pod_ready.go:92] pod "kube-apiserver-no-preload-572602" in "kube-system" namespace has status "Ready":"True"
	I0416 01:05:11.901672   61500 pod_ready.go:81] duration metric: took 17.430111ms for pod "kube-apiserver-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:11.901684   61500 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:11.911839   61500 pod_ready.go:92] pod "kube-controller-manager-no-preload-572602" in "kube-system" namespace has status "Ready":"True"
	I0416 01:05:11.911871   61500 pod_ready.go:81] duration metric: took 10.178219ms for pod "kube-controller-manager-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:11.911885   61500 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:11.936265   61500 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0416 01:05:11.936293   61500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0416 01:05:11.939406   61500 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0416 01:05:11.942233   61500 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 01:05:11.963094   61500 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0416 01:05:11.963123   61500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0416 01:05:12.027316   61500 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0416 01:05:12.027341   61500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0416 01:05:12.150413   61500 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0416 01:05:12.387284   61500 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:12.387310   61500 main.go:141] libmachine: (no-preload-572602) Calling .Close
	I0416 01:05:12.387640   61500 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:12.387665   61500 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:12.387674   61500 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:12.387682   61500 main.go:141] libmachine: (no-preload-572602) Calling .Close
	I0416 01:05:12.387973   61500 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:12.387991   61500 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:12.395148   61500 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:12.395179   61500 main.go:141] libmachine: (no-preload-572602) Calling .Close
	I0416 01:05:12.395459   61500 main.go:141] libmachine: (no-preload-572602) DBG | Closing plugin on server side
	I0416 01:05:12.395488   61500 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:12.395508   61500 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:12.930331   61500 pod_ready.go:92] pod "kube-scheduler-no-preload-572602" in "kube-system" namespace has status "Ready":"True"
	I0416 01:05:12.930362   61500 pod_ready.go:81] duration metric: took 1.01846846s for pod "kube-scheduler-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:12.930373   61500 pod_ready.go:38] duration metric: took 1.061502471s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:05:12.930390   61500 api_server.go:52] waiting for apiserver process to appear ...
	I0416 01:05:12.930454   61500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:05:12.990840   61500 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.048571147s)
	I0416 01:05:12.990905   61500 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:12.990919   61500 main.go:141] libmachine: (no-preload-572602) Calling .Close
	I0416 01:05:12.991246   61500 main.go:141] libmachine: (no-preload-572602) DBG | Closing plugin on server side
	I0416 01:05:12.991309   61500 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:12.991323   61500 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:12.991380   61500 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:12.991391   61500 main.go:141] libmachine: (no-preload-572602) Calling .Close
	I0416 01:05:12.991617   61500 main.go:141] libmachine: (no-preload-572602) DBG | Closing plugin on server side
	I0416 01:05:12.991669   61500 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:12.991690   61500 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:13.719959   61500 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.569495387s)
	I0416 01:05:13.720018   61500 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:13.720023   61500 api_server.go:72] duration metric: took 2.12614679s to wait for apiserver process to appear ...
	I0416 01:05:13.720046   61500 api_server.go:88] waiting for apiserver healthz status ...
	I0416 01:05:13.720066   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 01:05:13.720034   61500 main.go:141] libmachine: (no-preload-572602) Calling .Close
	I0416 01:05:13.720435   61500 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:13.720458   61500 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:13.720469   61500 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:13.720472   61500 main.go:141] libmachine: (no-preload-572602) DBG | Closing plugin on server side
	I0416 01:05:13.720477   61500 main.go:141] libmachine: (no-preload-572602) Calling .Close
	I0416 01:05:13.720670   61500 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:13.720681   61500 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:13.720691   61500 addons.go:470] Verifying addon metrics-server=true in "no-preload-572602"
	I0416 01:05:13.722348   61500 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0416 01:05:13.723686   61500 addons.go:505] duration metric: took 2.129734353s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0416 01:05:13.764481   61500 api_server.go:279] https://192.168.39.121:8443/healthz returned 200:
	ok
	I0416 01:05:13.771661   61500 api_server.go:141] control plane version: v1.30.0-rc.2
	I0416 01:05:13.771690   61500 api_server.go:131] duration metric: took 51.637739ms to wait for apiserver health ...
	I0416 01:05:13.771698   61500 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 01:05:13.812701   61500 system_pods.go:59] 9 kube-system pods found
	I0416 01:05:13.812744   61500 system_pods.go:61] "coredns-7db6d8ff4d-2b5ht" [b8d48a4c-6efd-409a-98be-3ec5bf639470] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 01:05:13.812753   61500 system_pods.go:61] "coredns-7db6d8ff4d-p62sn" [36768eb2-2a22-48e1-b271-f262aa64e014] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 01:05:13.812761   61500 system_pods.go:61] "etcd-no-preload-572602" [c9ed4f86-07f3-48d6-948c-8c4243920512] Running
	I0416 01:05:13.812765   61500 system_pods.go:61] "kube-apiserver-no-preload-572602" [a92513a3-4129-41a2-a603-4a69f4e72041] Running
	I0416 01:05:13.812768   61500 system_pods.go:61] "kube-controller-manager-no-preload-572602" [ce013e5b-5d3c-42de-8a00-c7041288740b] Running
	I0416 01:05:13.812774   61500 system_pods.go:61] "kube-proxy-6cjlc" [2c4d9303-8c08-4385-a6b9-63dda0d9a274] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0416 01:05:13.812777   61500 system_pods.go:61] "kube-scheduler-no-preload-572602" [a9f71ca2-f211-4e6d-9940-4e0af5d4287e] Running
	I0416 01:05:13.812783   61500 system_pods.go:61] "metrics-server-569cc877fc-5j5rc" [3d8f1a41-8e7d-4d1b-9a07-25c8fac3b782] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 01:05:13.812792   61500 system_pods.go:61] "storage-provisioner" [b9ac9c93-0e50-4598-a9c4-a12e4ff14063] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0416 01:05:13.812802   61500 system_pods.go:74] duration metric: took 41.098881ms to wait for pod list to return data ...
	I0416 01:05:13.812811   61500 default_sa.go:34] waiting for default service account to be created ...
	I0416 01:05:13.847288   61500 default_sa.go:45] found service account: "default"
	I0416 01:05:13.847323   61500 default_sa.go:55] duration metric: took 34.500938ms for default service account to be created ...
	I0416 01:05:13.847335   61500 system_pods.go:116] waiting for k8s-apps to be running ...
	I0416 01:05:13.877107   61500 system_pods.go:86] 9 kube-system pods found
	I0416 01:05:13.877150   61500 system_pods.go:89] "coredns-7db6d8ff4d-2b5ht" [b8d48a4c-6efd-409a-98be-3ec5bf639470] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 01:05:13.877175   61500 system_pods.go:89] "coredns-7db6d8ff4d-p62sn" [36768eb2-2a22-48e1-b271-f262aa64e014] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 01:05:13.877185   61500 system_pods.go:89] "etcd-no-preload-572602" [c9ed4f86-07f3-48d6-948c-8c4243920512] Running
	I0416 01:05:13.877194   61500 system_pods.go:89] "kube-apiserver-no-preload-572602" [a92513a3-4129-41a2-a603-4a69f4e72041] Running
	I0416 01:05:13.877200   61500 system_pods.go:89] "kube-controller-manager-no-preload-572602" [ce013e5b-5d3c-42de-8a00-c7041288740b] Running
	I0416 01:05:13.877209   61500 system_pods.go:89] "kube-proxy-6cjlc" [2c4d9303-8c08-4385-a6b9-63dda0d9a274] Running
	I0416 01:05:13.877215   61500 system_pods.go:89] "kube-scheduler-no-preload-572602" [a9f71ca2-f211-4e6d-9940-4e0af5d4287e] Running
	I0416 01:05:13.877224   61500 system_pods.go:89] "metrics-server-569cc877fc-5j5rc" [3d8f1a41-8e7d-4d1b-9a07-25c8fac3b782] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 01:05:13.877237   61500 system_pods.go:89] "storage-provisioner" [b9ac9c93-0e50-4598-a9c4-a12e4ff14063] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0416 01:05:13.877257   61500 retry.go:31] will retry after 239.706522ms: missing components: kube-dns
	I0416 01:05:14.128770   61500 system_pods.go:86] 9 kube-system pods found
	I0416 01:05:14.128814   61500 system_pods.go:89] "coredns-7db6d8ff4d-2b5ht" [b8d48a4c-6efd-409a-98be-3ec5bf639470] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 01:05:14.128827   61500 system_pods.go:89] "coredns-7db6d8ff4d-p62sn" [36768eb2-2a22-48e1-b271-f262aa64e014] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 01:05:14.128836   61500 system_pods.go:89] "etcd-no-preload-572602" [c9ed4f86-07f3-48d6-948c-8c4243920512] Running
	I0416 01:05:14.128850   61500 system_pods.go:89] "kube-apiserver-no-preload-572602" [a92513a3-4129-41a2-a603-4a69f4e72041] Running
	I0416 01:05:14.128857   61500 system_pods.go:89] "kube-controller-manager-no-preload-572602" [ce013e5b-5d3c-42de-8a00-c7041288740b] Running
	I0416 01:05:14.128864   61500 system_pods.go:89] "kube-proxy-6cjlc" [2c4d9303-8c08-4385-a6b9-63dda0d9a274] Running
	I0416 01:05:14.128871   61500 system_pods.go:89] "kube-scheduler-no-preload-572602" [a9f71ca2-f211-4e6d-9940-4e0af5d4287e] Running
	I0416 01:05:14.128885   61500 system_pods.go:89] "metrics-server-569cc877fc-5j5rc" [3d8f1a41-8e7d-4d1b-9a07-25c8fac3b782] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 01:05:14.128893   61500 system_pods.go:89] "storage-provisioner" [b9ac9c93-0e50-4598-a9c4-a12e4ff14063] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0416 01:05:14.128903   61500 system_pods.go:126] duration metric: took 281.561287ms to wait for k8s-apps to be running ...
	I0416 01:05:14.128912   61500 system_svc.go:44] waiting for kubelet service to be running ....
	I0416 01:05:14.128978   61500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 01:05:14.145557   61500 system_svc.go:56] duration metric: took 16.639555ms WaitForService to wait for kubelet
	I0416 01:05:14.145582   61500 kubeadm.go:576] duration metric: took 2.551711031s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 01:05:14.145605   61500 node_conditions.go:102] verifying NodePressure condition ...
	I0416 01:05:14.149984   61500 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 01:05:14.150009   61500 node_conditions.go:123] node cpu capacity is 2
	I0416 01:05:14.150021   61500 node_conditions.go:105] duration metric: took 4.410684ms to run NodePressure ...
	I0416 01:05:14.150034   61500 start.go:240] waiting for startup goroutines ...
	I0416 01:05:14.150044   61500 start.go:245] waiting for cluster config update ...
	I0416 01:05:14.150064   61500 start.go:254] writing updated cluster config ...
	I0416 01:05:14.150354   61500 ssh_runner.go:195] Run: rm -f paused
	I0416 01:05:14.198605   61500 start.go:600] kubectl: 1.29.3, cluster: 1.30.0-rc.2 (minor skew: 1)
	I0416 01:05:14.200584   61500 out.go:177] * Done! kubectl is now configured to use "no-preload-572602" cluster and "default" namespace by default
	I0416 01:05:14.258629   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:05:14.258807   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:05:19.748784   62747 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.135339447s)
	I0416 01:05:19.748866   62747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 01:05:19.766280   62747 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 01:05:19.777541   62747 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 01:05:19.788086   62747 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 01:05:19.788112   62747 kubeadm.go:156] found existing configuration files:
	
	I0416 01:05:19.788154   62747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 01:05:19.798135   62747 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 01:05:19.798211   62747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 01:05:19.809231   62747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 01:05:19.819447   62747 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 01:05:19.819519   62747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 01:05:19.830223   62747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 01:05:19.840460   62747 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 01:05:19.840528   62747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 01:05:19.851506   62747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 01:05:19.861422   62747 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 01:05:19.861481   62747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 01:05:19.871239   62747 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 01:05:20.089849   62747 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 01:05:29.079351   62747 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0416 01:05:29.079435   62747 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 01:05:29.079534   62747 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 01:05:29.079679   62747 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 01:05:29.079817   62747 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 01:05:29.079934   62747 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 01:05:29.081701   62747 out.go:204]   - Generating certificates and keys ...
	I0416 01:05:29.081801   62747 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 01:05:29.081922   62747 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 01:05:29.082035   62747 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0416 01:05:29.082125   62747 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0416 01:05:29.082300   62747 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0416 01:05:29.082404   62747 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0416 01:05:29.082504   62747 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0416 01:05:29.082556   62747 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0416 01:05:29.082621   62747 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0416 01:05:29.082737   62747 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0416 01:05:29.082798   62747 kubeadm.go:309] [certs] Using the existing "sa" key
	I0416 01:05:29.082867   62747 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 01:05:29.082955   62747 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 01:05:29.083042   62747 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0416 01:05:29.083129   62747 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 01:05:29.083209   62747 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 01:05:29.083278   62747 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 01:05:29.083385   62747 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 01:05:29.083467   62747 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 01:05:29.085050   62747 out.go:204]   - Booting up control plane ...
	I0416 01:05:29.085178   62747 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 01:05:29.085289   62747 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 01:05:29.085374   62747 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 01:05:29.085499   62747 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 01:05:29.085610   62747 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 01:05:29.085671   62747 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 01:05:29.085942   62747 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 01:05:29.086066   62747 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003717 seconds
	I0416 01:05:29.086227   62747 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0416 01:05:29.086384   62747 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0416 01:05:29.086474   62747 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0416 01:05:29.086755   62747 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-617092 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0416 01:05:29.086843   62747 kubeadm.go:309] [bootstrap-token] Using token: 33ihar.pt6l329bwmm6yhnr
	I0416 01:05:29.088273   62747 out.go:204]   - Configuring RBAC rules ...
	I0416 01:05:29.088408   62747 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0416 01:05:29.088516   62747 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0416 01:05:29.088712   62747 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0416 01:05:29.088898   62747 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0416 01:05:29.089046   62747 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0416 01:05:29.089196   62747 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0416 01:05:29.089346   62747 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0416 01:05:29.089413   62747 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0416 01:05:29.089486   62747 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0416 01:05:29.089496   62747 kubeadm.go:309] 
	I0416 01:05:29.089581   62747 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0416 01:05:29.089591   62747 kubeadm.go:309] 
	I0416 01:05:29.089707   62747 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0416 01:05:29.089719   62747 kubeadm.go:309] 
	I0416 01:05:29.089768   62747 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0416 01:05:29.089855   62747 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0416 01:05:29.089932   62747 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0416 01:05:29.089942   62747 kubeadm.go:309] 
	I0416 01:05:29.090020   62747 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0416 01:05:29.090041   62747 kubeadm.go:309] 
	I0416 01:05:29.090111   62747 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0416 01:05:29.090120   62747 kubeadm.go:309] 
	I0416 01:05:29.090193   62747 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0416 01:05:29.090350   62747 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0416 01:05:29.090434   62747 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0416 01:05:29.090445   62747 kubeadm.go:309] 
	I0416 01:05:29.090560   62747 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0416 01:05:29.090661   62747 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0416 01:05:29.090667   62747 kubeadm.go:309] 
	I0416 01:05:29.090773   62747 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 33ihar.pt6l329bwmm6yhnr \
	I0416 01:05:29.090921   62747 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b25013486716312e69a135916990864bd549ec06035b8322dd39250241b50fde \
	I0416 01:05:29.090942   62747 kubeadm.go:309] 	--control-plane 
	I0416 01:05:29.090948   62747 kubeadm.go:309] 
	I0416 01:05:29.091017   62747 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0416 01:05:29.091034   62747 kubeadm.go:309] 
	I0416 01:05:29.091153   62747 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 33ihar.pt6l329bwmm6yhnr \
	I0416 01:05:29.091299   62747 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b25013486716312e69a135916990864bd549ec06035b8322dd39250241b50fde 
	I0416 01:05:29.091313   62747 cni.go:84] Creating CNI manager for ""
	I0416 01:05:29.091323   62747 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 01:05:29.094154   62747 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0416 01:05:29.095747   62747 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0416 01:05:29.153706   62747 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0416 01:05:29.195477   62747 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0416 01:05:29.195540   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:29.195540   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-617092 minikube.k8s.io/updated_at=2024_04_16T01_05_29_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=e98a202b00bb48daab8bbee2f0c7bcf1556f8388 minikube.k8s.io/name=embed-certs-617092 minikube.k8s.io/primary=true
	I0416 01:05:29.551888   62747 ops.go:34] apiserver oom_adj: -16
	I0416 01:05:29.552023   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:30.053117   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:30.552298   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:31.052317   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:31.553057   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:32.052852   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:32.552921   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:34.259492   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:05:34.259704   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:05:33.052747   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:33.552301   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:34.052922   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:34.552338   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:35.052106   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:35.552911   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:36.052814   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:36.552077   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:37.052666   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:37.552057   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:38.053198   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:38.552163   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:39.052589   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:39.552701   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:40.053069   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:40.552436   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:41.053071   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:41.158552   62747 kubeadm.go:1107] duration metric: took 11.963074905s to wait for elevateKubeSystemPrivileges
	W0416 01:05:41.158601   62747 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0416 01:05:41.158611   62747 kubeadm.go:393] duration metric: took 5m14.369080866s to StartCluster
	I0416 01:05:41.158638   62747 settings.go:142] acquiring lock: {Name:mk6e42a297b4f7bfb79727f203ae36d752cbb6a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:05:41.158736   62747 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0416 01:05:41.160903   62747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/kubeconfig: {Name:mkbb3b028de7d57df8335e83f6dfa1b0eacb2fb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:05:41.161229   62747 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.225 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0416 01:05:41.163312   62747 out.go:177] * Verifying Kubernetes components...
	I0416 01:05:40.562916   61267 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.348033752s)
	I0416 01:05:40.562991   61267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 01:05:40.580700   61267 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 01:05:40.592069   61267 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 01:05:40.606450   61267 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 01:05:40.606477   61267 kubeadm.go:156] found existing configuration files:
	
	I0416 01:05:40.606531   61267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0416 01:05:40.617547   61267 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 01:05:40.617622   61267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 01:05:40.631465   61267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0416 01:05:40.644464   61267 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 01:05:40.644553   61267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 01:05:40.655929   61267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0416 01:05:40.664995   61267 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 01:05:40.665059   61267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 01:05:40.674477   61267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0416 01:05:40.683500   61267 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 01:05:40.683570   61267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 01:05:40.693774   61267 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 01:05:40.753612   61267 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0416 01:05:40.753717   61267 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 01:05:40.911483   61267 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 01:05:40.911609   61267 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 01:05:40.911748   61267 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 01:05:41.170137   61267 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 01:05:41.161331   62747 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0416 01:05:41.161434   62747 config.go:182] Loaded profile config "embed-certs-617092": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 01:05:41.165023   62747 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-617092"
	I0416 01:05:41.165044   62747 addons.go:69] Setting metrics-server=true in profile "embed-certs-617092"
	I0416 01:05:41.165081   62747 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-617092"
	I0416 01:05:41.165084   62747 addons.go:234] Setting addon metrics-server=true in "embed-certs-617092"
	W0416 01:05:41.165090   62747 addons.go:243] addon storage-provisioner should already be in state true
	W0416 01:05:41.165091   62747 addons.go:243] addon metrics-server should already be in state true
	I0416 01:05:41.165117   62747 host.go:66] Checking if "embed-certs-617092" exists ...
	I0416 01:05:41.165052   62747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 01:05:41.165025   62747 addons.go:69] Setting default-storageclass=true in profile "embed-certs-617092"
	I0416 01:05:41.165117   62747 host.go:66] Checking if "embed-certs-617092" exists ...
	I0416 01:05:41.165174   62747 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-617092"
	I0416 01:05:41.165464   62747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:41.165480   62747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:41.165549   62747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:41.165569   62747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:41.165549   62747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:41.165651   62747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:41.183063   62747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46083
	I0416 01:05:41.183551   62747 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:41.184135   62747 main.go:141] libmachine: Using API Version  1
	I0416 01:05:41.184158   62747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:41.184578   62747 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:41.185298   62747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:41.185337   62747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:41.185763   62747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43633
	I0416 01:05:41.185823   62747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46197
	I0416 01:05:41.186233   62747 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:41.186400   62747 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:41.186701   62747 main.go:141] libmachine: Using API Version  1
	I0416 01:05:41.186726   62747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:41.186861   62747 main.go:141] libmachine: Using API Version  1
	I0416 01:05:41.186881   62747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:41.187211   62747 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:41.187233   62747 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:41.187415   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetState
	I0416 01:05:41.187763   62747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:41.187781   62747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:41.191018   62747 addons.go:234] Setting addon default-storageclass=true in "embed-certs-617092"
	W0416 01:05:41.191038   62747 addons.go:243] addon default-storageclass should already be in state true
	I0416 01:05:41.191068   62747 host.go:66] Checking if "embed-certs-617092" exists ...
	I0416 01:05:41.191346   62747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:41.191384   62747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:41.202643   62747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45285
	I0416 01:05:41.203122   62747 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:41.203607   62747 main.go:141] libmachine: Using API Version  1
	I0416 01:05:41.203627   62747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:41.203952   62747 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:41.204124   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetState
	I0416 01:05:41.204325   62747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45643
	I0416 01:05:41.204721   62747 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:41.205188   62747 main.go:141] libmachine: Using API Version  1
	I0416 01:05:41.205207   62747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:41.205860   62747 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:41.206056   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetState
	I0416 01:05:41.206084   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 01:05:41.208051   62747 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0416 01:05:41.209179   62747 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0416 01:05:41.209197   62747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0416 01:05:41.207724   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 01:05:41.209214   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:05:41.210728   62747 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 01:05:41.171860   61267 out.go:204]   - Generating certificates and keys ...
	I0416 01:05:41.171969   61267 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 01:05:41.172043   61267 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 01:05:41.172139   61267 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0416 01:05:41.172803   61267 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0416 01:05:41.173065   61267 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0416 01:05:41.173653   61267 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0416 01:05:41.174077   61267 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0416 01:05:41.174586   61267 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0416 01:05:41.175034   61267 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0416 01:05:41.175570   61267 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0416 01:05:41.175888   61267 kubeadm.go:309] [certs] Using the existing "sa" key
	I0416 01:05:41.175968   61267 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 01:05:41.439471   61267 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 01:05:41.524693   61267 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0416 01:05:42.001762   61267 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 01:05:42.139805   61267 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 01:05:42.198091   61267 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 01:05:42.198762   61267 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 01:05:42.202915   61267 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 01:05:42.204549   61267 out.go:204]   - Booting up control plane ...
	I0416 01:05:42.204673   61267 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 01:05:42.204816   61267 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 01:05:42.205761   61267 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 01:05:42.225187   61267 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 01:05:42.225917   61267 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 01:05:42.225972   61267 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 01:05:42.367087   61267 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 01:05:41.210575   62747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34385
	I0416 01:05:41.211905   62747 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 01:05:41.211923   62747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0416 01:05:41.211942   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:05:41.212835   62747 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:41.212972   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:05:41.213577   62747 main.go:141] libmachine: Using API Version  1
	I0416 01:05:41.213597   62747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:41.213610   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:05:41.213628   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:05:41.214039   62747 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:41.214657   62747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:41.214693   62747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:41.215005   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:05:41.215635   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:05:41.215905   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:05:41.215933   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:05:41.216058   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:05:41.216109   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:05:41.216242   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:05:41.216303   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:05:41.216447   62747 sshutil.go:53] new ssh client: &{IP:192.168.61.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/embed-certs-617092/id_rsa Username:docker}
	I0416 01:05:41.216466   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:05:41.216544   62747 sshutil.go:53] new ssh client: &{IP:192.168.61.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/embed-certs-617092/id_rsa Username:docker}
	I0416 01:05:41.236284   62747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40007
	I0416 01:05:41.237670   62747 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:41.238270   62747 main.go:141] libmachine: Using API Version  1
	I0416 01:05:41.238288   62747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:41.241258   62747 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:41.241453   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetState
	I0416 01:05:41.243397   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 01:05:41.243724   62747 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0416 01:05:41.243740   62747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0416 01:05:41.243758   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:05:41.247426   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:05:41.248034   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:05:41.248144   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:05:41.248423   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:05:41.249376   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:05:41.249600   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:05:41.249799   62747 sshutil.go:53] new ssh client: &{IP:192.168.61.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/embed-certs-617092/id_rsa Username:docker}
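Each "new ssh client" entry above corresponds to an ordinary SSH session into the VM using the per-machine key; a hedged manual equivalent, reusing the same parameters shown in the log, would be:

  ssh -i /home/jenkins/minikube-integration/18647-7542/.minikube/machines/embed-certs-617092/id_rsa \
      docker@192.168.61.225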
	I0416 01:05:41.414823   62747 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 01:05:41.436007   62747 node_ready.go:35] waiting up to 6m0s for node "embed-certs-617092" to be "Ready" ...
	I0416 01:05:41.452344   62747 node_ready.go:49] node "embed-certs-617092" has status "Ready":"True"
	I0416 01:05:41.452370   62747 node_ready.go:38] duration metric: took 16.328329ms for node "embed-certs-617092" to be "Ready" ...
	I0416 01:05:41.452382   62747 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:05:41.467673   62747 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:41.477985   62747 pod_ready.go:92] pod "etcd-embed-certs-617092" in "kube-system" namespace has status "Ready":"True"
	I0416 01:05:41.478019   62747 pod_ready.go:81] duration metric: took 10.312538ms for pod "etcd-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:41.478032   62747 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:41.485978   62747 pod_ready.go:92] pod "kube-apiserver-embed-certs-617092" in "kube-system" namespace has status "Ready":"True"
	I0416 01:05:41.486003   62747 pod_ready.go:81] duration metric: took 7.961029ms for pod "kube-apiserver-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:41.486015   62747 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:41.491586   62747 pod_ready.go:92] pod "kube-controller-manager-embed-certs-617092" in "kube-system" namespace has status "Ready":"True"
	I0416 01:05:41.491608   62747 pod_ready.go:81] duration metric: took 5.584682ms for pod "kube-controller-manager-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:41.491619   62747 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-p4rh9" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:41.591874   62747 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0416 01:05:41.630528   62747 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0416 01:05:41.630554   62747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0416 01:05:41.653822   62747 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 01:05:41.718742   62747 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0416 01:05:41.718775   62747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0416 01:05:41.750701   62747 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0416 01:05:41.750725   62747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0416 01:05:41.798873   62747 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0416 01:05:41.961373   62747 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:41.961415   62747 main.go:141] libmachine: (embed-certs-617092) Calling .Close
	I0416 01:05:41.961857   62747 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:41.961879   62747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:41.961890   62747 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:41.961909   62747 main.go:141] libmachine: (embed-certs-617092) Calling .Close
	I0416 01:05:41.962200   62747 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:41.962205   62747 main.go:141] libmachine: (embed-certs-617092) DBG | Closing plugin on server side
	I0416 01:05:41.962216   62747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:41.974163   62747 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:41.974189   62747 main.go:141] libmachine: (embed-certs-617092) Calling .Close
	I0416 01:05:41.974517   62747 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:41.974537   62747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:42.721070   62747 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.067206266s)
	I0416 01:05:42.721119   62747 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:42.721130   62747 main.go:141] libmachine: (embed-certs-617092) Calling .Close
	I0416 01:05:42.721551   62747 main.go:141] libmachine: (embed-certs-617092) DBG | Closing plugin on server side
	I0416 01:05:42.721594   62747 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:42.721613   62747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:42.721636   62747 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:42.721648   62747 main.go:141] libmachine: (embed-certs-617092) Calling .Close
	I0416 01:05:42.721972   62747 main.go:141] libmachine: (embed-certs-617092) DBG | Closing plugin on server side
	I0416 01:05:42.721987   62747 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:42.722006   62747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:43.123544   62747 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.324616723s)
	I0416 01:05:43.123593   62747 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:43.123608   62747 main.go:141] libmachine: (embed-certs-617092) Calling .Close
	I0416 01:05:43.123867   62747 main.go:141] libmachine: (embed-certs-617092) DBG | Closing plugin on server side
	I0416 01:05:43.123906   62747 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:43.123913   62747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:43.123922   62747 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:43.123928   62747 main.go:141] libmachine: (embed-certs-617092) Calling .Close
	I0416 01:05:43.124218   62747 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:43.124234   62747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:43.124234   62747 main.go:141] libmachine: (embed-certs-617092) DBG | Closing plugin on server side
	I0416 01:05:43.124255   62747 addons.go:470] Verifying addon metrics-server=true in "embed-certs-617092"
	I0416 01:05:43.125829   62747 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0416 01:05:43.127138   62747 addons.go:505] duration metric: took 1.965815007s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
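The addon flow above stages each manifest under /etc/kubernetes/addons over SSH (the "scp memory -->" lines) and then applies them with the cluster's own kubectl binary against the in-VM kubeconfig. A minimal sketch of the equivalent manual invocation, reusing the exact paths from the log (run inside the VM):

  sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
    /var/lib/minikube/binaries/v1.29.3/kubectl apply \
    -f /etc/kubernetes/addons/metrics-apiservice.yaml \
    -f /etc/kubernetes/addons/metrics-server-deployment.yaml \
    -f /etc/kubernetes/addons/metrics-server-rbac.yaml \
    -f /etc/kubernetes/addons/metrics-server-service.yaml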
	I0416 01:05:43.536374   62747 pod_ready.go:102] pod "kube-proxy-p4rh9" in "kube-system" namespace has status "Ready":"False"
	I0416 01:05:44.000571   62747 pod_ready.go:92] pod "kube-proxy-p4rh9" in "kube-system" namespace has status "Ready":"True"
	I0416 01:05:44.000594   62747 pod_ready.go:81] duration metric: took 2.508967748s for pod "kube-proxy-p4rh9" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:44.000603   62747 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:44.006516   62747 pod_ready.go:92] pod "kube-scheduler-embed-certs-617092" in "kube-system" namespace has status "Ready":"True"
	I0416 01:05:44.006540   62747 pod_ready.go:81] duration metric: took 5.930755ms for pod "kube-scheduler-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:44.006546   62747 pod_ready.go:38] duration metric: took 2.554153393s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:05:44.006560   62747 api_server.go:52] waiting for apiserver process to appear ...
	I0416 01:05:44.006612   62747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:05:44.030705   62747 api_server.go:72] duration metric: took 2.869432993s to wait for apiserver process to appear ...
	I0416 01:05:44.030737   62747 api_server.go:88] waiting for apiserver healthz status ...
	I0416 01:05:44.030759   62747 api_server.go:253] Checking apiserver healthz at https://192.168.61.225:8443/healthz ...
	I0416 01:05:44.035576   62747 api_server.go:279] https://192.168.61.225:8443/healthz returned 200:
	ok
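The healthz probe is a plain HTTPS GET against the apiserver endpoint shown above. Assuming anonymous access to the health endpoints is left at its Kubernetes default (an assumption, not something the log states), the same check can be reproduced from the host:

  # expects the literal body "ok" with HTTP 200
  curl -k https://192.168.61.225:8443/healthz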
	I0416 01:05:44.037948   62747 api_server.go:141] control plane version: v1.29.3
	I0416 01:05:44.037973   62747 api_server.go:131] duration metric: took 7.228106ms to wait for apiserver health ...
	I0416 01:05:44.037983   62747 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 01:05:44.044543   62747 system_pods.go:59] 9 kube-system pods found
	I0416 01:05:44.044574   62747 system_pods.go:61] "coredns-76f75df574-2q58l" [e9b9d000-738b-4110-8757-17f76197285c] Running
	I0416 01:05:44.044581   62747 system_pods.go:61] "coredns-76f75df574-h8k4k" [1b114848-1137-4215-a966-03db39e4de23] Running
	I0416 01:05:44.044586   62747 system_pods.go:61] "etcd-embed-certs-617092" [f65e9307-4e12-4ac4-baca-7e1cfd7415d5] Running
	I0416 01:05:44.044591   62747 system_pods.go:61] "kube-apiserver-embed-certs-617092" [f55e02ce-45cf-4f6e-b8d7-7f305f22ea52] Running
	I0416 01:05:44.044596   62747 system_pods.go:61] "kube-controller-manager-embed-certs-617092" [d16739c1-36f4-4748-8533-fcc6cea0adee] Running
	I0416 01:05:44.044601   62747 system_pods.go:61] "kube-proxy-p4rh9" [42041028-d085-4ec4-8213-da3af0d5290e] Running
	I0416 01:05:44.044606   62747 system_pods.go:61] "kube-scheduler-embed-certs-617092" [d61e24fe-a5e3-41bf-b212-75764a036a26] Running
	I0416 01:05:44.044614   62747 system_pods.go:61] "metrics-server-57f55c9bc5-j5clp" [99808b2d-344f-43b7-a29c-01f0a2026aa8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 01:05:44.044623   62747 system_pods.go:61] "storage-provisioner" [5a62c0f7-0b15-48f3-9c17-d5966d39fbd5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0416 01:05:44.044635   62747 system_pods.go:74] duration metric: took 6.6454ms to wait for pod list to return data ...
	I0416 01:05:44.044652   62747 default_sa.go:34] waiting for default service account to be created ...
	I0416 01:05:44.241344   62747 default_sa.go:45] found service account: "default"
	I0416 01:05:44.241370   62747 default_sa.go:55] duration metric: took 196.710973ms for default service account to be created ...
	I0416 01:05:44.241379   62747 system_pods.go:116] waiting for k8s-apps to be running ...
	I0416 01:05:44.450798   62747 system_pods.go:86] 9 kube-system pods found
	I0416 01:05:44.450825   62747 system_pods.go:89] "coredns-76f75df574-2q58l" [e9b9d000-738b-4110-8757-17f76197285c] Running
	I0416 01:05:44.450831   62747 system_pods.go:89] "coredns-76f75df574-h8k4k" [1b114848-1137-4215-a966-03db39e4de23] Running
	I0416 01:05:44.450835   62747 system_pods.go:89] "etcd-embed-certs-617092" [f65e9307-4e12-4ac4-baca-7e1cfd7415d5] Running
	I0416 01:05:44.450839   62747 system_pods.go:89] "kube-apiserver-embed-certs-617092" [f55e02ce-45cf-4f6e-b8d7-7f305f22ea52] Running
	I0416 01:05:44.450844   62747 system_pods.go:89] "kube-controller-manager-embed-certs-617092" [d16739c1-36f4-4748-8533-fcc6cea0adee] Running
	I0416 01:05:44.450848   62747 system_pods.go:89] "kube-proxy-p4rh9" [42041028-d085-4ec4-8213-da3af0d5290e] Running
	I0416 01:05:44.450851   62747 system_pods.go:89] "kube-scheduler-embed-certs-617092" [d61e24fe-a5e3-41bf-b212-75764a036a26] Running
	I0416 01:05:44.450858   62747 system_pods.go:89] "metrics-server-57f55c9bc5-j5clp" [99808b2d-344f-43b7-a29c-01f0a2026aa8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 01:05:44.450864   62747 system_pods.go:89] "storage-provisioner" [5a62c0f7-0b15-48f3-9c17-d5966d39fbd5] Running
	I0416 01:05:44.450871   62747 system_pods.go:126] duration metric: took 209.487599ms to wait for k8s-apps to be running ...
	I0416 01:05:44.450889   62747 system_svc.go:44] waiting for kubelet service to be running ....
	I0416 01:05:44.450943   62747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 01:05:44.470820   62747 system_svc.go:56] duration metric: took 19.925743ms WaitForService to wait for kubelet
	I0416 01:05:44.470853   62747 kubeadm.go:576] duration metric: took 3.309585995s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 01:05:44.470876   62747 node_conditions.go:102] verifying NodePressure condition ...
	I0416 01:05:44.642093   62747 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 01:05:44.642123   62747 node_conditions.go:123] node cpu capacity is 2
	I0416 01:05:44.642135   62747 node_conditions.go:105] duration metric: took 171.253415ms to run NodePressure ...
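The NodePressure figures (2 CPUs, 17734596Ki ephemeral storage) are read from the node's reported capacity; once kubectl is configured they can be checked back with, for example:

  kubectl describe node embed-certs-617092 | grep -A 6 '^Capacity:'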
	I0416 01:05:44.642149   62747 start.go:240] waiting for startup goroutines ...
	I0416 01:05:44.642158   62747 start.go:245] waiting for cluster config update ...
	I0416 01:05:44.642171   62747 start.go:254] writing updated cluster config ...
	I0416 01:05:44.642519   62747 ssh_runner.go:195] Run: rm -f paused
	I0416 01:05:44.707141   62747 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0416 01:05:44.709274   62747 out.go:177] * Done! kubectl is now configured to use "embed-certs-617092" cluster and "default" namespace by default
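With the profile marked Done, the host kubeconfig now carries a context named after the profile (a minikube convention, assumed here rather than shown in the log), so a basic sanity check from the host looks like:

  kubectl config use-context embed-certs-617092
  kubectl get nodes
  kubectl -n kube-system get pods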
	I0416 01:05:48.372574   61267 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.002543 seconds
	I0416 01:05:48.385076   61267 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0416 01:05:48.406058   61267 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0416 01:05:48.938329   61267 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0416 01:05:48.938556   61267 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-653942 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0416 01:05:49.458321   61267 kubeadm.go:309] [bootstrap-token] Using token: 5ddaoe.tvzldvzlkbeta1a9
	I0416 01:05:49.459891   61267 out.go:204]   - Configuring RBAC rules ...
	I0416 01:05:49.460064   61267 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0416 01:05:49.465799   61267 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0416 01:05:49.477346   61267 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0416 01:05:49.482154   61267 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0416 01:05:49.485769   61267 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0416 01:05:49.489199   61267 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0416 01:05:49.504774   61267 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0416 01:05:49.770133   61267 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0416 01:05:49.872777   61267 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0416 01:05:49.874282   61267 kubeadm.go:309] 
	I0416 01:05:49.874384   61267 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0416 01:05:49.874400   61267 kubeadm.go:309] 
	I0416 01:05:49.874560   61267 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0416 01:05:49.874580   61267 kubeadm.go:309] 
	I0416 01:05:49.874602   61267 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0416 01:05:49.874673   61267 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0416 01:05:49.874754   61267 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0416 01:05:49.874766   61267 kubeadm.go:309] 
	I0416 01:05:49.874853   61267 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0416 01:05:49.874878   61267 kubeadm.go:309] 
	I0416 01:05:49.874944   61267 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0416 01:05:49.874956   61267 kubeadm.go:309] 
	I0416 01:05:49.875019   61267 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0416 01:05:49.875141   61267 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0416 01:05:49.875246   61267 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0416 01:05:49.875257   61267 kubeadm.go:309] 
	I0416 01:05:49.875432   61267 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0416 01:05:49.875552   61267 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0416 01:05:49.875562   61267 kubeadm.go:309] 
	I0416 01:05:49.875657   61267 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token 5ddaoe.tvzldvzlkbeta1a9 \
	I0416 01:05:49.875754   61267 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b25013486716312e69a135916990864bd549ec06035b8322dd39250241b50fde \
	I0416 01:05:49.875774   61267 kubeadm.go:309] 	--control-plane 
	I0416 01:05:49.875780   61267 kubeadm.go:309] 
	I0416 01:05:49.875859   61267 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0416 01:05:49.875869   61267 kubeadm.go:309] 
	I0416 01:05:49.875949   61267 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token 5ddaoe.tvzldvzlkbeta1a9 \
	I0416 01:05:49.876085   61267 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b25013486716312e69a135916990864bd549ec06035b8322dd39250241b50fde 
	I0416 01:05:49.876640   61267 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
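The trailing kubeadm warning is harmless for this run, since minikube starts the kubelet itself (see the "systemctl start kubelet" commands elsewhere in the log), but on a long-lived node it can be addressed exactly as the message suggests:

  sudo systemctl enable kubelet.service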
	I0416 01:05:49.876666   61267 cni.go:84] Creating CNI manager for ""
	I0416 01:05:49.876676   61267 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 01:05:49.878703   61267 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0416 01:05:49.880070   61267 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0416 01:05:49.897752   61267 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
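The 496-byte conflist itself is not reproduced in the log. Purely as an illustration of the shape of a bridge CNI config, not the exact file minikube writes, a minimal /etc/cni/net.d/1-k8s.conflist could look like this (the subnet and plugin options are assumptions):

sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF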
	I0416 01:05:49.969146   61267 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0416 01:05:49.969228   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:49.969228   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-653942 minikube.k8s.io/updated_at=2024_04_16T01_05_49_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=e98a202b00bb48daab8bbee2f0c7bcf1556f8388 minikube.k8s.io/name=default-k8s-diff-port-653942 minikube.k8s.io/primary=true
	I0416 01:05:50.233119   61267 ops.go:34] apiserver oom_adj: -16
	I0416 01:05:50.233262   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:50.733748   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:51.234361   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:51.733704   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:52.233367   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:52.733789   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:53.234012   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:53.733458   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:54.233341   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:54.734148   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:55.233710   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:55.734135   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:56.233315   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:56.734162   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:57.233899   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:57.733337   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:58.234101   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:58.734357   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:59.233831   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:59.733286   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:06:00.233847   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:06:00.733872   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:06:01.233935   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:06:01.733629   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:06:02.233967   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:06:02.734163   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:06:03.233294   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:06:03.412834   61267 kubeadm.go:1107] duration metric: took 13.44368469s to wait for elevateKubeSystemPrivileges
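The run of identical "kubectl get sa default" invocations above is a poll: the bootstrap retries roughly twice a second until the default service account exists (about 13.4s here). A hedged shell equivalent of that wait, reusing the binary and kubeconfig paths from the log:

  until sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default \
      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
    sleep 0.5
  done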
	W0416 01:06:03.412896   61267 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0416 01:06:03.412907   61267 kubeadm.go:393] duration metric: took 5m17.8108087s to StartCluster
	I0416 01:06:03.412926   61267 settings.go:142] acquiring lock: {Name:mk6e42a297b4f7bfb79727f203ae36d752cbb6a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:06:03.413003   61267 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0416 01:06:03.414974   61267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/kubeconfig: {Name:mkbb3b028de7d57df8335e83f6dfa1b0eacb2fb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:06:03.415299   61267 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.216 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0416 01:06:03.417148   61267 out.go:177] * Verifying Kubernetes components...
	I0416 01:06:03.415390   61267 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0416 01:06:03.415510   61267 config.go:182] Loaded profile config "default-k8s-diff-port-653942": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 01:06:03.417238   61267 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-653942"
	I0416 01:06:03.419134   61267 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-653942"
	W0416 01:06:03.419147   61267 addons.go:243] addon storage-provisioner should already be in state true
	I0416 01:06:03.417247   61267 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-653942"
	I0416 01:06:03.419188   61267 host.go:66] Checking if "default-k8s-diff-port-653942" exists ...
	I0416 01:06:03.419214   61267 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-653942"
	I0416 01:06:03.417245   61267 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-653942"
	I0416 01:06:03.419095   61267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	W0416 01:06:03.419262   61267 addons.go:243] addon metrics-server should already be in state true
	I0416 01:06:03.419307   61267 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-653942"
	I0416 01:06:03.419327   61267 host.go:66] Checking if "default-k8s-diff-port-653942" exists ...
	I0416 01:06:03.419606   61267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:06:03.419644   61267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:06:03.419662   61267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:06:03.419698   61267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:06:03.419722   61267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:06:03.419756   61267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:06:03.435784   61267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44663
	I0416 01:06:03.435800   61267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37505
	I0416 01:06:03.436294   61267 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:06:03.436296   61267 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:06:03.436811   61267 main.go:141] libmachine: Using API Version  1
	I0416 01:06:03.436838   61267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:06:03.437097   61267 main.go:141] libmachine: Using API Version  1
	I0416 01:06:03.437115   61267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:06:03.437203   61267 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:06:03.437683   61267 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:06:03.437757   61267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:06:03.437790   61267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:06:03.438213   61267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33329
	I0416 01:06:03.438248   61267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:06:03.438273   61267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:06:03.438786   61267 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:06:03.439301   61267 main.go:141] libmachine: Using API Version  1
	I0416 01:06:03.439332   61267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:06:03.439810   61267 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:06:03.440162   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetState
	I0416 01:06:03.443879   61267 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-653942"
	W0416 01:06:03.443906   61267 addons.go:243] addon default-storageclass should already be in state true
	I0416 01:06:03.443941   61267 host.go:66] Checking if "default-k8s-diff-port-653942" exists ...
	I0416 01:06:03.444301   61267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:06:03.444340   61267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:06:03.454673   61267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43261
	I0416 01:06:03.455111   61267 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:06:03.455715   61267 main.go:141] libmachine: Using API Version  1
	I0416 01:06:03.455742   61267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:06:03.456116   61267 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:06:03.456318   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetState
	I0416 01:06:03.457870   61267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39341
	I0416 01:06:03.458086   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 01:06:03.458278   61267 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:06:03.462516   61267 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0416 01:06:03.458862   61267 main.go:141] libmachine: Using API Version  1
	I0416 01:06:03.460354   61267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43753
	I0416 01:06:03.464491   61267 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0416 01:06:03.464509   61267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0416 01:06:03.464529   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:06:03.464551   61267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:06:03.464960   61267 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:06:03.465281   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetState
	I0416 01:06:03.465552   61267 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:06:03.466181   61267 main.go:141] libmachine: Using API Version  1
	I0416 01:06:03.466205   61267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:06:03.466760   61267 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:06:03.467410   61267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:06:03.467435   61267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:06:03.467638   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 01:06:03.469647   61267 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 01:06:03.471009   61267 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 01:06:03.471024   61267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0416 01:06:03.469242   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:06:03.471040   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:06:03.469767   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:06:03.471070   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:06:03.471133   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:06:03.471297   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:06:03.471478   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:06:03.471661   61267 sshutil.go:53] new ssh client: &{IP:192.168.50.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/default-k8s-diff-port-653942/id_rsa Username:docker}
	I0416 01:06:03.473778   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:06:03.474203   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:06:03.474226   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:06:03.474421   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:06:03.474605   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:06:03.474784   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:06:03.474958   61267 sshutil.go:53] new ssh client: &{IP:192.168.50.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/default-k8s-diff-port-653942/id_rsa Username:docker}
	I0416 01:06:03.485829   61267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46571
	I0416 01:06:03.486293   61267 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:06:03.486876   61267 main.go:141] libmachine: Using API Version  1
	I0416 01:06:03.486900   61267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:06:03.487362   61267 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:06:03.487535   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetState
	I0416 01:06:03.489207   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 01:06:03.489529   61267 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0416 01:06:03.489549   61267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0416 01:06:03.489568   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:06:03.492570   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:06:03.492932   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:06:03.492958   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:06:03.493224   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:06:03.493379   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:06:03.493557   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:06:03.493673   61267 sshutil.go:53] new ssh client: &{IP:192.168.50.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/default-k8s-diff-port-653942/id_rsa Username:docker}
	I0416 01:06:03.680085   61267 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 01:06:03.724011   61267 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-653942" to be "Ready" ...
	I0416 01:06:03.739131   61267 node_ready.go:49] node "default-k8s-diff-port-653942" has status "Ready":"True"
	I0416 01:06:03.739152   61267 node_ready.go:38] duration metric: took 15.111832ms for node "default-k8s-diff-port-653942" to be "Ready" ...
	I0416 01:06:03.739161   61267 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:06:03.748081   61267 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-5nnpv" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:03.810063   61267 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0416 01:06:03.810090   61267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0416 01:06:03.812595   61267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0416 01:06:03.848165   61267 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0416 01:06:03.848187   61267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0416 01:06:03.991110   61267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 01:06:03.997100   61267 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0416 01:06:03.997133   61267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0416 01:06:04.093267   61267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0416 01:06:04.349978   61267 main.go:141] libmachine: Making call to close driver server
	I0416 01:06:04.350011   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .Close
	I0416 01:06:04.350336   61267 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:06:04.350396   61267 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:06:04.350415   61267 main.go:141] libmachine: Making call to close driver server
	I0416 01:06:04.350420   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | Closing plugin on server side
	I0416 01:06:04.350425   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .Close
	I0416 01:06:04.350683   61267 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:06:04.350699   61267 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:06:04.416648   61267 main.go:141] libmachine: Making call to close driver server
	I0416 01:06:04.416674   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .Close
	I0416 01:06:04.416982   61267 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:06:04.417001   61267 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:06:05.206973   61267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.113663167s)
	I0416 01:06:05.207025   61267 main.go:141] libmachine: Making call to close driver server
	I0416 01:06:05.207040   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .Close
	I0416 01:06:05.207039   61267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.215892308s)
	I0416 01:06:05.207078   61267 main.go:141] libmachine: Making call to close driver server
	I0416 01:06:05.207090   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .Close
	I0416 01:06:05.207371   61267 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:06:05.207388   61267 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:06:05.207397   61267 main.go:141] libmachine: Making call to close driver server
	I0416 01:06:05.207405   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .Close
	I0416 01:06:05.207445   61267 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:06:05.207462   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | Closing plugin on server side
	I0416 01:06:05.207466   61267 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:06:05.207490   61267 main.go:141] libmachine: Making call to close driver server
	I0416 01:06:05.207508   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .Close
	I0416 01:06:05.207610   61267 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:06:05.207644   61267 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:06:05.207654   61267 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-653942"
	I0416 01:06:05.207654   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | Closing plugin on server side
	I0416 01:06:05.209411   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | Closing plugin on server side
	I0416 01:06:05.209402   61267 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:06:05.209469   61267 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:06:05.212071   61267 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0416 01:06:05.213412   61267 addons.go:505] duration metric: took 1.798038731s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I0416 01:06:05.256497   61267 pod_ready.go:92] pod "coredns-76f75df574-5nnpv" in "kube-system" namespace has status "Ready":"True"
	I0416 01:06:05.256526   61267 pod_ready.go:81] duration metric: took 1.508419977s for pod "coredns-76f75df574-5nnpv" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.256538   61267 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-zpnhs" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.262092   61267 pod_ready.go:92] pod "coredns-76f75df574-zpnhs" in "kube-system" namespace has status "Ready":"True"
	I0416 01:06:05.262112   61267 pod_ready.go:81] duration metric: took 5.566499ms for pod "coredns-76f75df574-zpnhs" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.262121   61267 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.267256   61267 pod_ready.go:92] pod "etcd-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"True"
	I0416 01:06:05.267278   61267 pod_ready.go:81] duration metric: took 5.149782ms for pod "etcd-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.267286   61267 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.272119   61267 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"True"
	I0416 01:06:05.272144   61267 pod_ready.go:81] duration metric: took 4.851008ms for pod "kube-apiserver-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.272155   61267 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.328440   61267 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"True"
	I0416 01:06:05.328470   61267 pod_ready.go:81] duration metric: took 56.30531ms for pod "kube-controller-manager-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.328482   61267 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mg5km" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.729518   61267 pod_ready.go:92] pod "kube-proxy-mg5km" in "kube-system" namespace has status "Ready":"True"
	I0416 01:06:05.729544   61267 pod_ready.go:81] duration metric: took 401.055058ms for pod "kube-proxy-mg5km" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.729553   61267 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:06.127535   61267 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"True"
	I0416 01:06:06.127558   61267 pod_ready.go:81] duration metric: took 397.998988ms for pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:06.127565   61267 pod_ready.go:38] duration metric: took 2.388395448s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:06:06.127577   61267 api_server.go:52] waiting for apiserver process to appear ...
	I0416 01:06:06.127620   61267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:06:06.150179   61267 api_server.go:72] duration metric: took 2.734842767s to wait for apiserver process to appear ...
	I0416 01:06:06.150208   61267 api_server.go:88] waiting for apiserver healthz status ...
	I0416 01:06:06.150226   61267 api_server.go:253] Checking apiserver healthz at https://192.168.50.216:8444/healthz ...
	I0416 01:06:06.154310   61267 api_server.go:279] https://192.168.50.216:8444/healthz returned 200:
	ok
	I0416 01:06:06.155393   61267 api_server.go:141] control plane version: v1.29.3
	I0416 01:06:06.155409   61267 api_server.go:131] duration metric: took 5.194458ms to wait for apiserver health ...
	I0416 01:06:06.155421   61267 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 01:06:06.333873   61267 system_pods.go:59] 9 kube-system pods found
	I0416 01:06:06.333909   61267 system_pods.go:61] "coredns-76f75df574-5nnpv" [3350aca5-639e-44a1-bd84-d1e4b6486143] Running
	I0416 01:06:06.333914   61267 system_pods.go:61] "coredns-76f75df574-zpnhs" [990672b6-bb3a-4f91-8de7-7c2ec224c94a] Running
	I0416 01:06:06.333917   61267 system_pods.go:61] "etcd-default-k8s-diff-port-653942" [e72e89e9-c274-4d4d-b1f9-43bea95cd015] Running
	I0416 01:06:06.333920   61267 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-653942" [c1652126-b4c2-41cf-a574-9784f7800374] Running
	I0416 01:06:06.333923   61267 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-653942" [1f43936c-ba39-44f9-b9b7-2a149f26a880] Running
	I0416 01:06:06.333926   61267 system_pods.go:61] "kube-proxy-mg5km" [74764194-1f31-40b1-90b5-497e248ab7da] Running
	I0416 01:06:06.333929   61267 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-653942" [48058ade-c30d-4dc9-b6c0-32b2ed5fc88a] Running
	I0416 01:06:06.333935   61267 system_pods.go:61] "metrics-server-57f55c9bc5-6jn29" [1eec2ffb-ce59-45cb-b6b4-cd010549510e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 01:06:06.333938   61267 system_pods.go:61] "storage-provisioner" [d131c1fc-9124-4b46-a16f-a8fb5029a57b] Running
	I0416 01:06:06.333947   61267 system_pods.go:74] duration metric: took 178.520515ms to wait for pod list to return data ...
	I0416 01:06:06.333953   61267 default_sa.go:34] waiting for default service account to be created ...
	I0416 01:06:06.528119   61267 default_sa.go:45] found service account: "default"
	I0416 01:06:06.528148   61267 default_sa.go:55] duration metric: took 194.18786ms for default service account to be created ...
	I0416 01:06:06.528158   61267 system_pods.go:116] waiting for k8s-apps to be running ...
	I0416 01:06:06.731573   61267 system_pods.go:86] 9 kube-system pods found
	I0416 01:06:06.731600   61267 system_pods.go:89] "coredns-76f75df574-5nnpv" [3350aca5-639e-44a1-bd84-d1e4b6486143] Running
	I0416 01:06:06.731606   61267 system_pods.go:89] "coredns-76f75df574-zpnhs" [990672b6-bb3a-4f91-8de7-7c2ec224c94a] Running
	I0416 01:06:06.731610   61267 system_pods.go:89] "etcd-default-k8s-diff-port-653942" [e72e89e9-c274-4d4d-b1f9-43bea95cd015] Running
	I0416 01:06:06.731614   61267 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-653942" [c1652126-b4c2-41cf-a574-9784f7800374] Running
	I0416 01:06:06.731619   61267 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-653942" [1f43936c-ba39-44f9-b9b7-2a149f26a880] Running
	I0416 01:06:06.731622   61267 system_pods.go:89] "kube-proxy-mg5km" [74764194-1f31-40b1-90b5-497e248ab7da] Running
	I0416 01:06:06.731626   61267 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-653942" [48058ade-c30d-4dc9-b6c0-32b2ed5fc88a] Running
	I0416 01:06:06.731633   61267 system_pods.go:89] "metrics-server-57f55c9bc5-6jn29" [1eec2ffb-ce59-45cb-b6b4-cd010549510e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 01:06:06.731638   61267 system_pods.go:89] "storage-provisioner" [d131c1fc-9124-4b46-a16f-a8fb5029a57b] Running
	I0416 01:06:06.731649   61267 system_pods.go:126] duration metric: took 203.485273ms to wait for k8s-apps to be running ...
	I0416 01:06:06.731659   61267 system_svc.go:44] waiting for kubelet service to be running ....
	I0416 01:06:06.731700   61267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 01:06:06.749013   61267 system_svc.go:56] duration metric: took 17.343008ms WaitForService to wait for kubelet
	I0416 01:06:06.749048   61267 kubeadm.go:576] duration metric: took 3.333716529s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 01:06:06.749072   61267 node_conditions.go:102] verifying NodePressure condition ...
	I0416 01:06:06.927701   61267 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 01:06:06.927725   61267 node_conditions.go:123] node cpu capacity is 2
	I0416 01:06:06.927735   61267 node_conditions.go:105] duration metric: took 178.65899ms to run NodePressure ...
	I0416 01:06:06.927746   61267 start.go:240] waiting for startup goroutines ...
	I0416 01:06:06.927754   61267 start.go:245] waiting for cluster config update ...
	I0416 01:06:06.927763   61267 start.go:254] writing updated cluster config ...
	I0416 01:06:06.928000   61267 ssh_runner.go:195] Run: rm -f paused
	I0416 01:06:06.978823   61267 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0416 01:06:06.981011   61267 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-653942" cluster and "default" namespace by default
	I0416 01:06:14.261576   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:06:14.261834   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:06:14.261849   62139 kubeadm.go:309] 
	I0416 01:06:14.261890   62139 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0416 01:06:14.261973   62139 kubeadm.go:309] 		timed out waiting for the condition
	I0416 01:06:14.262006   62139 kubeadm.go:309] 
	I0416 01:06:14.262051   62139 kubeadm.go:309] 	This error is likely caused by:
	I0416 01:06:14.262082   62139 kubeadm.go:309] 		- The kubelet is not running
	I0416 01:06:14.262174   62139 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0416 01:06:14.262199   62139 kubeadm.go:309] 
	I0416 01:06:14.262357   62139 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0416 01:06:14.262414   62139 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0416 01:06:14.262471   62139 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0416 01:06:14.262481   62139 kubeadm.go:309] 
	I0416 01:06:14.262610   62139 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0416 01:06:14.262707   62139 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0416 01:06:14.262717   62139 kubeadm.go:309] 
	I0416 01:06:14.262867   62139 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0416 01:06:14.263010   62139 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0416 01:06:14.263142   62139 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0416 01:06:14.263211   62139 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0416 01:06:14.263234   62139 kubeadm.go:309] 
	I0416 01:06:14.264084   62139 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 01:06:14.264204   62139 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0416 01:06:14.264312   62139 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0416 01:06:14.264460   62139 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0416 01:06:14.264526   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0416 01:06:15.653692   62139 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.389136497s)
	I0416 01:06:15.653831   62139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 01:06:15.669141   62139 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 01:06:15.679485   62139 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 01:06:15.679511   62139 kubeadm.go:156] found existing configuration files:
	
	I0416 01:06:15.679556   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 01:06:15.689898   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 01:06:15.689974   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 01:06:15.700563   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 01:06:15.710363   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 01:06:15.710445   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 01:06:15.719877   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 01:06:15.728947   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 01:06:15.729002   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 01:06:15.739360   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 01:06:15.749479   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 01:06:15.749557   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 01:06:15.760930   62139 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 01:06:16.000974   62139 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 01:08:12.327133   62139 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0416 01:08:12.327246   62139 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0416 01:08:12.328995   62139 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0416 01:08:12.329092   62139 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 01:08:12.329220   62139 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 01:08:12.329302   62139 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 01:08:12.329440   62139 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 01:08:12.329537   62139 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 01:08:12.331381   62139 out.go:204]   - Generating certificates and keys ...
	I0416 01:08:12.331474   62139 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 01:08:12.331558   62139 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 01:08:12.331658   62139 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0416 01:08:12.331742   62139 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0416 01:08:12.331830   62139 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0416 01:08:12.331910   62139 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0416 01:08:12.331968   62139 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0416 01:08:12.332020   62139 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0416 01:08:12.332085   62139 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0416 01:08:12.332159   62139 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0416 01:08:12.332210   62139 kubeadm.go:309] [certs] Using the existing "sa" key
	I0416 01:08:12.332297   62139 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 01:08:12.332376   62139 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 01:08:12.332466   62139 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 01:08:12.332547   62139 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 01:08:12.332642   62139 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 01:08:12.332790   62139 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 01:08:12.332895   62139 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 01:08:12.332938   62139 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 01:08:12.333002   62139 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 01:08:12.334632   62139 out.go:204]   - Booting up control plane ...
	I0416 01:08:12.334737   62139 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 01:08:12.334837   62139 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 01:08:12.334928   62139 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 01:08:12.335009   62139 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 01:08:12.335162   62139 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 01:08:12.335241   62139 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0416 01:08:12.335333   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:08:12.335541   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:08:12.335613   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:08:12.335771   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:08:12.335848   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:08:12.336035   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:08:12.336109   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:08:12.336365   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:08:12.336438   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:08:12.336704   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:08:12.336716   62139 kubeadm.go:309] 
	I0416 01:08:12.336779   62139 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0416 01:08:12.336827   62139 kubeadm.go:309] 		timed out waiting for the condition
	I0416 01:08:12.336834   62139 kubeadm.go:309] 
	I0416 01:08:12.336883   62139 kubeadm.go:309] 	This error is likely caused by:
	I0416 01:08:12.336922   62139 kubeadm.go:309] 		- The kubelet is not running
	I0416 01:08:12.337025   62139 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0416 01:08:12.337036   62139 kubeadm.go:309] 
	I0416 01:08:12.337145   62139 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0416 01:08:12.337211   62139 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0416 01:08:12.337245   62139 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0416 01:08:12.337253   62139 kubeadm.go:309] 
	I0416 01:08:12.337340   62139 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0416 01:08:12.337428   62139 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0416 01:08:12.337436   62139 kubeadm.go:309] 
	I0416 01:08:12.337529   62139 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0416 01:08:12.337602   62139 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0416 01:08:12.337701   62139 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0416 01:08:12.337870   62139 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0416 01:08:12.337957   62139 kubeadm.go:393] duration metric: took 8m4.174818047s to StartCluster
	I0416 01:08:12.337969   62139 kubeadm.go:309] 
	I0416 01:08:12.338009   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:08:12.338067   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:08:12.391937   62139 cri.go:89] found id: ""
	I0416 01:08:12.391963   62139 logs.go:276] 0 containers: []
	W0416 01:08:12.391986   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:08:12.391994   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:08:12.392072   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:08:12.430575   62139 cri.go:89] found id: ""
	I0416 01:08:12.430602   62139 logs.go:276] 0 containers: []
	W0416 01:08:12.430616   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:08:12.430623   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:08:12.430685   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:08:12.469115   62139 cri.go:89] found id: ""
	I0416 01:08:12.469143   62139 logs.go:276] 0 containers: []
	W0416 01:08:12.469152   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:08:12.469173   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:08:12.469228   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:08:12.508599   62139 cri.go:89] found id: ""
	I0416 01:08:12.508630   62139 logs.go:276] 0 containers: []
	W0416 01:08:12.508640   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:08:12.508648   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:08:12.508698   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:08:12.547785   62139 cri.go:89] found id: ""
	I0416 01:08:12.547817   62139 logs.go:276] 0 containers: []
	W0416 01:08:12.547829   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:08:12.547836   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:08:12.547910   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:08:12.599526   62139 cri.go:89] found id: ""
	I0416 01:08:12.599549   62139 logs.go:276] 0 containers: []
	W0416 01:08:12.599557   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:08:12.599563   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:08:12.599612   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:08:12.639914   62139 cri.go:89] found id: ""
	I0416 01:08:12.639944   62139 logs.go:276] 0 containers: []
	W0416 01:08:12.639954   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:08:12.639962   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:08:12.640041   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:08:12.676025   62139 cri.go:89] found id: ""
	I0416 01:08:12.676057   62139 logs.go:276] 0 containers: []
	W0416 01:08:12.676066   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:08:12.676079   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:08:12.676100   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:08:12.774744   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:08:12.774769   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:08:12.774785   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:08:12.902751   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:08:12.902787   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:08:12.947370   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:08:12.947406   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:08:13.002186   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:08:13.002223   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0416 01:08:13.017193   62139 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0416 01:08:13.017234   62139 out.go:239] * 
	W0416 01:08:13.017283   62139 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0416 01:08:13.017304   62139 out.go:239] * 
	W0416 01:08:13.018151   62139 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0416 01:08:13.021371   62139 out.go:177] 
	W0416 01:08:13.022572   62139 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0416 01:08:13.022640   62139 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0416 01:08:13.022670   62139 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0416 01:08:13.024248   62139 out.go:177] 
	
	
	==> CRI-O <==
	Apr 16 01:08:14 old-k8s-version-800769 crio[651]: time="2024-04-16 01:08:14.760653424Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713229694760615901,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e9cf8418-1107-434f-be55-f0a55d3e5da0 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:08:14 old-k8s-version-800769 crio[651]: time="2024-04-16 01:08:14.761396420Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=02a2f23a-622d-40f7-839f-b94528f44d00 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:08:14 old-k8s-version-800769 crio[651]: time="2024-04-16 01:08:14.761445827Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=02a2f23a-622d-40f7-839f-b94528f44d00 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:08:14 old-k8s-version-800769 crio[651]: time="2024-04-16 01:08:14.761480658Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=02a2f23a-622d-40f7-839f-b94528f44d00 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:08:14 old-k8s-version-800769 crio[651]: time="2024-04-16 01:08:14.794484854Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1fab2371-bedd-4769-8f6f-32e0f2cff247 name=/runtime.v1.RuntimeService/Version
	Apr 16 01:08:14 old-k8s-version-800769 crio[651]: time="2024-04-16 01:08:14.794555217Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1fab2371-bedd-4769-8f6f-32e0f2cff247 name=/runtime.v1.RuntimeService/Version
	Apr 16 01:08:14 old-k8s-version-800769 crio[651]: time="2024-04-16 01:08:14.795952218Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=205775ad-5187-4eb4-9a73-e9eb41fada8e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:08:14 old-k8s-version-800769 crio[651]: time="2024-04-16 01:08:14.796307680Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713229694796283355,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=205775ad-5187-4eb4-9a73-e9eb41fada8e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:08:14 old-k8s-version-800769 crio[651]: time="2024-04-16 01:08:14.796871460Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e0585de7-9c86-4e29-9908-0beff9e559a3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:08:14 old-k8s-version-800769 crio[651]: time="2024-04-16 01:08:14.796921827Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e0585de7-9c86-4e29-9908-0beff9e559a3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:08:14 old-k8s-version-800769 crio[651]: time="2024-04-16 01:08:14.796955101Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e0585de7-9c86-4e29-9908-0beff9e559a3 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:08:14 old-k8s-version-800769 crio[651]: time="2024-04-16 01:08:14.830860913Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5cfc1d9e-e1b9-4867-aebf-e182cf91af3a name=/runtime.v1.RuntimeService/Version
	Apr 16 01:08:14 old-k8s-version-800769 crio[651]: time="2024-04-16 01:08:14.830972210Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5cfc1d9e-e1b9-4867-aebf-e182cf91af3a name=/runtime.v1.RuntimeService/Version
	Apr 16 01:08:14 old-k8s-version-800769 crio[651]: time="2024-04-16 01:08:14.832261983Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7efa6adb-426d-4826-a21b-b9bee8814823 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:08:14 old-k8s-version-800769 crio[651]: time="2024-04-16 01:08:14.832648049Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713229694832626144,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7efa6adb-426d-4826-a21b-b9bee8814823 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:08:14 old-k8s-version-800769 crio[651]: time="2024-04-16 01:08:14.833188361Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9dc29e31-dd0a-423b-ad7b-e95359949d43 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:08:14 old-k8s-version-800769 crio[651]: time="2024-04-16 01:08:14.833248016Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9dc29e31-dd0a-423b-ad7b-e95359949d43 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:08:14 old-k8s-version-800769 crio[651]: time="2024-04-16 01:08:14.833329197Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=9dc29e31-dd0a-423b-ad7b-e95359949d43 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:08:14 old-k8s-version-800769 crio[651]: time="2024-04-16 01:08:14.868926543Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=82d1b807-defb-4127-8694-2f0a18460b60 name=/runtime.v1.RuntimeService/Version
	Apr 16 01:08:14 old-k8s-version-800769 crio[651]: time="2024-04-16 01:08:14.869031551Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=82d1b807-defb-4127-8694-2f0a18460b60 name=/runtime.v1.RuntimeService/Version
	Apr 16 01:08:14 old-k8s-version-800769 crio[651]: time="2024-04-16 01:08:14.870078876Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dc1de616-1626-4ace-bee7-883b5569e438 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:08:14 old-k8s-version-800769 crio[651]: time="2024-04-16 01:08:14.870577335Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713229694870548263,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dc1de616-1626-4ace-bee7-883b5569e438 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:08:14 old-k8s-version-800769 crio[651]: time="2024-04-16 01:08:14.871118854Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8082d3d9-1b8a-4e6a-bb75-5e283a8911c7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:08:14 old-k8s-version-800769 crio[651]: time="2024-04-16 01:08:14.871172294Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8082d3d9-1b8a-4e6a-bb75-5e283a8911c7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:08:14 old-k8s-version-800769 crio[651]: time="2024-04-16 01:08:14.871213332Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=8082d3d9-1b8a-4e6a-bb75-5e283a8911c7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr16 00:59] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052487] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041260] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.659381] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.701128] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.498139] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.532362] systemd-fstab-generator[572]: Ignoring "noauto" option for root device
	[  +0.139625] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.184218] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.154369] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[Apr16 01:00] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +6.904893] systemd-fstab-generator[836]: Ignoring "noauto" option for root device
	[  +0.058661] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.131661] systemd-fstab-generator[962]: Ignoring "noauto" option for root device
	[ +13.736441] kauditd_printk_skb: 46 callbacks suppressed
	[Apr16 01:04] systemd-fstab-generator[5023]: Ignoring "noauto" option for root device
	[Apr16 01:06] systemd-fstab-generator[5299]: Ignoring "noauto" option for root device
	[  +0.072728] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 01:08:15 up 8 min,  0 users,  load average: 0.00, 0.04, 0.01
	Linux old-k8s-version-800769 5.10.207 #1 SMP Mon Apr 15 15:01:07 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 16 01:08:12 old-k8s-version-800769 kubelet[5475]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	Apr 16 01:08:12 old-k8s-version-800769 kubelet[5475]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc000be0ea0, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc000b96fc0, 0x24, 0x0, ...)
	Apr 16 01:08:12 old-k8s-version-800769 kubelet[5475]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Apr 16 01:08:12 old-k8s-version-800769 kubelet[5475]: net.(*Dialer).DialContext(0xc000384d20, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000b96fc0, 0x24, 0x0, 0x0, 0x0, ...)
	Apr 16 01:08:12 old-k8s-version-800769 kubelet[5475]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Apr 16 01:08:12 old-k8s-version-800769 kubelet[5475]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000a6b860, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000b96fc0, 0x24, 0x60, 0x7fdd682ee990, 0x118, ...)
	Apr 16 01:08:12 old-k8s-version-800769 kubelet[5475]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Apr 16 01:08:12 old-k8s-version-800769 kubelet[5475]: net/http.(*Transport).dial(0xc00081f180, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000b96fc0, 0x24, 0x0, 0x0, 0x0, ...)
	Apr 16 01:08:12 old-k8s-version-800769 kubelet[5475]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Apr 16 01:08:12 old-k8s-version-800769 kubelet[5475]: net/http.(*Transport).dialConn(0xc00081f180, 0x4f7fe00, 0xc000120018, 0x0, 0xc00040e6c0, 0x5, 0xc000b96fc0, 0x24, 0x0, 0xc000aa3c20, ...)
	Apr 16 01:08:12 old-k8s-version-800769 kubelet[5475]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Apr 16 01:08:12 old-k8s-version-800769 kubelet[5475]: net/http.(*Transport).dialConnFor(0xc00081f180, 0xc000ac5e40)
	Apr 16 01:08:12 old-k8s-version-800769 kubelet[5475]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Apr 16 01:08:12 old-k8s-version-800769 kubelet[5475]: created by net/http.(*Transport).queueForDial
	Apr 16 01:08:12 old-k8s-version-800769 kubelet[5475]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Apr 16 01:08:12 old-k8s-version-800769 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Apr 16 01:08:12 old-k8s-version-800769 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Apr 16 01:08:12 old-k8s-version-800769 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Apr 16 01:08:12 old-k8s-version-800769 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Apr 16 01:08:12 old-k8s-version-800769 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Apr 16 01:08:12 old-k8s-version-800769 kubelet[5529]: I0416 01:08:12.854096    5529 server.go:416] Version: v1.20.0
	Apr 16 01:08:12 old-k8s-version-800769 kubelet[5529]: I0416 01:08:12.854341    5529 server.go:837] Client rotation is on, will bootstrap in background
	Apr 16 01:08:12 old-k8s-version-800769 kubelet[5529]: I0416 01:08:12.856379    5529 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Apr 16 01:08:12 old-k8s-version-800769 kubelet[5529]: I0416 01:08:12.857502    5529 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Apr 16 01:08:12 old-k8s-version-800769 kubelet[5529]: W0416 01:08:12.857666    5529 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-800769 -n old-k8s-version-800769
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-800769 -n old-k8s-version-800769: exit status 2 (240.167815ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-800769" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (714.33s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-617092 -n embed-certs-617092
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-617092 -n embed-certs-617092: exit status 3 (3.167880142s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0416 00:58:33.529530   62637 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.225:22: connect: no route to host
	E0416 00:58:33.529552   62637 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.225:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-617092 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-617092 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153020642s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.225:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-617092 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-617092 -n embed-certs-617092
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-617092 -n embed-certs-617092: exit status 3 (3.062607348s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0416 00:58:42.745573   62717 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.225:22: connect: no route to host
	E0416 00:58:42.745594   62717 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.225:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-617092" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.16s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0416 01:05:21.726842   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/functional-596616/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-572602 -n no-preload-572602
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-04-16 01:14:14.76846648 +0000 UTC m=+5797.285227794
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-572602 -n no-preload-572602
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-572602 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-572602 logs -n 25: (2.083856453s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p cert-expiration-359535                              | cert-expiration-359535       | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:52 UTC | 16 Apr 24 00:52 UTC |
	| start   | -p newest-cni-012509 --memory=2200 --alsologtostderr   | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:52 UTC | 16 Apr 24 00:53 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |                |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |                |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2                      |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p newest-cni-012509             | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:53 UTC | 16 Apr 24 00:53 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p newest-cni-012509                                   | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:53 UTC | 16 Apr 24 00:53 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p newest-cni-012509                  | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:53 UTC | 16 Apr 24 00:53 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p newest-cni-012509 --memory=2200 --alsologtostderr   | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:53 UTC | 16 Apr 24 00:54 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |                |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |                |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2                      |                              |         |                |                     |                     |
	| image   | newest-cni-012509 image list                           | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 00:54 UTC |
	|         | --format=json                                          |                              |         |                |                     |                     |
	| pause   | -p newest-cni-012509                                   | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 00:54 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |                |                     |                     |
	| unpause | -p newest-cni-012509                                   | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 00:54 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |                |                     |                     |
	| delete  | -p newest-cni-012509                                   | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 00:54 UTC |
	| delete  | -p newest-cni-012509                                   | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 00:54 UTC |
	| delete  | -p                                                     | disable-driver-mounts-988802 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 00:54 UTC |
	|         | disable-driver-mounts-988802                           |                              |         |                |                     |                     |
	| start   | -p embed-certs-617092                                  | embed-certs-617092           | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 00:56 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-653942       | default-k8s-diff-port-653942 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-572602                  | no-preload-572602            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-653942 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 01:06 UTC |
	|         | default-k8s-diff-port-653942                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-800769        | old-k8s-version-800769       | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| start   | -p no-preload-572602                                   | no-preload-572602            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 01:05 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |                |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2                      |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-617092            | embed-certs-617092           | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:56 UTC | 16 Apr 24 00:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-617092                                  | embed-certs-617092           | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:56 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-800769                              | old-k8s-version-800769       | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:56 UTC | 16 Apr 24 00:56 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-800769             | old-k8s-version-800769       | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:56 UTC | 16 Apr 24 00:56 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-800769                              | old-k8s-version-800769       | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:56 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-617092                 | embed-certs-617092           | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:58 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p embed-certs-617092                                  | embed-certs-617092           | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:58 UTC | 16 Apr 24 01:05 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 00:58:42
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 00:58:42.797832   62747 out.go:291] Setting OutFile to fd 1 ...
	I0416 00:58:42.797983   62747 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:58:42.797994   62747 out.go:304] Setting ErrFile to fd 2...
	I0416 00:58:42.797998   62747 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:58:42.798182   62747 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-7542/.minikube/bin
	I0416 00:58:42.798686   62747 out.go:298] Setting JSON to false
	I0416 00:58:42.799629   62747 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6067,"bootTime":1713223056,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0416 00:58:42.799687   62747 start.go:139] virtualization: kvm guest
	I0416 00:58:42.801878   62747 out.go:177] * [embed-certs-617092] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0416 00:58:42.803202   62747 out.go:177]   - MINIKUBE_LOCATION=18647
	I0416 00:58:42.804389   62747 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 00:58:42.803288   62747 notify.go:220] Checking for updates...
	I0416 00:58:42.805742   62747 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0416 00:58:42.807023   62747 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18647-7542/.minikube
	I0416 00:58:42.808185   62747 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0416 00:58:42.809402   62747 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 00:58:42.811188   62747 config.go:182] Loaded profile config "embed-certs-617092": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 00:58:42.811772   62747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:58:42.811833   62747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:58:42.826377   62747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44973
	I0416 00:58:42.826730   62747 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:58:42.827217   62747 main.go:141] libmachine: Using API Version  1
	I0416 00:58:42.827233   62747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:58:42.827541   62747 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:58:42.827737   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 00:58:42.827964   62747 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 00:58:42.828239   62747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:58:42.828274   62747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:58:42.842499   62747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34791
	I0416 00:58:42.842872   62747 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:58:42.843283   62747 main.go:141] libmachine: Using API Version  1
	I0416 00:58:42.843300   62747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:58:42.843636   62747 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:58:42.843830   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 00:58:42.874583   62747 out.go:177] * Using the kvm2 driver based on existing profile
	I0416 00:58:42.875910   62747 start.go:297] selected driver: kvm2
	I0416 00:58:42.875933   62747 start.go:901] validating driver "kvm2" against &{Name:embed-certs-617092 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-617092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.225 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 00:58:42.876072   62747 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 00:58:42.876741   62747 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 00:58:42.876826   62747 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18647-7542/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0416 00:58:42.890834   62747 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0416 00:58:42.891212   62747 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 00:58:42.891270   62747 cni.go:84] Creating CNI manager for ""
	I0416 00:58:42.891283   62747 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 00:58:42.891314   62747 start.go:340] cluster config:
	{Name:embed-certs-617092 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-617092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.225 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 00:58:42.891412   62747 iso.go:125] acquiring lock: {Name:mk848ef90fbc2a1876645fc8fc16af382c3bcaa9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 00:58:42.893179   62747 out.go:177] * Starting "embed-certs-617092" primary control-plane node in "embed-certs-617092" cluster
	I0416 00:58:42.894232   62747 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 00:58:42.894260   62747 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0416 00:58:42.894267   62747 cache.go:56] Caching tarball of preloaded images
	I0416 00:58:42.894353   62747 preload.go:173] Found /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0416 00:58:42.894365   62747 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0416 00:58:42.894458   62747 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092/config.json ...
	I0416 00:58:42.894628   62747 start.go:360] acquireMachinesLock for embed-certs-617092: {Name:mk92bff49461487f8cebf2747ccf61ccb9c772a2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 00:58:47.545405   61267 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.216:22: connect: no route to host
	I0416 00:58:50.617454   61267 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.216:22: connect: no route to host
	I0416 00:58:56.697459   61267 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.216:22: connect: no route to host
	I0416 00:58:59.769461   61267 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.216:22: connect: no route to host
	I0416 00:59:05.849462   61267 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.216:22: connect: no route to host
	I0416 00:59:08.921459   61267 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.216:22: connect: no route to host
	I0416 00:59:15.001430   61267 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.216:22: connect: no route to host
	I0416 00:59:21.078070   61500 start.go:364] duration metric: took 4m33.431027521s to acquireMachinesLock for "no-preload-572602"
	I0416 00:59:21.078134   61500 start.go:96] Skipping create...Using existing machine configuration
	I0416 00:59:21.078152   61500 fix.go:54] fixHost starting: 
	I0416 00:59:21.078760   61500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:59:21.078809   61500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:59:21.093476   61500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36767
	I0416 00:59:21.093934   61500 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:59:21.094422   61500 main.go:141] libmachine: Using API Version  1
	I0416 00:59:21.094448   61500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:59:21.094749   61500 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:59:21.094902   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 00:59:21.095048   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetState
	I0416 00:59:21.096678   61500 fix.go:112] recreateIfNeeded on no-preload-572602: state=Stopped err=<nil>
	I0416 00:59:21.096697   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	W0416 00:59:21.096846   61500 fix.go:138] unexpected machine state, will restart: <nil>
	I0416 00:59:21.098527   61500 out.go:177] * Restarting existing kvm2 VM for "no-preload-572602" ...
	I0416 00:59:18.073453   61267 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.216:22: connect: no route to host
	I0416 00:59:21.075633   61267 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 00:59:21.075671   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetMachineName
	I0416 00:59:21.075991   61267 buildroot.go:166] provisioning hostname "default-k8s-diff-port-653942"
	I0416 00:59:21.076014   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetMachineName
	I0416 00:59:21.076225   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 00:59:21.077923   61267 machine.go:97] duration metric: took 4m34.542024225s to provisionDockerMachine
	I0416 00:59:21.077967   61267 fix.go:56] duration metric: took 4m34.567596715s for fixHost
	I0416 00:59:21.077978   61267 start.go:83] releasing machines lock for "default-k8s-diff-port-653942", held for 4m34.567645643s
	W0416 00:59:21.078001   61267 start.go:713] error starting host: provision: host is not running
	W0416 00:59:21.078088   61267 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0416 00:59:21.078097   61267 start.go:728] Will try again in 5 seconds ...
	I0416 00:59:21.099788   61500 main.go:141] libmachine: (no-preload-572602) Calling .Start
	I0416 00:59:21.099966   61500 main.go:141] libmachine: (no-preload-572602) Ensuring networks are active...
	I0416 00:59:21.100656   61500 main.go:141] libmachine: (no-preload-572602) Ensuring network default is active
	I0416 00:59:21.100937   61500 main.go:141] libmachine: (no-preload-572602) Ensuring network mk-no-preload-572602 is active
	I0416 00:59:21.101282   61500 main.go:141] libmachine: (no-preload-572602) Getting domain xml...
	I0416 00:59:21.101905   61500 main.go:141] libmachine: (no-preload-572602) Creating domain...
	I0416 00:59:22.294019   61500 main.go:141] libmachine: (no-preload-572602) Waiting to get IP...
	I0416 00:59:22.294922   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:22.295294   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:22.295349   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:22.295262   62936 retry.go:31] will retry after 220.952312ms: waiting for machine to come up
	I0416 00:59:22.517753   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:22.518334   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:22.518358   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:22.518287   62936 retry.go:31] will retry after 377.547009ms: waiting for machine to come up
	I0416 00:59:26.081716   61267 start.go:360] acquireMachinesLock for default-k8s-diff-port-653942: {Name:mk92bff49461487f8cebf2747ccf61ccb9c772a2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 00:59:22.897924   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:22.898442   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:22.898465   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:22.898394   62936 retry.go:31] will retry after 450.415086ms: waiting for machine to come up
	I0416 00:59:23.349893   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:23.350383   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:23.350420   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:23.350333   62936 retry.go:31] will retry after 385.340718ms: waiting for machine to come up
	I0416 00:59:23.736854   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:23.737225   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:23.737262   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:23.737205   62936 retry.go:31] will retry after 696.175991ms: waiting for machine to come up
	I0416 00:59:24.435231   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:24.435587   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:24.435616   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:24.435557   62936 retry.go:31] will retry after 644.402152ms: waiting for machine to come up
	I0416 00:59:25.081355   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:25.081660   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:25.081697   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:25.081626   62936 retry.go:31] will retry after 809.585997ms: waiting for machine to come up
	I0416 00:59:25.892402   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:25.892767   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:25.892797   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:25.892722   62936 retry.go:31] will retry after 1.07477705s: waiting for machine to come up
	I0416 00:59:26.969227   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:26.969617   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:26.969646   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:26.969561   62936 retry.go:31] will retry after 1.243937595s: waiting for machine to come up
	I0416 00:59:28.214995   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:28.215412   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:28.215433   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:28.215364   62936 retry.go:31] will retry after 1.775188434s: waiting for machine to come up
	I0416 00:59:29.993420   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:29.993825   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:29.993853   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:29.993779   62936 retry.go:31] will retry after 2.73873778s: waiting for machine to come up
	I0416 00:59:32.735350   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:32.735758   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:32.735809   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:32.735721   62936 retry.go:31] will retry after 2.208871896s: waiting for machine to come up
	I0416 00:59:34.947005   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:34.947400   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:34.947431   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:34.947358   62936 retry.go:31] will retry after 4.484880009s: waiting for machine to come up
	I0416 00:59:40.669954   62139 start.go:364] duration metric: took 3m18.466569456s to acquireMachinesLock for "old-k8s-version-800769"
	I0416 00:59:40.670015   62139 start.go:96] Skipping create...Using existing machine configuration
	I0416 00:59:40.670038   62139 fix.go:54] fixHost starting: 
	I0416 00:59:40.670411   62139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:59:40.670448   62139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:59:40.686269   62139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39043
	I0416 00:59:40.686633   62139 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:59:40.687125   62139 main.go:141] libmachine: Using API Version  1
	I0416 00:59:40.687162   62139 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:59:40.687481   62139 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:59:40.687672   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	I0416 00:59:40.687838   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetState
	I0416 00:59:40.689108   62139 fix.go:112] recreateIfNeeded on old-k8s-version-800769: state=Stopped err=<nil>
	I0416 00:59:40.689132   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	W0416 00:59:40.689286   62139 fix.go:138] unexpected machine state, will restart: <nil>
	I0416 00:59:40.691869   62139 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-800769" ...
	I0416 00:59:40.693292   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .Start
	I0416 00:59:40.693450   62139 main.go:141] libmachine: (old-k8s-version-800769) Ensuring networks are active...
	I0416 00:59:40.694152   62139 main.go:141] libmachine: (old-k8s-version-800769) Ensuring network default is active
	I0416 00:59:40.694457   62139 main.go:141] libmachine: (old-k8s-version-800769) Ensuring network mk-old-k8s-version-800769 is active
	I0416 00:59:40.694883   62139 main.go:141] libmachine: (old-k8s-version-800769) Getting domain xml...
	I0416 00:59:40.695720   62139 main.go:141] libmachine: (old-k8s-version-800769) Creating domain...
	I0416 00:59:41.913001   62139 main.go:141] libmachine: (old-k8s-version-800769) Waiting to get IP...
	I0416 00:59:41.913874   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:41.914260   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:41.914318   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:41.914237   63071 retry.go:31] will retry after 261.032707ms: waiting for machine to come up
	I0416 00:59:39.436244   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.436664   61500 main.go:141] libmachine: (no-preload-572602) Found IP for machine: 192.168.39.121
	I0416 00:59:39.436686   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has current primary IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.436694   61500 main.go:141] libmachine: (no-preload-572602) Reserving static IP address...
	I0416 00:59:39.437114   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "no-preload-572602", mac: "52:54:00:fb:a5:f3", ip: "192.168.39.121"} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:39.437151   61500 main.go:141] libmachine: (no-preload-572602) Reserved static IP address: 192.168.39.121
	I0416 00:59:39.437183   61500 main.go:141] libmachine: (no-preload-572602) DBG | skip adding static IP to network mk-no-preload-572602 - found existing host DHCP lease matching {name: "no-preload-572602", mac: "52:54:00:fb:a5:f3", ip: "192.168.39.121"}
	I0416 00:59:39.437197   61500 main.go:141] libmachine: (no-preload-572602) Waiting for SSH to be available...
	I0416 00:59:39.437215   61500 main.go:141] libmachine: (no-preload-572602) DBG | Getting to WaitForSSH function...
	I0416 00:59:39.439255   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.439613   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:39.439642   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.439723   61500 main.go:141] libmachine: (no-preload-572602) DBG | Using SSH client type: external
	I0416 00:59:39.439756   61500 main.go:141] libmachine: (no-preload-572602) DBG | Using SSH private key: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/no-preload-572602/id_rsa (-rw-------)
	I0416 00:59:39.439799   61500 main.go:141] libmachine: (no-preload-572602) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.121 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18647-7542/.minikube/machines/no-preload-572602/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0416 00:59:39.439822   61500 main.go:141] libmachine: (no-preload-572602) DBG | About to run SSH command:
	I0416 00:59:39.439835   61500 main.go:141] libmachine: (no-preload-572602) DBG | exit 0
	I0416 00:59:39.565190   61500 main.go:141] libmachine: (no-preload-572602) DBG | SSH cmd err, output: <nil>: 
	I0416 00:59:39.565584   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetConfigRaw
	I0416 00:59:39.566223   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetIP
	I0416 00:59:39.568572   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.568869   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:39.568906   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.569083   61500 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602/config.json ...
	I0416 00:59:39.569300   61500 machine.go:94] provisionDockerMachine start ...
	I0416 00:59:39.569318   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 00:59:39.569526   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:59:39.571536   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.571842   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:39.571868   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.572004   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 00:59:39.572189   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:39.572352   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:39.572505   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 00:59:39.572751   61500 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:39.572974   61500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I0416 00:59:39.572991   61500 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 00:59:39.681544   61500 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 00:59:39.681574   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetMachineName
	I0416 00:59:39.681845   61500 buildroot.go:166] provisioning hostname "no-preload-572602"
	I0416 00:59:39.681874   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetMachineName
	I0416 00:59:39.682088   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:59:39.684694   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.685029   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:39.685063   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.685259   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 00:59:39.685453   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:39.685608   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:39.685737   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 00:59:39.685887   61500 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:39.686066   61500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I0416 00:59:39.686090   61500 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-572602 && echo "no-preload-572602" | sudo tee /etc/hostname
	I0416 00:59:39.804124   61500 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-572602
	
	I0416 00:59:39.804149   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:59:39.807081   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.807447   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:39.807480   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.807651   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 00:59:39.807860   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:39.808048   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:39.808202   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 00:59:39.808393   61500 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:39.808618   61500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I0416 00:59:39.808644   61500 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-572602' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-572602/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-572602' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 00:59:39.921781   61500 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 00:59:39.921824   61500 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18647-7542/.minikube CaCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18647-7542/.minikube}
	I0416 00:59:39.921847   61500 buildroot.go:174] setting up certificates
	I0416 00:59:39.921857   61500 provision.go:84] configureAuth start
	I0416 00:59:39.921872   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetMachineName
	I0416 00:59:39.922150   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetIP
	I0416 00:59:39.924726   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.925052   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:39.925081   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.925199   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:59:39.927315   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.927820   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:39.927869   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.927934   61500 provision.go:143] copyHostCerts
	I0416 00:59:39.928005   61500 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem, removing ...
	I0416 00:59:39.928031   61500 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem
	I0416 00:59:39.928122   61500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem (1082 bytes)
	I0416 00:59:39.928231   61500 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem, removing ...
	I0416 00:59:39.928241   61500 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem
	I0416 00:59:39.928284   61500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem (1123 bytes)
	I0416 00:59:39.928370   61500 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem, removing ...
	I0416 00:59:39.928379   61500 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem
	I0416 00:59:39.928428   61500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem (1675 bytes)
	I0416 00:59:39.928498   61500 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem org=jenkins.no-preload-572602 san=[127.0.0.1 192.168.39.121 localhost minikube no-preload-572602]
	I0416 00:59:40.000129   61500 provision.go:177] copyRemoteCerts
	I0416 00:59:40.000200   61500 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 00:59:40.000236   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:59:40.002726   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.003028   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:40.003057   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.003168   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 00:59:40.003351   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:40.003471   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 00:59:40.003577   61500 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/no-preload-572602/id_rsa Username:docker}
	I0416 00:59:40.087468   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0416 00:59:40.115336   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0416 00:59:40.142695   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0416 00:59:40.169631   61500 provision.go:87] duration metric: took 247.759459ms to configureAuth
	I0416 00:59:40.169657   61500 buildroot.go:189] setting minikube options for container-runtime
	I0416 00:59:40.169824   61500 config.go:182] Loaded profile config "no-preload-572602": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0416 00:59:40.169906   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:59:40.172164   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.172503   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:40.172531   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.172689   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 00:59:40.172875   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:40.173033   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:40.173182   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 00:59:40.173311   61500 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:40.173465   61500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I0416 00:59:40.173480   61500 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0416 00:59:40.437143   61500 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0416 00:59:40.437182   61500 machine.go:97] duration metric: took 867.868152ms to provisionDockerMachine
	I0416 00:59:40.437194   61500 start.go:293] postStartSetup for "no-preload-572602" (driver="kvm2")
	I0416 00:59:40.437211   61500 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 00:59:40.437233   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 00:59:40.437536   61500 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 00:59:40.437564   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:59:40.440246   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.440596   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:40.440637   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.440759   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 00:59:40.440981   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:40.441186   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 00:59:40.441319   61500 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/no-preload-572602/id_rsa Username:docker}
	I0416 00:59:40.524157   61500 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 00:59:40.528556   61500 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 00:59:40.528580   61500 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/addons for local assets ...
	I0416 00:59:40.528647   61500 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/files for local assets ...
	I0416 00:59:40.528756   61500 filesync.go:149] local asset: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem -> 148972.pem in /etc/ssl/certs
	I0416 00:59:40.528877   61500 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 00:59:40.538275   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /etc/ssl/certs/148972.pem (1708 bytes)
	I0416 00:59:40.562693   61500 start.go:296] duration metric: took 125.48438ms for postStartSetup
	I0416 00:59:40.562728   61500 fix.go:56] duration metric: took 19.484586221s for fixHost
	I0416 00:59:40.562746   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:59:40.565410   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.565717   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:40.565756   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.565920   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 00:59:40.566103   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:40.566269   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:40.566438   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 00:59:40.566587   61500 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:40.566738   61500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I0416 00:59:40.566749   61500 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 00:59:40.669778   61500 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713229180.641382554
	
	I0416 00:59:40.669802   61500 fix.go:216] guest clock: 1713229180.641382554
	I0416 00:59:40.669811   61500 fix.go:229] Guest: 2024-04-16 00:59:40.641382554 +0000 UTC Remote: 2024-04-16 00:59:40.56273146 +0000 UTC m=+293.069651959 (delta=78.651094ms)
	I0416 00:59:40.669839   61500 fix.go:200] guest clock delta is within tolerance: 78.651094ms
	I0416 00:59:40.669857   61500 start.go:83] releasing machines lock for "no-preload-572602", held for 19.591740017s
	I0416 00:59:40.669883   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 00:59:40.670163   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetIP
	I0416 00:59:40.672800   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.673187   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:40.673234   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.673386   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 00:59:40.673841   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 00:59:40.673993   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 00:59:40.674067   61500 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 00:59:40.674115   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:59:40.674155   61500 ssh_runner.go:195] Run: cat /version.json
	I0416 00:59:40.674174   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:59:40.676617   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.676776   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.677006   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:40.677030   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.677126   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 00:59:40.677277   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:40.677299   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.677336   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:40.677499   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 00:59:40.677511   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 00:59:40.677635   61500 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/no-preload-572602/id_rsa Username:docker}
	I0416 00:59:40.677768   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:40.678072   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 00:59:40.678224   61500 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/no-preload-572602/id_rsa Username:docker}
	I0416 00:59:40.787049   61500 ssh_runner.go:195] Run: systemctl --version
	I0416 00:59:40.793568   61500 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0416 00:59:40.941445   61500 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 00:59:40.949062   61500 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 00:59:40.949177   61500 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 00:59:40.966425   61500 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 00:59:40.966454   61500 start.go:494] detecting cgroup driver to use...
	I0416 00:59:40.966525   61500 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 00:59:40.985126   61500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 00:59:40.999931   61500 docker.go:217] disabling cri-docker service (if available) ...
	I0416 00:59:41.000004   61500 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 00:59:41.015597   61500 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 00:59:41.030610   61500 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 00:59:41.151240   61500 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 00:59:41.312384   61500 docker.go:233] disabling docker service ...
	I0416 00:59:41.312464   61500 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 00:59:41.329263   61500 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 00:59:41.345192   61500 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 00:59:41.463330   61500 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 00:59:41.595259   61500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0416 00:59:41.610495   61500 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 00:59:41.632527   61500 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0416 00:59:41.632580   61500 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:59:41.644625   61500 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 00:59:41.644723   61500 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:59:41.656056   61500 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:59:41.667069   61500 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:59:41.682783   61500 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 00:59:41.694760   61500 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:59:41.712505   61500 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:59:41.737338   61500 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:59:41.747518   61500 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 00:59:41.756586   61500 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0416 00:59:41.756656   61500 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0416 00:59:41.769230   61500 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 00:59:41.778424   61500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 00:59:41.894135   61500 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0416 00:59:42.039732   61500 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0416 00:59:42.039812   61500 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0416 00:59:42.044505   61500 start.go:562] Will wait 60s for crictl version
	I0416 00:59:42.044551   61500 ssh_runner.go:195] Run: which crictl
	I0416 00:59:42.049632   61500 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 00:59:42.106886   61500 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0416 00:59:42.106981   61500 ssh_runner.go:195] Run: crio --version
	I0416 00:59:42.137092   61500 ssh_runner.go:195] Run: crio --version
	I0416 00:59:42.170036   61500 out.go:177] * Preparing Kubernetes v1.30.0-rc.2 on CRI-O 1.29.1 ...
	I0416 00:59:42.171395   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetIP
	I0416 00:59:42.174790   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:42.175217   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:42.175250   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:42.175506   61500 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0416 00:59:42.180987   61500 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 00:59:42.198472   61500 kubeadm.go:877] updating cluster {Name:no-preload-572602 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:no-preload-572602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 00:59:42.198595   61500 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime crio
	I0416 00:59:42.198639   61500 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 00:59:42.236057   61500 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0-rc.2". assuming images are not preloaded.
	I0416 00:59:42.236084   61500 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0-rc.2 registry.k8s.io/kube-controller-manager:v1.30.0-rc.2 registry.k8s.io/kube-scheduler:v1.30.0-rc.2 registry.k8s.io/kube-proxy:v1.30.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0416 00:59:42.236146   61500 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 00:59:42.236166   61500 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0416 00:59:42.236180   61500 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0-rc.2
	I0416 00:59:42.236182   61500 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0416 00:59:42.236212   61500 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0-rc.2
	I0416 00:59:42.236238   61500 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0416 00:59:42.236287   61500 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.2
	I0416 00:59:42.236164   61500 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0-rc.2
	I0416 00:59:42.237740   61500 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0416 00:59:42.237756   61500 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0416 00:59:42.237763   61500 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0-rc.2
	I0416 00:59:42.237779   61500 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0-rc.2
	I0416 00:59:42.237740   61500 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0416 00:59:42.237848   61500 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.2
	I0416 00:59:42.237847   61500 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 00:59:42.238087   61500 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0-rc.2
	I0416 00:59:42.410682   61500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0-rc.2
	I0416 00:59:42.445824   61500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0416 00:59:42.446874   61500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0416 00:59:42.448854   61500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0-rc.2
	I0416 00:59:42.449450   61500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0416 00:59:42.452121   61500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0-rc.2
	I0416 00:59:42.458966   61500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0-rc.2
	I0416 00:59:42.480556   61500 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0-rc.2" does not exist at hash "461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6" in container runtime
	I0416 00:59:42.480608   61500 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0-rc.2
	I0416 00:59:42.480670   61500 ssh_runner.go:195] Run: which crictl
	I0416 00:59:42.176660   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:42.177053   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:42.177084   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:42.177031   63071 retry.go:31] will retry after 268.951362ms: waiting for machine to come up
	I0416 00:59:42.447724   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:42.448132   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:42.448159   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:42.448097   63071 retry.go:31] will retry after 293.793417ms: waiting for machine to come up
	I0416 00:59:42.743375   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:42.743845   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:42.743874   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:42.743801   63071 retry.go:31] will retry after 494.163372ms: waiting for machine to come up
	I0416 00:59:43.239314   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:43.239761   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:43.239790   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:43.239708   63071 retry.go:31] will retry after 698.851999ms: waiting for machine to come up
	I0416 00:59:43.939998   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:43.940577   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:43.940607   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:43.940535   63071 retry.go:31] will retry after 764.693004ms: waiting for machine to come up
	I0416 00:59:44.706335   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:44.706673   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:44.706724   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:44.706626   63071 retry.go:31] will retry after 874.082115ms: waiting for machine to come up
	I0416 00:59:45.581896   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:45.582331   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:45.582361   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:45.582280   63071 retry.go:31] will retry after 966.259345ms: waiting for machine to come up
	I0416 00:59:46.550671   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:46.551111   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:46.551140   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:46.551062   63071 retry.go:31] will retry after 1.191034468s: waiting for machine to come up
	I0416 00:59:42.583284   61500 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0416 00:59:42.583332   61500 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0416 00:59:42.583377   61500 ssh_runner.go:195] Run: which crictl
	I0416 00:59:42.724785   61500 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0-rc.2" does not exist at hash "ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b" in container runtime
	I0416 00:59:42.724827   61500 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.2
	I0416 00:59:42.724878   61500 ssh_runner.go:195] Run: which crictl
	I0416 00:59:42.724899   61500 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0416 00:59:42.724938   61500 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0416 00:59:42.724938   61500 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0-rc.2" does not exist at hash "35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e" in container runtime
	I0416 00:59:42.724964   61500 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0-rc.2
	I0416 00:59:42.724979   61500 ssh_runner.go:195] Run: which crictl
	I0416 00:59:42.724993   61500 ssh_runner.go:195] Run: which crictl
	I0416 00:59:42.725019   61500 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0-rc.2" does not exist at hash "65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1" in container runtime
	I0416 00:59:42.725051   61500 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0-rc.2
	I0416 00:59:42.725063   61500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0-rc.2
	I0416 00:59:42.725088   61500 ssh_runner.go:195] Run: which crictl
	I0416 00:59:42.725102   61500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0416 00:59:42.739346   61500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0416 00:59:42.739764   61500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0-rc.2
	I0416 00:59:42.787888   61500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0-rc.2
	I0416 00:59:42.787977   61500 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.2
	I0416 00:59:42.788024   61500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0-rc.2
	I0416 00:59:42.788084   61500 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.2
	I0416 00:59:42.815167   61500 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0416 00:59:42.815274   61500 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0416 00:59:42.845627   61500 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0416 00:59:42.845741   61500 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0416 00:59:42.848065   61500 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.2
	I0416 00:59:42.848134   61500 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.2
	I0416 00:59:42.880543   61500 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.2
	I0416 00:59:42.880557   61500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.2 (exists)
	I0416 00:59:42.880575   61500 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.2
	I0416 00:59:42.880628   61500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.2
	I0416 00:59:42.880648   61500 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.2
	I0416 00:59:42.907207   61500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0416 00:59:42.907245   61500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0416 00:59:42.907269   61500 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.2
	I0416 00:59:42.907295   61500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.2 (exists)
	I0416 00:59:42.907334   61500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.2 (exists)
	I0416 00:59:42.907350   61500 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0-rc.2
	I0416 00:59:43.138705   61500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 00:59:44.951278   61500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.2: (2.07061835s)
	I0416 00:59:44.951295   61500 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0-rc.2: (2.04392036s)
	I0416 00:59:44.951348   61500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0-rc.2 (exists)
	I0416 00:59:44.951309   61500 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.2 from cache
	I0416 00:59:44.951364   61500 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.812619758s)
	I0416 00:59:44.951410   61500 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0416 00:59:44.951448   61500 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 00:59:44.951374   61500 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0416 00:59:44.951506   61500 ssh_runner.go:195] Run: which crictl
	I0416 00:59:44.951508   61500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0416 00:59:47.744187   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:47.744683   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:47.744712   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:47.744637   63071 retry.go:31] will retry after 2.263605663s: waiting for machine to come up
	I0416 00:59:50.011136   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:50.011605   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:50.011632   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:50.011566   63071 retry.go:31] will retry after 2.648982849s: waiting for machine to come up
	I0416 00:59:48.656623   61500 ssh_runner.go:235] Completed: which crictl: (3.705085257s)
	I0416 00:59:48.656705   61500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 00:59:48.656715   61500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (3.705109475s)
	I0416 00:59:48.656743   61500 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0416 00:59:48.656769   61500 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0416 00:59:48.656798   61500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0416 00:59:50.560030   61500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.903209359s)
	I0416 00:59:50.560071   61500 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0416 00:59:50.560085   61500 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.90335887s)
	I0416 00:59:50.560096   61500 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.2
	I0416 00:59:50.560148   61500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.2
	I0416 00:59:50.560151   61500 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0416 00:59:50.560309   61500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0416 00:59:52.662443   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:52.662852   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:52.662883   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:52.662815   63071 retry.go:31] will retry after 2.183508059s: waiting for machine to come up
	I0416 00:59:54.849225   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:54.849701   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:54.849734   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:54.849649   63071 retry.go:31] will retry after 3.201585234s: waiting for machine to come up
	I0416 00:59:52.739620   61500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.2: (2.179436189s)
	I0416 00:59:52.739658   61500 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.2 from cache
	I0416 00:59:52.739688   61500 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.2
	I0416 00:59:52.739697   61500 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.179365348s)
	I0416 00:59:52.739724   61500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0416 00:59:52.739747   61500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.2
	I0416 00:59:55.098350   61500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.2: (2.358579586s)
	I0416 00:59:55.098381   61500 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.2 from cache
	I0416 00:59:55.098408   61500 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0-rc.2
	I0416 00:59:55.098454   61500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.2
	I0416 00:59:57.166586   61500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.2: (2.068105529s)
	I0416 00:59:57.166615   61500 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.2 from cache
	I0416 00:59:57.166644   61500 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0416 00:59:57.166697   61500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0416 00:59:59.394339   62747 start.go:364] duration metric: took 1m16.499681915s to acquireMachinesLock for "embed-certs-617092"
	I0416 00:59:59.394389   62747 start.go:96] Skipping create...Using existing machine configuration
	I0416 00:59:59.394412   62747 fix.go:54] fixHost starting: 
	I0416 00:59:59.394834   62747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:59:59.394896   62747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:59:59.414712   62747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38637
	I0416 00:59:59.415464   62747 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:59:59.416123   62747 main.go:141] libmachine: Using API Version  1
	I0416 00:59:59.416150   62747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:59:59.416436   62747 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:59:59.416623   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 00:59:59.416786   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetState
	I0416 00:59:59.418413   62747 fix.go:112] recreateIfNeeded on embed-certs-617092: state=Stopped err=<nil>
	I0416 00:59:59.418449   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	W0416 00:59:59.418609   62747 fix.go:138] unexpected machine state, will restart: <nil>
	I0416 00:59:59.420560   62747 out.go:177] * Restarting existing kvm2 VM for "embed-certs-617092" ...
	I0416 00:59:58.052613   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.053048   62139 main.go:141] libmachine: (old-k8s-version-800769) Found IP for machine: 192.168.83.98
	I0416 00:59:58.053073   62139 main.go:141] libmachine: (old-k8s-version-800769) Reserving static IP address...
	I0416 00:59:58.053089   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has current primary IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.053517   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "old-k8s-version-800769", mac: "52:54:00:a1:ad:da", ip: "192.168.83.98"} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.053549   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | skip adding static IP to network mk-old-k8s-version-800769 - found existing host DHCP lease matching {name: "old-k8s-version-800769", mac: "52:54:00:a1:ad:da", ip: "192.168.83.98"}
	I0416 00:59:58.053569   62139 main.go:141] libmachine: (old-k8s-version-800769) Reserved static IP address: 192.168.83.98
	I0416 00:59:58.053587   62139 main.go:141] libmachine: (old-k8s-version-800769) Waiting for SSH to be available...
	I0416 00:59:58.053602   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | Getting to WaitForSSH function...
	I0416 00:59:58.055598   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.055907   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.055941   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.056038   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | Using SSH client type: external
	I0416 00:59:58.056088   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | Using SSH private key: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769/id_rsa (-rw-------)
	I0416 00:59:58.056132   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.98 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0416 00:59:58.056149   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | About to run SSH command:
	I0416 00:59:58.056162   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | exit 0
	I0416 00:59:58.185675   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | SSH cmd err, output: <nil>: 
	I0416 00:59:58.186055   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetConfigRaw
	I0416 00:59:58.186802   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetIP
	I0416 00:59:58.189772   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.190219   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.190257   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.190448   62139 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/config.json ...
	I0416 00:59:58.190666   62139 machine.go:94] provisionDockerMachine start ...
	I0416 00:59:58.190685   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	I0416 00:59:58.190902   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:58.193570   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.193954   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.193982   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.194139   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:58.194337   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.194492   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.194636   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:58.194786   62139 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:58.195041   62139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.83.98 22 <nil> <nil>}
	I0416 00:59:58.195056   62139 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 00:59:58.321824   62139 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 00:59:58.321857   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetMachineName
	I0416 00:59:58.322146   62139 buildroot.go:166] provisioning hostname "old-k8s-version-800769"
	I0416 00:59:58.322175   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetMachineName
	I0416 00:59:58.322381   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:58.324941   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.325288   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.325316   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.325423   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:58.325613   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.325776   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.325936   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:58.326109   62139 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:58.326322   62139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.83.98 22 <nil> <nil>}
	I0416 00:59:58.326339   62139 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-800769 && echo "old-k8s-version-800769" | sudo tee /etc/hostname
	I0416 00:59:58.455194   62139 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-800769
	
	I0416 00:59:58.455236   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:58.458021   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.458423   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.458458   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.458662   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:58.458848   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.459013   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.459162   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:58.459353   62139 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:58.459507   62139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.83.98 22 <nil> <nil>}
	I0416 00:59:58.459524   62139 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-800769' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-800769/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-800769' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 00:59:58.587318   62139 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 00:59:58.587351   62139 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18647-7542/.minikube CaCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18647-7542/.minikube}
	I0416 00:59:58.587391   62139 buildroot.go:174] setting up certificates
	I0416 00:59:58.587400   62139 provision.go:84] configureAuth start
	I0416 00:59:58.587413   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetMachineName
	I0416 00:59:58.587686   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetIP
	I0416 00:59:58.590415   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.590739   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.590778   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.590880   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:58.593282   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.593728   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.593759   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.593931   62139 provision.go:143] copyHostCerts
	I0416 00:59:58.593988   62139 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem, removing ...
	I0416 00:59:58.594007   62139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem
	I0416 00:59:58.594079   62139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem (1082 bytes)
	I0416 00:59:58.594213   62139 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem, removing ...
	I0416 00:59:58.594222   62139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem
	I0416 00:59:58.594263   62139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem (1123 bytes)
	I0416 00:59:58.594372   62139 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem, removing ...
	I0416 00:59:58.594383   62139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem
	I0416 00:59:58.594408   62139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem (1675 bytes)
	I0416 00:59:58.594470   62139 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-800769 san=[127.0.0.1 192.168.83.98 localhost minikube old-k8s-version-800769]
	I0416 00:59:58.692127   62139 provision.go:177] copyRemoteCerts
	I0416 00:59:58.692197   62139 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 00:59:58.692232   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:58.694858   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.695231   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.695278   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.695507   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:58.695693   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.695852   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:58.695994   62139 sshutil.go:53] new ssh client: &{IP:192.168.83.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769/id_rsa Username:docker}
	I0416 00:59:58.783458   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0416 00:59:58.811124   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0416 00:59:58.836495   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0416 00:59:58.862044   62139 provision.go:87] duration metric: took 274.632117ms to configureAuth
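The configureAuth step above regenerates the machine's server certificate with the SANs shown at provision.go:117 (loopback, the reserved IP, and the machine hostnames), then copies the CA and server cert/key onto the node. A minimal crypto/x509 sketch of that kind of SAN-bearing server certificate, self-signed here for brevity where minikube actually signs with its CA key; the organization string and the 26280h validity are only loosely modeled on the profile config, and nothing below is minikube's actual implementation:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Generate a key pair for the server certificate.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-800769"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the provisioning log above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.83.98")},
		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-800769"},
	}
	// Self-signed (template == parent) purely to keep the sketch short.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}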
	I0416 00:59:58.862068   62139 buildroot.go:189] setting minikube options for container-runtime
	I0416 00:59:58.862278   62139 config.go:182] Loaded profile config "old-k8s-version-800769": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0416 00:59:58.862361   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:58.865352   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.865795   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.865829   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.866043   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:58.866228   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.866435   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.866625   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:58.866805   62139 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:58.867008   62139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.83.98 22 <nil> <nil>}
	I0416 00:59:58.867026   62139 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0416 00:59:59.143874   62139 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0416 00:59:59.143900   62139 machine.go:97] duration metric: took 953.218972ms to provisionDockerMachine
	I0416 00:59:59.143914   62139 start.go:293] postStartSetup for "old-k8s-version-800769" (driver="kvm2")
	I0416 00:59:59.143927   62139 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 00:59:59.143972   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	I0416 00:59:59.144277   62139 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 00:59:59.144302   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:59.147021   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.147355   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:59.147385   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.147649   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:59.147871   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:59.148036   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:59.148174   62139 sshutil.go:53] new ssh client: &{IP:192.168.83.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769/id_rsa Username:docker}
	I0416 00:59:59.236981   62139 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 00:59:59.241388   62139 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 00:59:59.241411   62139 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/addons for local assets ...
	I0416 00:59:59.241469   62139 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/files for local assets ...
	I0416 00:59:59.241534   62139 filesync.go:149] local asset: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem -> 148972.pem in /etc/ssl/certs
	I0416 00:59:59.241619   62139 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 00:59:59.251688   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /etc/ssl/certs/148972.pem (1708 bytes)
	I0416 00:59:59.275189   62139 start.go:296] duration metric: took 131.262042ms for postStartSetup
	I0416 00:59:59.275227   62139 fix.go:56] duration metric: took 18.605201288s for fixHost
	I0416 00:59:59.275250   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:59.277804   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.278153   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:59.278186   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.278341   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:59.278581   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:59.278741   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:59.278908   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:59.279068   62139 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:59.279233   62139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.83.98 22 <nil> <nil>}
	I0416 00:59:59.279243   62139 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0416 00:59:59.394108   62139 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713229199.360202150
	
	I0416 00:59:59.394141   62139 fix.go:216] guest clock: 1713229199.360202150
	I0416 00:59:59.394152   62139 fix.go:229] Guest: 2024-04-16 00:59:59.36020215 +0000 UTC Remote: 2024-04-16 00:59:59.27523174 +0000 UTC m=+217.222314955 (delta=84.97041ms)
	I0416 00:59:59.394211   62139 fix.go:200] guest clock delta is within tolerance: 84.97041ms
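fix.go reads the guest clock over SSH with `date +%s.%N`, compares it to the host time, and only reports it "within tolerance" when the delta is small enough (about 85ms here). A rough Go sketch of that comparison; the 1s tolerance is an assumption for illustration, since the log does not state the threshold used:

package main

import (
	"fmt"
	"math"
	"time"
)

// withinTolerance reports whether the guest clock is close enough to the host
// clock, returning the signed delta for logging.
func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	return delta, math.Abs(float64(delta)) <= float64(tolerance)
}

func main() {
	host := time.Now()
	guest := host.Add(85 * time.Millisecond) // roughly the delta seen in the log
	if delta, ok := withinTolerance(guest, host, time.Second); ok {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock delta too large: %v\n", delta)
	}
}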
	I0416 00:59:59.394218   62139 start.go:83] releasing machines lock for "old-k8s-version-800769", held for 18.724230851s
	I0416 00:59:59.394252   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	I0416 00:59:59.394554   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetIP
	I0416 00:59:59.397241   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.397670   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:59.397703   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.397897   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	I0416 00:59:59.398460   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	I0416 00:59:59.398650   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	I0416 00:59:59.398740   62139 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 00:59:59.398782   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:59.399049   62139 ssh_runner.go:195] Run: cat /version.json
	I0416 00:59:59.399072   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:59.401397   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.401656   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.401802   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:59.401825   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.401964   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:59.402017   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.402089   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:59.402173   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:59.402248   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:59.402320   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:59.402376   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:59.402430   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:59.402577   62139 sshutil.go:53] new ssh client: &{IP:192.168.83.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769/id_rsa Username:docker}
	I0416 00:59:59.402638   62139 sshutil.go:53] new ssh client: &{IP:192.168.83.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769/id_rsa Username:docker}
	I0416 00:59:59.481834   62139 ssh_runner.go:195] Run: systemctl --version
	I0416 00:59:59.516372   62139 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0416 00:59:59.666722   62139 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 00:59:59.674165   62139 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 00:59:59.674226   62139 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 00:59:59.695545   62139 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
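Before configuring CRI-O, minikube renames any existing bridge/podman CNI configs in /etc/cni/net.d to *.mk_disabled (the `find ... -exec mv` above) so they cannot conflict with the bridge config it installs later. A hedged Go sketch of the same rename pass, assuming direct local filesystem access rather than the ssh_runner used in the log; the match patterns follow the find expression above:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNIConfigs renames bridge/podman CNI config files so they are
// ignored by the runtime, returning the paths it disabled.
func disableBridgeCNIConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableBridgeCNIConfigs("/etc/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
	fmt.Printf("disabled %v bridge cni config(s)\n", disabled)
}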
	I0416 00:59:59.695573   62139 start.go:494] detecting cgroup driver to use...
	I0416 00:59:59.695646   62139 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 00:59:59.715091   62139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 00:59:59.732004   62139 docker.go:217] disabling cri-docker service (if available) ...
	I0416 00:59:59.732060   62139 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 00:59:59.753217   62139 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 00:59:59.768513   62139 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 00:59:59.898693   62139 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 01:00:00.066535   62139 docker.go:233] disabling docker service ...
	I0416 01:00:00.066607   62139 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 01:00:00.084512   62139 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 01:00:00.097714   62139 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 01:00:00.232901   62139 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 01:00:00.378379   62139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0416 01:00:00.395191   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 01:00:00.416631   62139 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0416 01:00:00.416695   62139 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:00.428712   62139 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 01:00:00.428774   62139 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:00.442687   62139 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:00.454631   62139 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:00.466151   62139 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 01:00:00.478459   62139 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 01:00:00.489957   62139 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0416 01:00:00.490035   62139 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0416 01:00:00.506087   62139 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 01:00:00.518100   62139 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 01:00:00.676317   62139 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0416 01:00:00.869766   62139 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0416 01:00:00.869855   62139 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0416 01:00:00.875363   62139 start.go:562] Will wait 60s for crictl version
	I0416 01:00:00.875424   62139 ssh_runner.go:195] Run: which crictl
	I0416 01:00:00.880947   62139 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 01:00:00.924780   62139 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0416 01:00:00.924852   62139 ssh_runner.go:195] Run: crio --version
	I0416 01:00:00.958390   62139 ssh_runner.go:195] Run: crio --version
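The block above rewrites /etc/crio/crio.conf.d/02-crio.conf with sed to pin the pause image and the cgroupfs cgroup manager, restarts CRI-O, waits up to 60s for its socket, and then verifies crictl/crio versions. A rough Go equivalent of that rewrite-and-wait flow, shown only as a sketch (minikube performs these steps with sed and stat over SSH rather than in-process):

package main

import (
	"fmt"
	"os"
	"regexp"
	"time"
)

// updateCrioConf rewrites the pause_image and cgroup_manager lines in the
// CRI-O drop-in config, mirroring the sed edits in the log above.
func updateCrioConf(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupManager)))
	return os.WriteFile(path, out, 0o644)
}

// waitForSocket polls for the CRI socket after the restart, like the
// "Will wait 60s for socket path" step above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := updateCrioConf("/etc/crio/crio.conf.d/02-crio.conf", "registry.k8s.io/pause:3.2", "cgroupfs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}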
	I0416 01:00:00.993114   62139 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0416 01:00:00.994513   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetIP
	I0416 01:00:00.997571   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 01:00:00.998032   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 01:00:00.998065   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 01:00:00.998273   62139 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0416 01:00:01.002750   62139 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
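The one-liner above strips any existing host.minikube.internal entry from /etc/hosts and appends the gateway IP (the same pattern is used later for control-plane.minikube.internal). A small Go sketch of that replace-or-append pattern, operating on the file contents directly; the IP and hostname are taken from the log:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any line ending in "<TAB>hostname" and appends a
// fresh "ip<TAB>hostname" entry, mirroring the grep -v / echo pipeline above.
func ensureHostsEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	var kept []string
	for _, line := range lines {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+hostname)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.83.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}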
	I0416 01:00:01.015709   62139 kubeadm.go:877] updating cluster {Name:old-k8s-version-800769 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-800769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.98 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 01:00:01.015810   62139 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0416 01:00:01.015853   62139 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 01:00:01.063257   62139 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0416 01:00:01.063331   62139 ssh_runner.go:195] Run: which lz4
	I0416 01:00:01.067973   62139 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0416 01:00:01.072369   62139 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0416 01:00:01.072400   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0416 00:59:57.817013   61500 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0416 00:59:57.817060   61500 cache_images.go:123] Successfully loaded all cached images
	I0416 00:59:57.817073   61500 cache_images.go:92] duration metric: took 15.580967615s to LoadCachedImages
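The sequence above stats each cached tarball under /var/lib/minikube/images, skips the copy when the file already exists, and loads it into the CRI-O image store with `sudo podman load -i`. A minimal Go sketch of that check-then-load step, assuming local execution rather than minikube's ssh_runner; the directory matches the log, but the image names and the helper are purely illustrative:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

// loadCachedImage checks whether the image tarball is present on the node and
// then loads it into the container runtime's image store with `podman load`.
func loadCachedImage(tarball string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("image tarball %s not present: %w", tarball, err)
	}
	cmd := exec.Command("sudo", "podman", "load", "-i", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	dir := "/var/lib/minikube/images" // directory used in the log; adjust as needed
	for _, name := range []string{"kube-scheduler_v1.30.0-rc.2", "etcd_3.5.12-0"} {
		if err := loadCachedImage(filepath.Join(dir, name)); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}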
	I0416 00:59:57.817087   61500 kubeadm.go:928] updating node { 192.168.39.121 8443 v1.30.0-rc.2 crio true true} ...
	I0416 00:59:57.817241   61500 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-572602 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.121
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-rc.2 ClusterName:no-preload-572602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 00:59:57.817324   61500 ssh_runner.go:195] Run: crio config
	I0416 00:59:57.866116   61500 cni.go:84] Creating CNI manager for ""
	I0416 00:59:57.866140   61500 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 00:59:57.866154   61500 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 00:59:57.866189   61500 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.121 APIServerPort:8443 KubernetesVersion:v1.30.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-572602 NodeName:no-preload-572602 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.121"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.121 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 00:59:57.866325   61500 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.121
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-572602"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.121
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.121"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0416 00:59:57.866390   61500 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-rc.2
	I0416 00:59:57.876619   61500 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 00:59:57.876689   61500 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0416 00:59:57.886472   61500 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0416 00:59:57.903172   61500 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0416 00:59:57.919531   61500 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
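kubeadm.go renders the config shown above from the cluster options (node name and IP, CRI socket, Kubernetes version, CIDRs) and copies the result to /var/tmp/minikube/kubeadm.yaml.new. A toy text/template sketch of that rendering, covering only a handful of fields; this is not the actual minikube template, and the field set is deliberately trimmed:

package main

import (
	"os"
	"text/template"
)

// A trimmed-down template covering just a few of the fields that the kubeadm
// config above carries; the real file includes the full InitConfiguration,
// ClusterConfiguration, kubelet, and kube-proxy sections.
const clusterTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: mk
controlPlaneEndpoint: {{.ControlPlaneAddress}}:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

type params struct {
	ControlPlaneAddress string
	APIServerPort       int
	KubernetesVersion   string
	PodSubnet           string
	ServiceCIDR         string
}

func main() {
	p := params{
		ControlPlaneAddress: "control-plane.minikube.internal",
		APIServerPort:       8443,
		KubernetesVersion:   "v1.30.0-rc.2",
		PodSubnet:           "10.244.0.0/16",
		ServiceCIDR:         "10.96.0.0/12",
	}
	// Render to stdout; minikube writes the rendered file to the node instead.
	if err := template.Must(template.New("kubeadm").Parse(clusterTmpl)).Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}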
	I0416 00:59:57.936394   61500 ssh_runner.go:195] Run: grep 192.168.39.121	control-plane.minikube.internal$ /etc/hosts
	I0416 00:59:57.940161   61500 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.121	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 00:59:57.951997   61500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 00:59:58.089553   61500 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 00:59:58.117870   61500 certs.go:68] Setting up /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602 for IP: 192.168.39.121
	I0416 00:59:58.117926   61500 certs.go:194] generating shared ca certs ...
	I0416 00:59:58.117949   61500 certs.go:226] acquiring lock for ca certs: {Name:mkcfa1570e683d94647c63485e1bbb8cf0788316 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 00:59:58.118136   61500 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key
	I0416 00:59:58.118199   61500 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key
	I0416 00:59:58.118216   61500 certs.go:256] generating profile certs ...
	I0416 00:59:58.118351   61500 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602/client.key
	I0416 00:59:58.118446   61500 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602/apiserver.key.a3b1330f
	I0416 00:59:58.118505   61500 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602/proxy-client.key
	I0416 00:59:58.118664   61500 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem (1338 bytes)
	W0416 00:59:58.118708   61500 certs.go:480] ignoring /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897_empty.pem, impossibly tiny 0 bytes
	I0416 00:59:58.118721   61500 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem (1679 bytes)
	I0416 00:59:58.118756   61500 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem (1082 bytes)
	I0416 00:59:58.118786   61500 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem (1123 bytes)
	I0416 00:59:58.118814   61500 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem (1675 bytes)
	I0416 00:59:58.118874   61500 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem (1708 bytes)
	I0416 00:59:58.119738   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 00:59:58.150797   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 00:59:58.181693   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 00:59:58.231332   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0416 00:59:58.276528   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0416 00:59:58.301000   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0416 00:59:58.326090   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 00:59:58.350254   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0416 00:59:58.377597   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem --> /usr/share/ca-certificates/14897.pem (1338 bytes)
	I0416 00:59:58.401548   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /usr/share/ca-certificates/148972.pem (1708 bytes)
	I0416 00:59:58.425237   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 00:59:58.449748   61500 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 00:59:58.468346   61500 ssh_runner.go:195] Run: openssl version
	I0416 00:59:58.474164   61500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14897.pem && ln -fs /usr/share/ca-certificates/14897.pem /etc/ssl/certs/14897.pem"
	I0416 00:59:58.485674   61500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14897.pem
	I0416 00:59:58.490136   61500 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 23:49 /usr/share/ca-certificates/14897.pem
	I0416 00:59:58.490203   61500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14897.pem
	I0416 00:59:58.495781   61500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14897.pem /etc/ssl/certs/51391683.0"
	I0416 00:59:58.507047   61500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148972.pem && ln -fs /usr/share/ca-certificates/148972.pem /etc/ssl/certs/148972.pem"
	I0416 00:59:58.518007   61500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148972.pem
	I0416 00:59:58.522317   61500 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 23:49 /usr/share/ca-certificates/148972.pem
	I0416 00:59:58.522364   61500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148972.pem
	I0416 00:59:58.527809   61500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148972.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 00:59:58.538579   61500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 00:59:58.549188   61500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 00:59:58.553688   61500 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0416 00:59:58.553732   61500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 00:59:58.559175   61500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 00:59:58.570142   61500 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 00:59:58.574657   61500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0416 00:59:58.580560   61500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0416 00:59:58.586319   61500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0416 00:59:58.593938   61500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0416 00:59:58.599808   61500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0416 00:59:58.605583   61500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
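
The openssl x509 -noout -checkend 86400 invocations above ask whether a certificate expires within the next 86400 seconds (24 hours): openssl exits 0 when the certificate is still valid for that window and non-zero when it is not. Each existing control-plane cert (apiserver, etcd, front-proxy client) is checked this way before StartCluster continues with the restart. Below is a minimal Go sketch of an equivalent in-process check; this is illustrative only, not minikube's code, and the path used in main is just an example.

// Illustrative sketch: an in-process equivalent of
// `openssl x509 -noout -checkend 86400`, reporting whether a PEM
// certificate expires within the given duration.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(pemPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block found in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when "now + d" is past NotAfter, i.e. the cert expires within d.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Example path only; the log above checks certs under /var/lib/minikube/certs.
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 86400*time.Second)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	if expiring {
		fmt.Println("Certificate will expire") // mirrors openssl's non-zero exit
		os.Exit(1)
	}
	fmt.Println("Certificate will not expire")
}
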
	I0416 00:59:58.611301   61500 kubeadm.go:391] StartCluster: {Name:no-preload-572602 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.0-rc.2 ClusterName:no-preload-572602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 00:59:58.611385   61500 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0416 00:59:58.611439   61500 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 00:59:58.655244   61500 cri.go:89] found id: ""
	I0416 00:59:58.655315   61500 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0416 00:59:58.667067   61500 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0416 00:59:58.667082   61500 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0416 00:59:58.667088   61500 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0416 00:59:58.667128   61500 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0416 00:59:58.678615   61500 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0416 00:59:58.680097   61500 kubeconfig.go:125] found "no-preload-572602" server: "https://192.168.39.121:8443"
	I0416 00:59:58.683135   61500 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0416 00:59:58.695291   61500 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.121
	I0416 00:59:58.695323   61500 kubeadm.go:1154] stopping kube-system containers ...
	I0416 00:59:58.695337   61500 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0416 00:59:58.695380   61500 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 00:59:58.731743   61500 cri.go:89] found id: ""
	I0416 00:59:58.731832   61500 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0416 00:59:58.748125   61500 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 00:59:58.757845   61500 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 00:59:58.757865   61500 kubeadm.go:156] found existing configuration files:
	
	I0416 00:59:58.757918   61500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 00:59:58.766993   61500 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 00:59:58.767036   61500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 00:59:58.776831   61500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 00:59:58.786420   61500 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 00:59:58.786467   61500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 00:59:58.796067   61500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 00:59:58.805385   61500 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 00:59:58.805511   61500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 00:59:58.815313   61500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 00:59:58.826551   61500 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 00:59:58.826603   61500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 00:59:58.836652   61500 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 00:59:58.848671   61500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 00:59:58.967511   61500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:00.416009   61500 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.44846758s)
	I0416 01:00:00.416041   61500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:00.657784   61500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:00.741694   61500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:00.876550   61500 api_server.go:52] waiting for apiserver process to appear ...
	I0416 01:00:00.876630   61500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:01.377586   61500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:01.877647   61500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:01.950167   61500 api_server.go:72] duration metric: took 1.073614574s to wait for apiserver process to appear ...
	I0416 01:00:01.950201   61500 api_server.go:88] waiting for apiserver healthz status ...
	I0416 01:00:01.950224   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 01:00:01.950854   61500 api_server.go:269] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
	I0416 01:00:02.450437   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 00:59:59.421878   62747 main.go:141] libmachine: (embed-certs-617092) Calling .Start
	I0416 00:59:59.422036   62747 main.go:141] libmachine: (embed-certs-617092) Ensuring networks are active...
	I0416 00:59:59.422646   62747 main.go:141] libmachine: (embed-certs-617092) Ensuring network default is active
	I0416 00:59:59.422931   62747 main.go:141] libmachine: (embed-certs-617092) Ensuring network mk-embed-certs-617092 is active
	I0416 00:59:59.423360   62747 main.go:141] libmachine: (embed-certs-617092) Getting domain xml...
	I0416 00:59:59.424005   62747 main.go:141] libmachine: (embed-certs-617092) Creating domain...
	I0416 01:00:00.682582   62747 main.go:141] libmachine: (embed-certs-617092) Waiting to get IP...
	I0416 01:00:00.683684   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:00.684222   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:00.684277   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:00.684198   63257 retry.go:31] will retry after 196.582767ms: waiting for machine to come up
	I0416 01:00:00.882954   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:00.883544   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:00.883577   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:00.883482   63257 retry.go:31] will retry after 309.274692ms: waiting for machine to come up
	I0416 01:00:01.193848   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:01.194286   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:01.194325   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:01.194234   63257 retry.go:31] will retry after 379.332728ms: waiting for machine to come up
	I0416 01:00:01.574938   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:01.575371   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:01.575400   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:01.575318   63257 retry.go:31] will retry after 445.10423ms: waiting for machine to come up
	I0416 01:00:02.022081   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:02.022612   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:02.022636   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:02.022570   63257 retry.go:31] will retry after 692.025501ms: waiting for machine to come up
	I0416 01:00:02.716548   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:02.717032   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:02.717061   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:02.716992   63257 retry.go:31] will retry after 735.44304ms: waiting for machine to come up
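
The retry.go:31 lines above show libmachine waiting for the embed-certs-617092 domain to obtain an IP address, sleeping for a randomized and roughly growing delay between lookups (196ms, 309ms, 379ms, 445ms, 692ms, ...). The following is a generic sketch of that wait-with-backoff pattern, not minikube's retry implementation; lookupIP is a hypothetical stand-in for the real DHCP-lease query.

// Illustrative wait-for-IP loop with jittered, growing delays between
// attempts, similar in spirit to the retry lines in the log above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical stand-in for querying the hypervisor's
// DHCP leases for the machine's address.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

func waitForIP(timeout time.Duration) (string, error) {
	base := 200 * time.Millisecond
	deadline := time.Now().Add(timeout)
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		ip, err := lookupIP()
		if err == nil {
			return ip, nil
		}
		// Delay grows with the attempt number, plus random jitter.
		delay := base*time.Duration(attempt) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
	}
	return "", errors.New("timed out waiting for machine to come up")
}

func main() {
	if ip, err := waitForIP(30 * time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("machine IP:", ip)
	}
}
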
	I0416 01:00:02.891638   62139 crio.go:462] duration metric: took 1.823700483s to copy over tarball
	I0416 01:00:02.891723   62139 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0416 01:00:06.137253   62139 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.245498092s)
	I0416 01:00:06.137283   62139 crio.go:469] duration metric: took 3.245614896s to extract the tarball
	I0416 01:00:06.137292   62139 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0416 01:00:06.181260   62139 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 01:00:06.224646   62139 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0416 01:00:06.224682   62139 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0416 01:00:06.224762   62139 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 01:00:06.224815   62139 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0416 01:00:06.224851   62139 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0416 01:00:06.224821   62139 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0416 01:00:06.224768   62139 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0416 01:00:06.224797   62139 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0416 01:00:06.225121   62139 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0416 01:00:06.224797   62139 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0416 01:00:06.226485   62139 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0416 01:00:06.226505   62139 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0416 01:00:06.226516   62139 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0416 01:00:06.226580   62139 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0416 01:00:06.226729   62139 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0416 01:00:06.227296   62139 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 01:00:06.227311   62139 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0416 01:00:06.227315   62139 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0416 01:00:06.397101   62139 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0416 01:00:06.431142   62139 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0416 01:00:06.433152   62139 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0416 01:00:06.433876   62139 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0416 01:00:06.434844   62139 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0416 01:00:06.441478   62139 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0416 01:00:06.441524   62139 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0416 01:00:06.441558   62139 ssh_runner.go:195] Run: which crictl
	I0416 01:00:06.450391   62139 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0416 01:00:06.506375   62139 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0416 01:00:06.540080   62139 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0416 01:00:06.540250   62139 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0416 01:00:06.540121   62139 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0416 01:00:06.540299   62139 ssh_runner.go:195] Run: which crictl
	I0416 01:00:06.540305   62139 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0416 01:00:06.540343   62139 ssh_runner.go:195] Run: which crictl
	I0416 01:00:06.613287   62139 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0416 01:00:06.613305   62139 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0416 01:00:06.613334   62139 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0416 01:00:06.613339   62139 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0416 01:00:06.613381   62139 ssh_runner.go:195] Run: which crictl
	I0416 01:00:06.613381   62139 ssh_runner.go:195] Run: which crictl
	I0416 01:00:06.613490   62139 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0416 01:00:06.613522   62139 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0416 01:00:06.613569   62139 ssh_runner.go:195] Run: which crictl
	I0416 01:00:06.613384   62139 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0416 01:00:06.613620   62139 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0416 01:00:06.613657   62139 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0416 01:00:06.613716   62139 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0416 01:00:06.613722   62139 ssh_runner.go:195] Run: which crictl
	I0416 01:00:06.613665   62139 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0416 01:00:06.619153   62139 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0416 01:00:06.638065   62139 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0416 01:00:06.734018   62139 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0416 01:00:06.734134   62139 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0416 01:00:06.749273   62139 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0416 01:00:06.750536   62139 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0416 01:00:06.750576   62139 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0416 01:00:06.750655   62139 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0416 01:00:06.750594   62139 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0416 01:00:06.790321   62139 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0416 01:00:06.803564   62139 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0416 01:00:07.060494   62139 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 01:00:05.541219   61500 api_server.go:279] https://192.168.39.121:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0416 01:00:05.541261   61500 api_server.go:103] status: https://192.168.39.121:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0416 01:00:05.541279   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 01:00:05.585252   61500 api_server.go:279] https://192.168.39.121:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0416 01:00:05.585284   61500 api_server.go:103] status: https://192.168.39.121:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0416 01:00:05.950871   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 01:00:05.970682   61500 api_server.go:279] https://192.168.39.121:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0416 01:00:05.970725   61500 api_server.go:103] status: https://192.168.39.121:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0416 01:00:06.450780   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 01:00:06.457855   61500 api_server.go:279] https://192.168.39.121:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0416 01:00:06.457888   61500 api_server.go:103] status: https://192.168.39.121:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0416 01:00:06.950519   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 01:00:06.955476   61500 api_server.go:279] https://192.168.39.121:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0416 01:00:06.955505   61500 api_server.go:103] status: https://192.168.39.121:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0416 01:00:07.451155   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 01:00:07.463138   61500 api_server.go:279] https://192.168.39.121:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0416 01:00:07.463172   61500 api_server.go:103] status: https://192.168.39.121:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0416 01:00:03.453566   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:03.454098   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:03.454131   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:03.454033   63257 retry.go:31] will retry after 838.732671ms: waiting for machine to come up
	I0416 01:00:04.294692   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:04.295209   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:04.295237   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:04.295158   63257 retry.go:31] will retry after 1.302969512s: waiting for machine to come up
	I0416 01:00:05.599886   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:05.600406   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:05.600435   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:05.600378   63257 retry.go:31] will retry after 1.199501225s: waiting for machine to come up
	I0416 01:00:06.801741   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:06.802134   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:06.802153   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:06.802107   63257 retry.go:31] will retry after 1.631018672s: waiting for machine to come up
	I0416 01:00:07.951263   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 01:00:07.961911   61500 api_server.go:279] https://192.168.39.121:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0416 01:00:07.961946   61500 api_server.go:103] status: https://192.168.39.121:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0416 01:00:08.450413   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 01:00:08.458651   61500 api_server.go:279] https://192.168.39.121:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0416 01:00:08.458683   61500 api_server.go:103] status: https://192.168.39.121:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0416 01:00:08.950297   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 01:00:08.955847   61500 api_server.go:279] https://192.168.39.121:8443/healthz returned 200:
	ok
	I0416 01:00:08.964393   61500 api_server.go:141] control plane version: v1.30.0-rc.2
	I0416 01:00:08.964422   61500 api_server.go:131] duration metric: took 7.01421218s to wait for apiserver health ...
	I0416 01:00:08.964432   61500 cni.go:84] Creating CNI manager for ""
	I0416 01:00:08.964445   61500 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 01:00:08.966249   61500 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
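
The api_server.go lines above poll https://192.168.39.121:8443/healthz roughly every 500ms: connection-refused errors, 403 responses (the anonymous probe is rejected, presumably because the RBAC bootstrap roles that permit unauthenticated access to /healthz are not in place yet) and verbose 500 bodies all count as "not ready", and the wait ends once the endpoint returns 200 "ok", here after about 7 seconds. Below is a minimal sketch of such a poll loop; it is illustrative only, not minikube's code, and TLS verification is skipped purely for brevity.

// Illustrative sketch: poll an apiserver /healthz endpoint until it
// returns 200, treating connection errors and 403/500 responses as
// "not ready yet".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Assumption for brevity: skip TLS verification instead of loading the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Printf("stopped: %v\n", err) // e.g. connection refused while the apiserver starts
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, body)
				return nil
			}
			fmt.Printf("%s returned %d\n", url, resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond) // the log shows roughly 500ms between checks
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.39.121:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
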
	I0416 01:00:07.207951   62139 cache_images.go:92] duration metric: took 983.249797ms to LoadCachedImages
	W0416 01:00:07.286619   62139 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0416 01:00:07.286654   62139 kubeadm.go:928] updating node { 192.168.83.98 8443 v1.20.0 crio true true} ...
	I0416 01:00:07.286815   62139 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-800769 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.98
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-800769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 01:00:07.286916   62139 ssh_runner.go:195] Run: crio config
	I0416 01:00:07.338016   62139 cni.go:84] Creating CNI manager for ""
	I0416 01:00:07.338038   62139 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 01:00:07.338049   62139 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 01:00:07.338072   62139 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.98 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-800769 NodeName:old-k8s-version-800769 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.98"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.98 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0416 01:00:07.338207   62139 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.98
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-800769"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.98
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.98"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0416 01:00:07.338273   62139 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0416 01:00:07.349347   62139 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 01:00:07.349432   62139 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0416 01:00:07.361389   62139 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0416 01:00:07.379714   62139 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 01:00:07.397953   62139 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0416 01:00:07.416901   62139 ssh_runner.go:195] Run: grep 192.168.83.98	control-plane.minikube.internal$ /etc/hosts
	I0416 01:00:07.420904   62139 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.98	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 01:00:07.436685   62139 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 01:00:07.567945   62139 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 01:00:07.587829   62139 certs.go:68] Setting up /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769 for IP: 192.168.83.98
	I0416 01:00:07.587858   62139 certs.go:194] generating shared ca certs ...
	I0416 01:00:07.587880   62139 certs.go:226] acquiring lock for ca certs: {Name:mkcfa1570e683d94647c63485e1bbb8cf0788316 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:00:07.588087   62139 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key
	I0416 01:00:07.588155   62139 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key
	I0416 01:00:07.588171   62139 certs.go:256] generating profile certs ...
	I0416 01:00:07.606683   62139 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/client.key
	I0416 01:00:07.606823   62139 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/apiserver.key.efc35655
	I0416 01:00:07.606872   62139 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/proxy-client.key
	I0416 01:00:07.607040   62139 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem (1338 bytes)
	W0416 01:00:07.607087   62139 certs.go:480] ignoring /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897_empty.pem, impossibly tiny 0 bytes
	I0416 01:00:07.607114   62139 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem (1679 bytes)
	I0416 01:00:07.607172   62139 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem (1082 bytes)
	I0416 01:00:07.607204   62139 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem (1123 bytes)
	I0416 01:00:07.607234   62139 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem (1675 bytes)
	I0416 01:00:07.607283   62139 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem (1708 bytes)
	I0416 01:00:07.608127   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 01:00:07.658868   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 01:00:07.703378   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 01:00:07.743203   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0416 01:00:07.787335   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0416 01:00:07.823630   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0416 01:00:07.854198   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 01:00:07.881813   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0416 01:00:07.909698   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 01:00:07.935341   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem --> /usr/share/ca-certificates/14897.pem (1338 bytes)
	I0416 01:00:07.963102   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /usr/share/ca-certificates/148972.pem (1708 bytes)
	I0416 01:00:07.989657   62139 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 01:00:08.009203   62139 ssh_runner.go:195] Run: openssl version
	I0416 01:00:08.015677   62139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 01:00:08.027077   62139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:00:08.032096   62139 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:00:08.032179   62139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:00:08.038672   62139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 01:00:08.054256   62139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14897.pem && ln -fs /usr/share/ca-certificates/14897.pem /etc/ssl/certs/14897.pem"
	I0416 01:00:08.065287   62139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14897.pem
	I0416 01:00:08.069846   62139 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 23:49 /usr/share/ca-certificates/14897.pem
	I0416 01:00:08.069907   62139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14897.pem
	I0416 01:00:08.075899   62139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14897.pem /etc/ssl/certs/51391683.0"
	I0416 01:00:08.087272   62139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148972.pem && ln -fs /usr/share/ca-certificates/148972.pem /etc/ssl/certs/148972.pem"
	I0416 01:00:08.098494   62139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148972.pem
	I0416 01:00:08.103168   62139 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 23:49 /usr/share/ca-certificates/148972.pem
	I0416 01:00:08.103246   62139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148972.pem
	I0416 01:00:08.109202   62139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148972.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 01:00:08.120143   62139 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 01:00:08.125027   62139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0416 01:00:08.131716   62139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0416 01:00:08.138024   62139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0416 01:00:08.144291   62139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0416 01:00:08.150741   62139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0416 01:00:08.156931   62139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0416 01:00:08.163147   62139 kubeadm.go:391] StartCluster: {Name:old-k8s-version-800769 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-800769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.98 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 01:00:08.163254   62139 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0416 01:00:08.163298   62139 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 01:00:08.201923   62139 cri.go:89] found id: ""
	I0416 01:00:08.202000   62139 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0416 01:00:08.212441   62139 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0416 01:00:08.212462   62139 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0416 01:00:08.212467   62139 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0416 01:00:08.212514   62139 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0416 01:00:08.222702   62139 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0416 01:00:08.223670   62139 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-800769" does not appear in /home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0416 01:00:08.224332   62139 kubeconfig.go:62] /home/jenkins/minikube-integration/18647-7542/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-800769" cluster setting kubeconfig missing "old-k8s-version-800769" context setting]
	I0416 01:00:08.225340   62139 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/kubeconfig: {Name:mkbb3b028de7d57df8335e83f6dfa1b0eacb2fb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:00:08.343775   62139 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0416 01:00:08.355942   62139 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.83.98
	I0416 01:00:08.355986   62139 kubeadm.go:1154] stopping kube-system containers ...
	I0416 01:00:08.356007   62139 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0416 01:00:08.356081   62139 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 01:00:08.398894   62139 cri.go:89] found id: ""
	I0416 01:00:08.398976   62139 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0416 01:00:08.416343   62139 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 01:00:08.426901   62139 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 01:00:08.426926   62139 kubeadm.go:156] found existing configuration files:
	
	I0416 01:00:08.426981   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 01:00:08.437870   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 01:00:08.437942   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 01:00:08.452256   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 01:00:08.466375   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 01:00:08.466447   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 01:00:08.477246   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 01:00:08.487547   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 01:00:08.487615   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 01:00:08.504171   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 01:00:08.515265   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 01:00:08.515332   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 01:00:08.525186   62139 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 01:00:08.535381   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:08.657456   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:09.504421   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:09.781478   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:09.950913   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:10.044772   62139 api_server.go:52] waiting for apiserver process to appear ...
	I0416 01:00:10.044871   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:10.545002   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:11.045664   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:11.545083   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:12.045593   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:08.967643   61500 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0416 01:00:08.986743   61500 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0416 01:00:09.011229   61500 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 01:00:09.022810   61500 system_pods.go:59] 8 kube-system pods found
	I0416 01:00:09.022858   61500 system_pods.go:61] "coredns-7db6d8ff4d-xxlkb" [b1ec79ef-e16c-4feb-94ec-5dc85645867f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 01:00:09.022869   61500 system_pods.go:61] "etcd-no-preload-572602" [f29f3efe-bee4-4d8c-9d49-68008ad50a9d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0416 01:00:09.022881   61500 system_pods.go:61] "kube-apiserver-no-preload-572602" [dd740f94-bfd5-4043-9522-5b8a932690cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0416 01:00:09.022893   61500 system_pods.go:61] "kube-controller-manager-no-preload-572602" [2778e1a7-a7e3-4ad6-a265-552e78b6b195] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0416 01:00:09.022901   61500 system_pods.go:61] "kube-proxy-v9fmp" [70ab6236-c758-48eb-85a7-8f7721730a20] Running
	I0416 01:00:09.022908   61500 system_pods.go:61] "kube-scheduler-no-preload-572602" [bb8650bb-657e-49f1-9cee-4437879be44d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0416 01:00:09.022919   61500 system_pods.go:61] "metrics-server-569cc877fc-llsfr" [ad421803-6236-44df-a15d-c890a3a10dff] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 01:00:09.022925   61500 system_pods.go:61] "storage-provisioner" [ec2dd6e2-33db-4888-8945-9879821c92fc] Running
	I0416 01:00:09.022934   61500 system_pods.go:74] duration metric: took 11.661356ms to wait for pod list to return data ...
	I0416 01:00:09.022950   61500 node_conditions.go:102] verifying NodePressure condition ...
	I0416 01:00:09.027411   61500 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 01:00:09.027445   61500 node_conditions.go:123] node cpu capacity is 2
	I0416 01:00:09.027459   61500 node_conditions.go:105] duration metric: took 4.503043ms to run NodePressure ...
	I0416 01:00:09.027480   61500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:09.307796   61500 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0416 01:00:09.313534   61500 kubeadm.go:733] kubelet initialised
	I0416 01:00:09.313567   61500 kubeadm.go:734] duration metric: took 5.734401ms waiting for restarted kubelet to initialise ...
	I0416 01:00:09.313580   61500 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:00:09.320900   61500 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-xxlkb" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:09.327569   61500 pod_ready.go:97] node "no-preload-572602" hosting pod "coredns-7db6d8ff4d-xxlkb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-572602" has status "Ready":"False"
	I0416 01:00:09.327606   61500 pod_ready.go:81] duration metric: took 6.67541ms for pod "coredns-7db6d8ff4d-xxlkb" in "kube-system" namespace to be "Ready" ...
	E0416 01:00:09.327621   61500 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-572602" hosting pod "coredns-7db6d8ff4d-xxlkb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-572602" has status "Ready":"False"
	I0416 01:00:09.327633   61500 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:09.333714   61500 pod_ready.go:97] node "no-preload-572602" hosting pod "etcd-no-preload-572602" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-572602" has status "Ready":"False"
	I0416 01:00:09.333746   61500 pod_ready.go:81] duration metric: took 6.094825ms for pod "etcd-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	E0416 01:00:09.333759   61500 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-572602" hosting pod "etcd-no-preload-572602" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-572602" has status "Ready":"False"
	I0416 01:00:09.333768   61500 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:09.338980   61500 pod_ready.go:97] node "no-preload-572602" hosting pod "kube-apiserver-no-preload-572602" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-572602" has status "Ready":"False"
	I0416 01:00:09.339006   61500 pod_ready.go:81] duration metric: took 5.230122ms for pod "kube-apiserver-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	E0416 01:00:09.339017   61500 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-572602" hosting pod "kube-apiserver-no-preload-572602" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-572602" has status "Ready":"False"
	I0416 01:00:09.339033   61500 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:09.415418   61500 pod_ready.go:97] node "no-preload-572602" hosting pod "kube-controller-manager-no-preload-572602" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-572602" has status "Ready":"False"
	I0416 01:00:09.415450   61500 pod_ready.go:81] duration metric: took 76.40508ms for pod "kube-controller-manager-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	E0416 01:00:09.415462   61500 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-572602" hosting pod "kube-controller-manager-no-preload-572602" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-572602" has status "Ready":"False"
	I0416 01:00:09.415470   61500 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-v9fmp" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:09.815907   61500 pod_ready.go:92] pod "kube-proxy-v9fmp" in "kube-system" namespace has status "Ready":"True"
	I0416 01:00:09.815945   61500 pod_ready.go:81] duration metric: took 400.462786ms for pod "kube-proxy-v9fmp" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:09.815959   61500 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:11.824269   61500 pod_ready.go:102] pod "kube-scheduler-no-preload-572602" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:08.434523   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:08.435039   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:08.435067   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:08.434988   63257 retry.go:31] will retry after 2.819136125s: waiting for machine to come up
	I0416 01:00:11.256238   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:11.256704   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:11.256722   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:11.256664   63257 retry.go:31] will retry after 3.074881299s: waiting for machine to come up
	I0416 01:00:12.545696   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:13.045935   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:13.545810   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:14.045682   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:14.545524   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:15.045110   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:15.545792   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:16.045843   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:16.545684   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:17.045401   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:14.322436   61500 pod_ready.go:102] pod "kube-scheduler-no-preload-572602" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:16.821648   61500 pod_ready.go:102] pod "kube-scheduler-no-preload-572602" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:14.335004   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:14.335391   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:14.335437   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:14.335343   63257 retry.go:31] will retry after 4.248377683s: waiting for machine to come up
	I0416 01:00:20.014452   61267 start.go:364] duration metric: took 53.932663013s to acquireMachinesLock for "default-k8s-diff-port-653942"
	I0416 01:00:20.014507   61267 start.go:96] Skipping create...Using existing machine configuration
	I0416 01:00:20.014515   61267 fix.go:54] fixHost starting: 
	I0416 01:00:20.014929   61267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:00:20.014964   61267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:00:20.033099   61267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42949
	I0416 01:00:20.033554   61267 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:00:20.034077   61267 main.go:141] libmachine: Using API Version  1
	I0416 01:00:20.034104   61267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:00:20.034458   61267 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:00:20.034665   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 01:00:20.034812   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetState
	I0416 01:00:20.036559   61267 fix.go:112] recreateIfNeeded on default-k8s-diff-port-653942: state=Stopped err=<nil>
	I0416 01:00:20.036588   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	W0416 01:00:20.036751   61267 fix.go:138] unexpected machine state, will restart: <nil>
	I0416 01:00:20.038774   61267 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-653942" ...
	I0416 01:00:18.588875   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.589320   62747 main.go:141] libmachine: (embed-certs-617092) Found IP for machine: 192.168.61.225
	I0416 01:00:18.589347   62747 main.go:141] libmachine: (embed-certs-617092) Reserving static IP address...
	I0416 01:00:18.589362   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has current primary IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.589699   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "embed-certs-617092", mac: "52:54:00:86:1b:62", ip: "192.168.61.225"} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:18.589728   62747 main.go:141] libmachine: (embed-certs-617092) Reserved static IP address: 192.168.61.225
	I0416 01:00:18.589752   62747 main.go:141] libmachine: (embed-certs-617092) DBG | skip adding static IP to network mk-embed-certs-617092 - found existing host DHCP lease matching {name: "embed-certs-617092", mac: "52:54:00:86:1b:62", ip: "192.168.61.225"}
	I0416 01:00:18.589771   62747 main.go:141] libmachine: (embed-certs-617092) DBG | Getting to WaitForSSH function...
	I0416 01:00:18.589808   62747 main.go:141] libmachine: (embed-certs-617092) Waiting for SSH to be available...
	I0416 01:00:18.591590   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.591858   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:18.591885   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.591995   62747 main.go:141] libmachine: (embed-certs-617092) DBG | Using SSH client type: external
	I0416 01:00:18.592027   62747 main.go:141] libmachine: (embed-certs-617092) DBG | Using SSH private key: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/embed-certs-617092/id_rsa (-rw-------)
	I0416 01:00:18.592058   62747 main.go:141] libmachine: (embed-certs-617092) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.225 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18647-7542/.minikube/machines/embed-certs-617092/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0416 01:00:18.592072   62747 main.go:141] libmachine: (embed-certs-617092) DBG | About to run SSH command:
	I0416 01:00:18.592084   62747 main.go:141] libmachine: (embed-certs-617092) DBG | exit 0
	I0416 01:00:18.717336   62747 main.go:141] libmachine: (embed-certs-617092) DBG | SSH cmd err, output: <nil>: 
	I0416 01:00:18.717759   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetConfigRaw
	I0416 01:00:18.718347   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetIP
	I0416 01:00:18.720640   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.721040   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:18.721086   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.721300   62747 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092/config.json ...
	I0416 01:00:18.721481   62747 machine.go:94] provisionDockerMachine start ...
	I0416 01:00:18.721501   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 01:00:18.721700   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:00:18.723610   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.723924   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:18.723946   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.724126   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:00:18.724345   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:18.724512   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:18.724616   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:00:18.724737   62747 main.go:141] libmachine: Using SSH client type: native
	I0416 01:00:18.725049   62747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.225 22 <nil> <nil>}
	I0416 01:00:18.725199   62747 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 01:00:18.834014   62747 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 01:00:18.834041   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetMachineName
	I0416 01:00:18.834257   62747 buildroot.go:166] provisioning hostname "embed-certs-617092"
	I0416 01:00:18.834280   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetMachineName
	I0416 01:00:18.834495   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:00:18.836959   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.837282   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:18.837333   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.837417   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:00:18.837588   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:18.837755   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:18.837962   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:00:18.838152   62747 main.go:141] libmachine: Using SSH client type: native
	I0416 01:00:18.838324   62747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.225 22 <nil> <nil>}
	I0416 01:00:18.838342   62747 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-617092 && echo "embed-certs-617092" | sudo tee /etc/hostname
	I0416 01:00:18.959828   62747 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-617092
	
	I0416 01:00:18.959865   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:00:18.962661   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.962997   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:18.963029   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.963174   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:00:18.963351   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:18.963488   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:18.963609   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:00:18.963747   62747 main.go:141] libmachine: Using SSH client type: native
	I0416 01:00:18.963949   62747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.225 22 <nil> <nil>}
	I0416 01:00:18.963967   62747 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-617092' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-617092/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-617092' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 01:00:19.079309   62747 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 01:00:19.079341   62747 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18647-7542/.minikube CaCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18647-7542/.minikube}
	I0416 01:00:19.079400   62747 buildroot.go:174] setting up certificates
	I0416 01:00:19.079409   62747 provision.go:84] configureAuth start
	I0416 01:00:19.079423   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetMachineName
	I0416 01:00:19.079723   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetIP
	I0416 01:00:19.082430   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.082809   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:19.082838   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.082994   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:00:19.085476   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.085802   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:19.085825   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.085952   62747 provision.go:143] copyHostCerts
	I0416 01:00:19.086006   62747 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem, removing ...
	I0416 01:00:19.086022   62747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem
	I0416 01:00:19.086077   62747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem (1123 bytes)
	I0416 01:00:19.086165   62747 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem, removing ...
	I0416 01:00:19.086174   62747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem
	I0416 01:00:19.086193   62747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem (1675 bytes)
	I0416 01:00:19.086244   62747 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem, removing ...
	I0416 01:00:19.086251   62747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem
	I0416 01:00:19.086270   62747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem (1082 bytes)
	I0416 01:00:19.086336   62747 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem org=jenkins.embed-certs-617092 san=[127.0.0.1 192.168.61.225 embed-certs-617092 localhost minikube]
	I0416 01:00:19.330622   62747 provision.go:177] copyRemoteCerts
	I0416 01:00:19.330687   62747 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 01:00:19.330712   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:00:19.333264   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.333618   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:19.333645   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.333798   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:00:19.333979   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:19.334122   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:00:19.334235   62747 sshutil.go:53] new ssh client: &{IP:192.168.61.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/embed-certs-617092/id_rsa Username:docker}
	I0416 01:00:19.415820   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0416 01:00:19.442985   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0416 01:00:19.468427   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0416 01:00:19.496640   62747 provision.go:87] duration metric: took 417.215523ms to configureAuth
	I0416 01:00:19.496676   62747 buildroot.go:189] setting minikube options for container-runtime
	I0416 01:00:19.496857   62747 config.go:182] Loaded profile config "embed-certs-617092": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 01:00:19.496929   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:00:19.499561   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.499933   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:19.499981   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.500132   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:00:19.500352   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:19.500529   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:19.500671   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:00:19.500823   62747 main.go:141] libmachine: Using SSH client type: native
	I0416 01:00:19.501026   62747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.225 22 <nil> <nil>}
	I0416 01:00:19.501046   62747 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0416 01:00:19.775400   62747 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0416 01:00:19.775434   62747 machine.go:97] duration metric: took 1.053938445s to provisionDockerMachine
	I0416 01:00:19.775448   62747 start.go:293] postStartSetup for "embed-certs-617092" (driver="kvm2")
	I0416 01:00:19.775462   62747 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 01:00:19.775484   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 01:00:19.775853   62747 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 01:00:19.775886   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:00:19.778961   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.779327   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:19.779356   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.779510   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:00:19.779723   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:19.779883   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:00:19.780008   62747 sshutil.go:53] new ssh client: &{IP:192.168.61.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/embed-certs-617092/id_rsa Username:docker}
	I0416 01:00:19.865236   62747 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 01:00:19.869769   62747 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 01:00:19.869800   62747 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/addons for local assets ...
	I0416 01:00:19.869865   62747 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/files for local assets ...
	I0416 01:00:19.870010   62747 filesync.go:149] local asset: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem -> 148972.pem in /etc/ssl/certs
	I0416 01:00:19.870111   62747 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 01:00:19.880477   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /etc/ssl/certs/148972.pem (1708 bytes)
	I0416 01:00:19.905555   62747 start.go:296] duration metric: took 130.091868ms for postStartSetup
	I0416 01:00:19.905603   62747 fix.go:56] duration metric: took 20.511199999s for fixHost
	I0416 01:00:19.905629   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:00:19.908252   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.908593   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:19.908631   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.908770   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:00:19.908972   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:19.909129   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:19.909284   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:00:19.909448   62747 main.go:141] libmachine: Using SSH client type: native
	I0416 01:00:19.909607   62747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.225 22 <nil> <nil>}
	I0416 01:00:19.909622   62747 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 01:00:20.014222   62747 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713229219.981820926
	
	I0416 01:00:20.014251   62747 fix.go:216] guest clock: 1713229219.981820926
	I0416 01:00:20.014262   62747 fix.go:229] Guest: 2024-04-16 01:00:19.981820926 +0000 UTC Remote: 2024-04-16 01:00:19.90560817 +0000 UTC m=+97.152894999 (delta=76.212756ms)
	I0416 01:00:20.014331   62747 fix.go:200] guest clock delta is within tolerance: 76.212756ms
	I0416 01:00:20.014339   62747 start.go:83] releasing machines lock for "embed-certs-617092", held for 20.619971021s
	I0416 01:00:20.014377   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 01:00:20.014676   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetIP
	I0416 01:00:20.017771   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:20.018204   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:20.018236   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:20.018446   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 01:00:20.018991   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 01:00:20.019172   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 01:00:20.019260   62747 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 01:00:20.019299   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:00:20.019439   62747 ssh_runner.go:195] Run: cat /version.json
	I0416 01:00:20.019466   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:00:20.022283   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:20.022554   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:20.022664   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:20.022688   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:20.022897   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:00:20.023088   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:20.023150   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:20.023177   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:20.023281   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:00:20.023431   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:00:20.023431   62747 sshutil.go:53] new ssh client: &{IP:192.168.61.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/embed-certs-617092/id_rsa Username:docker}
	I0416 01:00:20.023791   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:20.023942   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:00:20.024084   62747 sshutil.go:53] new ssh client: &{IP:192.168.61.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/embed-certs-617092/id_rsa Username:docker}
	I0416 01:00:20.138251   62747 ssh_runner.go:195] Run: systemctl --version
	I0416 01:00:20.145100   62747 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0416 01:00:20.299049   62747 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 01:00:20.307080   62747 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 01:00:20.307177   62747 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 01:00:20.326056   62747 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 01:00:20.326085   62747 start.go:494] detecting cgroup driver to use...
	I0416 01:00:20.326166   62747 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 01:00:20.343297   62747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 01:00:20.358136   62747 docker.go:217] disabling cri-docker service (if available) ...
	I0416 01:00:20.358201   62747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 01:00:20.372936   62747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 01:00:20.387473   62747 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 01:00:20.515721   62747 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 01:00:20.680319   62747 docker.go:233] disabling docker service ...
	I0416 01:00:20.680413   62747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 01:00:20.700816   62747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 01:00:20.724097   62747 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 01:00:20.885812   62747 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 01:00:21.037890   62747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0416 01:00:21.055670   62747 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 01:00:21.078466   62747 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0416 01:00:21.078533   62747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:21.090135   62747 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 01:00:21.090200   62747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:21.106122   62747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:21.123844   62747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:21.134923   62747 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 01:00:21.153565   62747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:21.164751   62747 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:21.184880   62747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:21.197711   62747 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 01:00:21.208615   62747 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0416 01:00:21.208669   62747 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0416 01:00:21.223906   62747 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 01:00:21.234873   62747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 01:00:21.405921   62747 ssh_runner.go:195] Run: sudo systemctl restart crio
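The sed commands above edit the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) before crio is restarted. A quick way to confirm the result on the node is a sketch like the following; the key names come straight from the commands above, while the surrounding file layout is assumed to be the stock drop-in:

    # Show the keys that the sed pass above rewrites in the CRI-O drop-in
    grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # Expected values after this run (grounded in the commands above):
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [ "net.ipv4.ip_unprivileged_port_start=0", ... ]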
	I0416 01:00:21.564833   62747 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0416 01:00:21.564918   62747 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0416 01:00:21.570592   62747 start.go:562] Will wait 60s for crictl version
	I0416 01:00:21.570660   62747 ssh_runner.go:195] Run: which crictl
	I0416 01:00:21.575339   62747 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 01:00:21.617252   62747 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0416 01:00:21.617348   62747 ssh_runner.go:195] Run: crio --version
	I0416 01:00:21.648662   62747 ssh_runner.go:195] Run: crio --version
	I0416 01:00:21.683775   62747 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0416 01:00:17.544937   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:18.045282   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:18.545707   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:19.045821   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:19.545868   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:20.045069   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:20.545134   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:21.045607   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:21.545366   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:22.044998   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:20.040137   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .Start
	I0416 01:00:20.040355   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Ensuring networks are active...
	I0416 01:00:20.041103   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Ensuring network default is active
	I0416 01:00:20.041469   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Ensuring network mk-default-k8s-diff-port-653942 is active
	I0416 01:00:20.041869   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Getting domain xml...
	I0416 01:00:20.042474   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Creating domain...
	I0416 01:00:21.359375   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting to get IP...
	I0416 01:00:21.360333   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:21.360736   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:21.360807   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:21.360726   63461 retry.go:31] will retry after 290.970715ms: waiting for machine to come up
	I0416 01:00:21.653420   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:21.653883   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:21.653916   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:21.653841   63461 retry.go:31] will retry after 361.304618ms: waiting for machine to come up
	I0416 01:00:22.016540   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:22.017038   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:22.017071   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:22.016976   63461 retry.go:31] will retry after 411.249327ms: waiting for machine to come up
	I0416 01:00:18.322778   61500 pod_ready.go:92] pod "kube-scheduler-no-preload-572602" in "kube-system" namespace has status "Ready":"True"
	I0416 01:00:18.322799   61500 pod_ready.go:81] duration metric: took 8.506833323s for pod "kube-scheduler-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:18.322808   61500 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:20.328344   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:22.331157   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
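The pod_ready lines above are the harness polling the metrics-server pod until it reports Ready (up to the 4m budget noted earlier). A rough manual equivalent, assuming the profile name no-preload-572602 doubles as the kubeconfig context (as minikube normally arranges) and using the pod name from the log:

    # One-shot check of the Ready condition
    kubectl --context no-preload-572602 -n kube-system get pod metrics-server-569cc877fc-llsfr \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # Or block on it, mirroring the 4m wait used above
    kubectl --context no-preload-572602 -n kube-system wait --for=condition=Ready \
      pod/metrics-server-569cc877fc-llsfr --timeout=4m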
	I0416 01:00:21.685033   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetIP
	I0416 01:00:21.688407   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:21.688774   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:21.688809   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:21.689010   62747 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0416 01:00:21.693612   62747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 01:00:21.707524   62747 kubeadm.go:877] updating cluster {Name:embed-certs-617092 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.3 ClusterName:embed-certs-617092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.225 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 01:00:21.707657   62747 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 01:00:21.707699   62747 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 01:00:21.748697   62747 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0416 01:00:21.748785   62747 ssh_runner.go:195] Run: which lz4
	I0416 01:00:21.753521   62747 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0416 01:00:21.758125   62747 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0416 01:00:21.758158   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0416 01:00:22.545403   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:23.045303   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:23.544984   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:24.045882   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:24.545194   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:25.045010   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:25.545278   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:26.045702   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:26.545233   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:27.045814   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:22.429595   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:22.430124   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:22.430159   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:22.430087   63461 retry.go:31] will retry after 495.681984ms: waiting for machine to come up
	I0416 01:00:22.927476   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:22.927932   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:22.927959   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:22.927875   63461 retry.go:31] will retry after 506.264557ms: waiting for machine to come up
	I0416 01:00:23.435290   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:23.435742   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:23.435773   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:23.435689   63461 retry.go:31] will retry after 826.359716ms: waiting for machine to come up
	I0416 01:00:24.263672   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:24.264151   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:24.264183   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:24.264107   63461 retry.go:31] will retry after 873.35176ms: waiting for machine to come up
	I0416 01:00:25.138864   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:25.139318   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:25.139340   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:25.139308   63461 retry.go:31] will retry after 1.129546887s: waiting for machine to come up
	I0416 01:00:26.270364   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:26.270968   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:26.271000   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:26.270902   63461 retry.go:31] will retry after 1.441466368s: waiting for machine to come up
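The retry lines above are libmachine waiting for the freshly defined domain to obtain a DHCP lease on the mk-default-k8s-diff-port-653942 network, backing off a little longer on each attempt. A rough host-side equivalent, assuming virsh access to the same libvirt instance (this loop is a sketch, not what the driver actually runs):

    # Poll the libvirt network until the domain's MAC (from the log above) shows up with a lease
    until virsh net-dhcp-leases mk-default-k8s-diff-port-653942 | grep -q '52:54:00:4b:a2:47'; do
      sleep 2
    done
    virsh net-dhcp-leases mk-default-k8s-diff-port-653942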
	I0416 01:00:24.830562   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:26.832057   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:23.353811   62747 crio.go:462] duration metric: took 1.600325005s to copy over tarball
	I0416 01:00:23.353885   62747 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0416 01:00:25.815443   62747 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.46152973s)
	I0416 01:00:25.815479   62747 crio.go:469] duration metric: took 2.461639439s to extract the tarball
	I0416 01:00:25.815489   62747 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0416 01:00:25.862653   62747 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 01:00:25.914416   62747 crio.go:514] all images are preloaded for cri-o runtime.
	I0416 01:00:25.914444   62747 cache_images.go:84] Images are preloaded, skipping loading
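Condensed, the preload path traced above is: inspect the runtime's image store, and if the expected images are missing, copy the version-matched tarball to /preloaded.tar.lz4, unpack it into /var, delete it, and re-check. The same flow as plain shell, mirroring the commands already recorded in the log:

    sudo crictl images --output json      # empty store -> preload needed
    # (preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 is scp'd to /preloaded.tar.lz4)
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    rm /preloaded.tar.lz4
    sudo crictl images --output json      # now lists registry.k8s.io/kube-apiserver:v1.29.3 etc.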
	I0416 01:00:25.914454   62747 kubeadm.go:928] updating node { 192.168.61.225 8443 v1.29.3 crio true true} ...
	I0416 01:00:25.914586   62747 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-617092 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.225
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:embed-certs-617092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 01:00:25.914680   62747 ssh_runner.go:195] Run: crio config
	I0416 01:00:25.970736   62747 cni.go:84] Creating CNI manager for ""
	I0416 01:00:25.970760   62747 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 01:00:25.970773   62747 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 01:00:25.970796   62747 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.225 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-617092 NodeName:embed-certs-617092 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.225"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.225 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 01:00:25.970949   62747 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.225
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-617092"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.225
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.225"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0416 01:00:25.971022   62747 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0416 01:00:25.985111   62747 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 01:00:25.985198   62747 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0416 01:00:25.996306   62747 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0416 01:00:26.013401   62747 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 01:00:26.030094   62747 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
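At this point the generated config (the InitConfiguration/ClusterConfiguration documents plus the kubelet and kube-proxy configs shown above) has been written to /var/tmp/minikube/kubeadm.yaml.new on the node. It could be sanity-checked before the init phases run; the harness does not do this itself, but kubeadm in this version ships a validator:

    # Optional: validate the staged config with the same kubeadm binary minikube installed
    sudo /var/lib/minikube/binaries/v1.29.3/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new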
	I0416 01:00:26.048252   62747 ssh_runner.go:195] Run: grep 192.168.61.225	control-plane.minikube.internal$ /etc/hosts
	I0416 01:00:26.052717   62747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.225	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 01:00:26.069538   62747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 01:00:26.205867   62747 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 01:00:26.224210   62747 certs.go:68] Setting up /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092 for IP: 192.168.61.225
	I0416 01:00:26.224237   62747 certs.go:194] generating shared ca certs ...
	I0416 01:00:26.224259   62747 certs.go:226] acquiring lock for ca certs: {Name:mkcfa1570e683d94647c63485e1bbb8cf0788316 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:00:26.224459   62747 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key
	I0416 01:00:26.224520   62747 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key
	I0416 01:00:26.224532   62747 certs.go:256] generating profile certs ...
	I0416 01:00:26.224646   62747 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092/client.key
	I0416 01:00:26.224723   62747 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092/apiserver.key.383097d4
	I0416 01:00:26.224773   62747 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092/proxy-client.key
	I0416 01:00:26.224932   62747 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem (1338 bytes)
	W0416 01:00:26.224973   62747 certs.go:480] ignoring /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897_empty.pem, impossibly tiny 0 bytes
	I0416 01:00:26.224982   62747 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem (1679 bytes)
	I0416 01:00:26.225014   62747 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem (1082 bytes)
	I0416 01:00:26.225050   62747 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem (1123 bytes)
	I0416 01:00:26.225085   62747 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem (1675 bytes)
	I0416 01:00:26.225126   62747 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem (1708 bytes)
	I0416 01:00:26.225872   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 01:00:26.282272   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 01:00:26.329827   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 01:00:26.366744   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0416 01:00:26.405845   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0416 01:00:26.440535   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0416 01:00:26.465371   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 01:00:26.491633   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0416 01:00:26.518682   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 01:00:26.543992   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem --> /usr/share/ca-certificates/14897.pem (1338 bytes)
	I0416 01:00:26.573728   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /usr/share/ca-certificates/148972.pem (1708 bytes)
	I0416 01:00:26.602308   62747 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 01:00:26.622491   62747 ssh_runner.go:195] Run: openssl version
	I0416 01:00:26.628805   62747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 01:00:26.643163   62747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:00:26.648292   62747 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:00:26.648351   62747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:00:26.654890   62747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 01:00:26.668501   62747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14897.pem && ln -fs /usr/share/ca-certificates/14897.pem /etc/ssl/certs/14897.pem"
	I0416 01:00:26.682038   62747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14897.pem
	I0416 01:00:26.687327   62747 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 23:49 /usr/share/ca-certificates/14897.pem
	I0416 01:00:26.687388   62747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14897.pem
	I0416 01:00:26.693557   62747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14897.pem /etc/ssl/certs/51391683.0"
	I0416 01:00:26.706161   62747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148972.pem && ln -fs /usr/share/ca-certificates/148972.pem /etc/ssl/certs/148972.pem"
	I0416 01:00:26.718432   62747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148972.pem
	I0416 01:00:26.722989   62747 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 23:49 /usr/share/ca-certificates/148972.pem
	I0416 01:00:26.723050   62747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148972.pem
	I0416 01:00:26.729311   62747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148972.pem /etc/ssl/certs/3ec20f2e.0"
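The test/ln/openssl sequence above is how the certificates get wired into the system trust store: each PEM is linked under /usr/share/ca-certificates into /etc/ssl/certs, and then symlinked again under its OpenSSL subject-name hash, because OpenSSL resolves trusted CAs by <hash>.0 filenames. The hash in the link name is exactly what the `openssl x509 -hash` calls above print, e.g. for the minikube CA:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints e.g. b5213941
    ls -l /etc/ssl/certs/b5213941.0   # symlink created above -> /etc/ssl/certs/minikubeCA.pem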
	I0416 01:00:26.744138   62747 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 01:00:26.749490   62747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0416 01:00:26.756478   62747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0416 01:00:26.763326   62747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0416 01:00:26.770194   62747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0416 01:00:26.776641   62747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0416 01:00:26.783022   62747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
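The `-checkend 86400` probes above are quick expiry checks: openssl exits 0 if the certificate is still valid 86400 seconds (24 h) from now and non-zero otherwise, which is what lets the restart path reuse the existing control-plane certificates instead of regenerating them. By hand, for one of the certs checked above:

    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "valid for at least 24h" \
      || echo "expiring within 24h (or unreadable)"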
	I0416 01:00:26.789543   62747 kubeadm.go:391] StartCluster: {Name:embed-certs-617092 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29
.3 ClusterName:embed-certs-617092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.225 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 01:00:26.789654   62747 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0416 01:00:26.789717   62747 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 01:00:26.831148   62747 cri.go:89] found id: ""
	I0416 01:00:26.831219   62747 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0416 01:00:26.844372   62747 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0416 01:00:26.844398   62747 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0416 01:00:26.844403   62747 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0416 01:00:26.844454   62747 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0416 01:00:26.858173   62747 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0416 01:00:26.859210   62747 kubeconfig.go:125] found "embed-certs-617092" server: "https://192.168.61.225:8443"
	I0416 01:00:26.861233   62747 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0416 01:00:26.874068   62747 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.225
	I0416 01:00:26.874105   62747 kubeadm.go:1154] stopping kube-system containers ...
	I0416 01:00:26.874119   62747 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0416 01:00:26.874177   62747 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 01:00:26.926456   62747 cri.go:89] found id: ""
	I0416 01:00:26.926537   62747 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0416 01:00:26.945874   62747 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 01:00:26.960207   62747 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 01:00:26.960229   62747 kubeadm.go:156] found existing configuration files:
	
	I0416 01:00:26.960282   62747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 01:00:26.971895   62747 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 01:00:26.971958   62747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 01:00:26.982956   62747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 01:00:26.993935   62747 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 01:00:26.994000   62747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 01:00:27.005216   62747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 01:00:27.015624   62747 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 01:00:27.015680   62747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 01:00:27.026513   62747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 01:00:27.037062   62747 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 01:00:27.037118   62747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 01:00:27.048173   62747 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 01:00:27.061987   62747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:27.190243   62747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:27.545025   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:28.045752   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:28.545833   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:29.045264   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:29.545316   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:30.045594   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:30.545046   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:31.045139   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:31.545251   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:32.045710   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:27.714372   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:27.714822   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:27.714854   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:27.714767   63461 retry.go:31] will retry after 1.810511131s: waiting for machine to come up
	I0416 01:00:29.527497   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:29.528041   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:29.528072   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:29.527983   63461 retry.go:31] will retry after 2.163921338s: waiting for machine to come up
	I0416 01:00:31.694203   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:31.694741   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:31.694769   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:31.694714   63461 retry.go:31] will retry after 2.245150923s: waiting for machine to come up
	I0416 01:00:29.332159   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:31.332218   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:28.252295   62747 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.062013928s)
	I0416 01:00:28.252331   62747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:28.468110   62747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:28.553370   62747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
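Because existing configuration files were found (restartPrimaryControlPlane), the control plane is rebuilt phase by phase rather than with a full `kubeadm init`. The sequence executed above, spelled out compactly (same binary, same config file as in the log):

    # Re-run the individual init phases against the staged config
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      # $phase is intentionally unquoted so "certs all" expands to two arguments
      sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" \
        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done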
	I0416 01:00:28.676185   62747 api_server.go:52] waiting for apiserver process to appear ...
	I0416 01:00:28.676273   62747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:29.176826   62747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:29.676498   62747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:29.702138   62747 api_server.go:72] duration metric: took 1.025950998s to wait for apiserver process to appear ...
	I0416 01:00:29.702170   62747 api_server.go:88] waiting for apiserver healthz status ...
	I0416 01:00:29.702192   62747 api_server.go:253] Checking apiserver healthz at https://192.168.61.225:8443/healthz ...
	I0416 01:00:29.702822   62747 api_server.go:269] stopped: https://192.168.61.225:8443/healthz: Get "https://192.168.61.225:8443/healthz": dial tcp 192.168.61.225:8443: connect: connection refused
	I0416 01:00:30.203298   62747 api_server.go:253] Checking apiserver healthz at https://192.168.61.225:8443/healthz ...
	I0416 01:00:32.951714   62747 api_server.go:279] https://192.168.61.225:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0416 01:00:32.951754   62747 api_server.go:103] status: https://192.168.61.225:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0416 01:00:32.951779   62747 api_server.go:253] Checking apiserver healthz at https://192.168.61.225:8443/healthz ...
	I0416 01:00:33.003631   62747 api_server.go:279] https://192.168.61.225:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0416 01:00:33.003672   62747 api_server.go:103] status: https://192.168.61.225:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0416 01:00:33.202825   62747 api_server.go:253] Checking apiserver healthz at https://192.168.61.225:8443/healthz ...
	I0416 01:00:33.208168   62747 api_server.go:279] https://192.168.61.225:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0416 01:00:33.208201   62747 api_server.go:103] status: https://192.168.61.225:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0416 01:00:33.702532   62747 api_server.go:253] Checking apiserver healthz at https://192.168.61.225:8443/healthz ...
	I0416 01:00:33.712501   62747 api_server.go:279] https://192.168.61.225:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0416 01:00:33.712542   62747 api_server.go:103] status: https://192.168.61.225:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0416 01:00:34.203157   62747 api_server.go:253] Checking apiserver healthz at https://192.168.61.225:8443/healthz ...
	I0416 01:00:34.210567   62747 api_server.go:279] https://192.168.61.225:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0416 01:00:34.210597   62747 api_server.go:103] status: https://192.168.61.225:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0416 01:00:34.702568   62747 api_server.go:253] Checking apiserver healthz at https://192.168.61.225:8443/healthz ...
	I0416 01:00:34.711690   62747 api_server.go:279] https://192.168.61.225:8443/healthz returned 200:
	ok
	I0416 01:00:34.723252   62747 api_server.go:141] control plane version: v1.29.3
	I0416 01:00:34.723279   62747 api_server.go:131] duration metric: took 5.021102658s to wait for apiserver health ...
	I0416 01:00:34.723287   62747 cni.go:84] Creating CNI manager for ""
	I0416 01:00:34.723293   62747 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 01:00:34.724989   62747 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0416 01:00:32.545963   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:33.045020   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:33.545657   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:34.045706   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:34.544972   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:35.045252   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:35.545087   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:36.045080   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:36.545787   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:37.045046   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:33.942412   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:33.942923   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:33.942952   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:33.942870   63461 retry.go:31] will retry after 3.750613392s: waiting for machine to come up
	I0416 01:00:33.829307   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:35.830613   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:34.726400   62747 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0416 01:00:34.746294   62747 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0416 01:00:34.767028   62747 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 01:00:34.778610   62747 system_pods.go:59] 8 kube-system pods found
	I0416 01:00:34.778653   62747 system_pods.go:61] "coredns-76f75df574-dxzhk" [a71b29ec-8602-47d6-825c-a1a54a1758d0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 01:00:34.778664   62747 system_pods.go:61] "etcd-embed-certs-617092" [8966501b-6a06-4e0b-acb6-77df5f53cd3d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0416 01:00:34.778674   62747 system_pods.go:61] "kube-apiserver-embed-certs-617092" [7ad29687-3964-4a5b-8939-bcf3dc71d578] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0416 01:00:34.778685   62747 system_pods.go:61] "kube-controller-manager-embed-certs-617092" [78b21361-f302-43f3-8356-ea15fad4edb3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0416 01:00:34.778695   62747 system_pods.go:61] "kube-proxy-xtdf4" [4e8fe1da-9a02-428e-94f1-595f2e9170e0] Running
	I0416 01:00:34.778703   62747 system_pods.go:61] "kube-scheduler-embed-certs-617092" [c03d87b4-26d3-4bff-8f53-8844260f1ed8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0416 01:00:34.778720   62747 system_pods.go:61] "metrics-server-57f55c9bc5-knnvn" [4607d12d-25db-4637-be17-e2665970c0a4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 01:00:34.778729   62747 system_pods.go:61] "storage-provisioner" [41362b6c-fde7-45fa-b6cf-1d7acef3d4ce] Running
	I0416 01:00:34.778741   62747 system_pods.go:74] duration metric: took 11.690083ms to wait for pod list to return data ...
	I0416 01:00:34.778755   62747 node_conditions.go:102] verifying NodePressure condition ...
	I0416 01:00:34.782283   62747 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 01:00:34.782319   62747 node_conditions.go:123] node cpu capacity is 2
	I0416 01:00:34.782329   62747 node_conditions.go:105] duration metric: took 3.566074ms to run NodePressure ...
	I0416 01:00:34.782344   62747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:35.056194   62747 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0416 01:00:35.068546   62747 kubeadm.go:733] kubelet initialised
	I0416 01:00:35.068571   62747 kubeadm.go:734] duration metric: took 12.345347ms waiting for restarted kubelet to initialise ...
	I0416 01:00:35.068581   62747 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:00:35.075013   62747 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-dxzhk" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:37.081976   62747 pod_ready.go:102] pod "coredns-76f75df574-dxzhk" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:37.697323   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:37.697830   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has current primary IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:37.697857   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Found IP for machine: 192.168.50.216
	I0416 01:00:37.697873   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Reserving static IP address...
	I0416 01:00:37.698323   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Reserved static IP address: 192.168.50.216
	I0416 01:00:37.698345   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for SSH to be available...
	I0416 01:00:37.698372   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-653942", mac: "52:54:00:4b:a2:47", ip: "192.168.50.216"} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:37.698418   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | skip adding static IP to network mk-default-k8s-diff-port-653942 - found existing host DHCP lease matching {name: "default-k8s-diff-port-653942", mac: "52:54:00:4b:a2:47", ip: "192.168.50.216"}
	I0416 01:00:37.698450   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | Getting to WaitForSSH function...
	I0416 01:00:37.700942   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:37.701312   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:37.701346   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:37.701520   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | Using SSH client type: external
	I0416 01:00:37.701567   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | Using SSH private key: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/default-k8s-diff-port-653942/id_rsa (-rw-------)
	I0416 01:00:37.701621   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.216 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18647-7542/.minikube/machines/default-k8s-diff-port-653942/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0416 01:00:37.701676   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | About to run SSH command:
	I0416 01:00:37.701712   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | exit 0
	I0416 01:00:37.829860   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | SSH cmd err, output: <nil>: 
	I0416 01:00:37.830254   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetConfigRaw
	I0416 01:00:37.830931   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetIP
	I0416 01:00:37.833361   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:37.833755   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:37.833788   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:37.834026   61267 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/config.json ...
	I0416 01:00:37.834198   61267 machine.go:94] provisionDockerMachine start ...
	I0416 01:00:37.834214   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 01:00:37.834426   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:00:37.836809   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:37.837221   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:37.837251   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:37.837377   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:00:37.837588   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:37.837737   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:37.837869   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:00:37.838023   61267 main.go:141] libmachine: Using SSH client type: native
	I0416 01:00:37.838208   61267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.216 22 <nil> <nil>}
	I0416 01:00:37.838219   61267 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 01:00:37.950999   61267 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 01:00:37.951031   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetMachineName
	I0416 01:00:37.951271   61267 buildroot.go:166] provisioning hostname "default-k8s-diff-port-653942"
	I0416 01:00:37.951303   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetMachineName
	I0416 01:00:37.951483   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:00:37.954395   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:37.954730   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:37.954755   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:37.954949   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:00:37.955165   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:37.955344   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:37.955549   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:00:37.955756   61267 main.go:141] libmachine: Using SSH client type: native
	I0416 01:00:37.955980   61267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.216 22 <nil> <nil>}
	I0416 01:00:37.956001   61267 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-653942 && echo "default-k8s-diff-port-653942" | sudo tee /etc/hostname
	I0416 01:00:38.085650   61267 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-653942
	
	I0416 01:00:38.085682   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:00:38.088689   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.089031   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:38.089060   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.089297   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:00:38.089474   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:38.089623   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:38.089780   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:00:38.089948   61267 main.go:141] libmachine: Using SSH client type: native
	I0416 01:00:38.090127   61267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.216 22 <nil> <nil>}
	I0416 01:00:38.090146   61267 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-653942' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-653942/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-653942' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 01:00:38.214653   61267 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 01:00:38.214734   61267 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18647-7542/.minikube CaCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18647-7542/.minikube}
	I0416 01:00:38.214760   61267 buildroot.go:174] setting up certificates
	I0416 01:00:38.214773   61267 provision.go:84] configureAuth start
	I0416 01:00:38.214785   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetMachineName
	I0416 01:00:38.215043   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetIP
	I0416 01:00:38.217744   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.218145   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:38.218174   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.218336   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:00:38.220861   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.221187   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:38.221216   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.221343   61267 provision.go:143] copyHostCerts
	I0416 01:00:38.221405   61267 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem, removing ...
	I0416 01:00:38.221426   61267 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem
	I0416 01:00:38.221492   61267 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem (1082 bytes)
	I0416 01:00:38.221638   61267 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem, removing ...
	I0416 01:00:38.221649   61267 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem
	I0416 01:00:38.221685   61267 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem (1123 bytes)
	I0416 01:00:38.221777   61267 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem, removing ...
	I0416 01:00:38.221787   61267 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem
	I0416 01:00:38.221815   61267 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem (1675 bytes)
	I0416 01:00:38.221887   61267 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-653942 san=[127.0.0.1 192.168.50.216 default-k8s-diff-port-653942 localhost minikube]
	I0416 01:00:38.266327   61267 provision.go:177] copyRemoteCerts
	I0416 01:00:38.266390   61267 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 01:00:38.266422   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:00:38.269080   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.269546   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:38.269583   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.269901   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:00:38.270115   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:38.270259   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:00:38.270444   61267 sshutil.go:53] new ssh client: &{IP:192.168.50.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/default-k8s-diff-port-653942/id_rsa Username:docker}
	I0416 01:00:38.352861   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0416 01:00:38.380995   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0416 01:00:38.405746   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0416 01:00:38.431467   61267 provision.go:87] duration metric: took 216.680985ms to configureAuth
	I0416 01:00:38.431502   61267 buildroot.go:189] setting minikube options for container-runtime
	I0416 01:00:38.431674   61267 config.go:182] Loaded profile config "default-k8s-diff-port-653942": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 01:00:38.431740   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:00:38.434444   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.434867   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:38.434909   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.435032   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:00:38.435245   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:38.435380   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:38.435568   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:00:38.435744   61267 main.go:141] libmachine: Using SSH client type: native
	I0416 01:00:38.435948   61267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.216 22 <nil> <nil>}
	I0416 01:00:38.435974   61267 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0416 01:00:38.729392   61267 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0416 01:00:38.729421   61267 machine.go:97] duration metric: took 895.211347ms to provisionDockerMachine
	I0416 01:00:38.729432   61267 start.go:293] postStartSetup for "default-k8s-diff-port-653942" (driver="kvm2")
	I0416 01:00:38.729442   61267 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 01:00:38.729463   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 01:00:38.729802   61267 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 01:00:38.729826   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:00:38.732755   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.733135   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:38.733181   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.733326   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:00:38.733490   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:38.733649   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:00:38.733784   61267 sshutil.go:53] new ssh client: &{IP:192.168.50.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/default-k8s-diff-port-653942/id_rsa Username:docker}
	I0416 01:00:38.819006   61267 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 01:00:38.823781   61267 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 01:00:38.823804   61267 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/addons for local assets ...
	I0416 01:00:38.823870   61267 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/files for local assets ...
	I0416 01:00:38.823967   61267 filesync.go:149] local asset: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem -> 148972.pem in /etc/ssl/certs
	I0416 01:00:38.824077   61267 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 01:00:38.833958   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /etc/ssl/certs/148972.pem (1708 bytes)
	I0416 01:00:38.859934   61267 start.go:296] duration metric: took 130.488205ms for postStartSetup
	I0416 01:00:38.859973   61267 fix.go:56] duration metric: took 18.845458863s for fixHost
	I0416 01:00:38.859992   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:00:38.862557   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.862889   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:38.862927   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.863016   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:00:38.863236   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:38.863426   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:38.863609   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:00:38.863786   61267 main.go:141] libmachine: Using SSH client type: native
	I0416 01:00:38.863951   61267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.216 22 <nil> <nil>}
	I0416 01:00:38.863961   61267 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 01:00:38.970405   61267 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713229238.936521840
	
	I0416 01:00:38.970431   61267 fix.go:216] guest clock: 1713229238.936521840
	I0416 01:00:38.970440   61267 fix.go:229] Guest: 2024-04-16 01:00:38.93652184 +0000 UTC Remote: 2024-04-16 01:00:38.859976379 +0000 UTC m=+356.490123424 (delta=76.545461ms)
	I0416 01:00:38.970489   61267 fix.go:200] guest clock delta is within tolerance: 76.545461ms
	I0416 01:00:38.970496   61267 start.go:83] releasing machines lock for "default-k8s-diff-port-653942", held for 18.956013216s
	I0416 01:00:38.970522   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 01:00:38.970806   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetIP
	I0416 01:00:38.973132   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.973440   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:38.973455   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.973646   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 01:00:38.974142   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 01:00:38.974332   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 01:00:38.974388   61267 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 01:00:38.974432   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:00:38.974532   61267 ssh_runner.go:195] Run: cat /version.json
	I0416 01:00:38.974556   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:00:38.977284   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.977459   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.977624   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:38.977653   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.977746   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:38.977774   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.977800   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:00:38.978002   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:00:38.978017   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:38.978163   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:38.978169   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:00:38.978296   61267 sshutil.go:53] new ssh client: &{IP:192.168.50.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/default-k8s-diff-port-653942/id_rsa Username:docker}
	I0416 01:00:38.978314   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:00:38.978440   61267 sshutil.go:53] new ssh client: &{IP:192.168.50.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/default-k8s-diff-port-653942/id_rsa Username:docker}
	I0416 01:00:39.090827   61267 ssh_runner.go:195] Run: systemctl --version
	I0416 01:00:39.097716   61267 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0416 01:00:39.249324   61267 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 01:00:39.256333   61267 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 01:00:39.256402   61267 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 01:00:39.272367   61267 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 01:00:39.272395   61267 start.go:494] detecting cgroup driver to use...
	I0416 01:00:39.272446   61267 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 01:00:39.291713   61267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 01:00:39.305645   61267 docker.go:217] disabling cri-docker service (if available) ...
	I0416 01:00:39.305708   61267 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 01:00:39.320731   61267 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 01:00:39.336917   61267 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 01:00:39.450840   61267 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 01:00:39.596905   61267 docker.go:233] disabling docker service ...
	I0416 01:00:39.596972   61267 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 01:00:39.612926   61267 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 01:00:39.627583   61267 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 01:00:39.778135   61267 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 01:00:39.900216   61267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0416 01:00:39.914697   61267 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 01:00:39.935875   61267 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0416 01:00:39.935930   61267 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:39.946510   61267 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 01:00:39.946569   61267 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:39.956794   61267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:39.966968   61267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:39.977207   61267 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 01:00:39.988817   61267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:40.001088   61267 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:40.018950   61267 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:40.030395   61267 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 01:00:40.039956   61267 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0416 01:00:40.040013   61267 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0416 01:00:40.053877   61267 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 01:00:40.065292   61267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 01:00:40.221527   61267 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0416 01:00:40.382800   61267 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0416 01:00:40.382880   61267 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0416 01:00:40.387842   61267 start.go:562] Will wait 60s for crictl version
	I0416 01:00:40.387897   61267 ssh_runner.go:195] Run: which crictl
	I0416 01:00:40.393774   61267 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 01:00:40.435784   61267 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0416 01:00:40.435864   61267 ssh_runner.go:195] Run: crio --version
	I0416 01:00:40.468702   61267 ssh_runner.go:195] Run: crio --version
	I0416 01:00:40.501355   61267 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0416 01:00:37.545192   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:38.045346   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:38.545599   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:39.045109   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:39.545360   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:40.045058   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:40.545745   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:41.045943   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:41.545900   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:42.045807   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:40.502716   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetIP
	I0416 01:00:40.505958   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:40.506353   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:40.506384   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:40.506597   61267 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0416 01:00:40.511238   61267 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 01:00:40.525378   61267 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-653942 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-653942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.216 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 01:00:40.525519   61267 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 01:00:40.525586   61267 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 01:00:40.570378   61267 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0416 01:00:40.570451   61267 ssh_runner.go:195] Run: which lz4
	I0416 01:00:40.575413   61267 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0416 01:00:40.580583   61267 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0416 01:00:40.580640   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0416 01:00:42.194745   61267 crio.go:462] duration metric: took 1.619375861s to copy over tarball
	I0416 01:00:42.194821   61267 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0416 01:00:37.830710   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:39.831822   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:42.330821   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:39.086761   62747 pod_ready.go:102] pod "coredns-76f75df574-dxzhk" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:40.082847   62747 pod_ready.go:92] pod "coredns-76f75df574-dxzhk" in "kube-system" namespace has status "Ready":"True"
	I0416 01:00:40.082868   62747 pod_ready.go:81] duration metric: took 5.007825454s for pod "coredns-76f75df574-dxzhk" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:40.082877   62747 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:42.092402   62747 pod_ready.go:92] pod "etcd-embed-certs-617092" in "kube-system" namespace has status "Ready":"True"
	I0416 01:00:42.092425   62747 pod_ready.go:81] duration metric: took 2.009541778s for pod "etcd-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:42.092438   62747 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:42.545278   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:43.045894   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:43.545886   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:44.044964   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:44.544997   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:45.045340   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:45.545257   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:46.045108   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:46.544994   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:47.045987   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
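	The repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" lines above are the wait loop for the apiserver process to appear after kubelet starts the static pods. A minimal Go sketch of the same style of poll, assuming a local shell rather than minikube's ssh_runner; the helper name and the 500ms/2m figures are illustrative, not minikube's actual code:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	// waitForProcess polls `pgrep -xnf <pattern>` until it exits 0 (a match)
	// or the deadline passes, mirroring the retry cadence seen in the log.
	func waitForProcess(pattern string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if err := exec.Command("sudo", "pgrep", "-xnf", pattern).Run(); err == nil {
				return nil // pgrep found at least one matching process
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("no process matching %q within %s", pattern, timeout)
	}

	func main() {
		if err := waitForProcess("kube-apiserver.*minikube.*", 2*time.Minute); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("kube-apiserver process is up")
	}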
	I0416 01:00:44.671272   61267 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.476407392s)
	I0416 01:00:44.671304   61267 crio.go:469] duration metric: took 2.476532286s to extract the tarball
	I0416 01:00:44.671315   61267 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0416 01:00:44.709451   61267 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 01:00:44.754382   61267 crio.go:514] all images are preloaded for cri-o runtime.
	I0416 01:00:44.754412   61267 cache_images.go:84] Images are preloaded, skipping loading
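	The preload path above goes: stat the tarball, scp the cached preloaded-images archive to /preloaded.tar.lz4, extract it with tar -I lz4 into /var, remove it, then re-run `crictl images` to confirm the images are present. A hedged Go sketch of that flow against a local shell, using the paths from the log; this is not minikube's actual implementation:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// extractPreload mirrors the log's steps: verify the tarball exists, unpack it
	// with lz4 into /var (preserving xattrs), then delete it.
	func extractPreload(tarball string) error {
		if _, err := os.Stat(tarball); err != nil {
			return fmt.Errorf("preload tarball not present: %w", err)
		}
		tar := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", tarball)
		tar.Stdout, tar.Stderr = os.Stdout, os.Stderr
		if err := tar.Run(); err != nil {
			return fmt.Errorf("extracting %s: %w", tarball, err)
		}
		return exec.Command("sudo", "rm", "-f", tarball).Run()
	}

	func main() {
		if err := extractPreload("/preloaded.tar.lz4"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}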
	I0416 01:00:44.754424   61267 kubeadm.go:928] updating node { 192.168.50.216 8444 v1.29.3 crio true true} ...
	I0416 01:00:44.754543   61267 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-653942 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.216
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-653942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 01:00:44.754613   61267 ssh_runner.go:195] Run: crio config
	I0416 01:00:44.806896   61267 cni.go:84] Creating CNI manager for ""
	I0416 01:00:44.806918   61267 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 01:00:44.806926   61267 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 01:00:44.806957   61267 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.216 APIServerPort:8444 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-653942 NodeName:default-k8s-diff-port-653942 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.216"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.216 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 01:00:44.807089   61267 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.216
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-653942"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.216
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.216"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0416 01:00:44.807144   61267 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0416 01:00:44.821347   61267 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 01:00:44.821425   61267 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0416 01:00:44.835415   61267 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0416 01:00:44.855797   61267 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 01:00:44.873694   61267 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0416 01:00:44.892535   61267 ssh_runner.go:195] Run: grep 192.168.50.216	control-plane.minikube.internal$ /etc/hosts
	I0416 01:00:44.896538   61267 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.216	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 01:00:44.909516   61267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 01:00:45.024588   61267 ssh_runner.go:195] Run: sudo systemctl start kubelet
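	The "{ grep -v ...; echo ...; } > /tmp/h.$$; sudo cp ..." one-liner above is an idempotent hosts-file update: any stale line for the name is dropped, then the current IP is appended. A small Go sketch of the same idea, assuming the process can rewrite the target file directly (paths and values are only examples copied from the log):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry removes any existing line ending in "<TAB>name" and appends
	// "ip<TAB>name", matching the shell one-liner's behaviour.
	func ensureHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue // stale entry for this name
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "192.168.50.216", "control-plane.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}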
	I0416 01:00:45.055414   61267 certs.go:68] Setting up /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942 for IP: 192.168.50.216
	I0416 01:00:45.055440   61267 certs.go:194] generating shared ca certs ...
	I0416 01:00:45.055460   61267 certs.go:226] acquiring lock for ca certs: {Name:mkcfa1570e683d94647c63485e1bbb8cf0788316 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:00:45.055622   61267 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key
	I0416 01:00:45.055680   61267 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key
	I0416 01:00:45.055695   61267 certs.go:256] generating profile certs ...
	I0416 01:00:45.055815   61267 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/client.key
	I0416 01:00:45.055905   61267 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/apiserver.key.6620f6bf
	I0416 01:00:45.055975   61267 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/proxy-client.key
	I0416 01:00:45.056139   61267 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem (1338 bytes)
	W0416 01:00:45.056185   61267 certs.go:480] ignoring /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897_empty.pem, impossibly tiny 0 bytes
	I0416 01:00:45.056195   61267 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem (1679 bytes)
	I0416 01:00:45.056234   61267 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem (1082 bytes)
	I0416 01:00:45.056268   61267 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem (1123 bytes)
	I0416 01:00:45.056295   61267 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem (1675 bytes)
	I0416 01:00:45.056355   61267 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem (1708 bytes)
	I0416 01:00:45.057033   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 01:00:45.091704   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 01:00:45.154257   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 01:00:45.181077   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0416 01:00:45.222401   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0416 01:00:45.248568   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0416 01:00:45.277927   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 01:00:45.310417   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0416 01:00:45.341109   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 01:00:45.367056   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem --> /usr/share/ca-certificates/14897.pem (1338 bytes)
	I0416 01:00:45.395117   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /usr/share/ca-certificates/148972.pem (1708 bytes)
	I0416 01:00:45.421921   61267 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 01:00:45.440978   61267 ssh_runner.go:195] Run: openssl version
	I0416 01:00:45.447132   61267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148972.pem && ln -fs /usr/share/ca-certificates/148972.pem /etc/ssl/certs/148972.pem"
	I0416 01:00:45.460008   61267 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148972.pem
	I0416 01:00:45.464820   61267 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 23:49 /usr/share/ca-certificates/148972.pem
	I0416 01:00:45.464884   61267 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148972.pem
	I0416 01:00:45.471232   61267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148972.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 01:00:45.482567   61267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 01:00:45.493541   61267 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:00:45.498792   61267 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:00:45.498849   61267 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:00:45.505511   61267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 01:00:45.517533   61267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14897.pem && ln -fs /usr/share/ca-certificates/14897.pem /etc/ssl/certs/14897.pem"
	I0416 01:00:45.529908   61267 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14897.pem
	I0416 01:00:45.535120   61267 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 23:49 /usr/share/ca-certificates/14897.pem
	I0416 01:00:45.535181   61267 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14897.pem
	I0416 01:00:45.541232   61267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14897.pem /etc/ssl/certs/51391683.0"
	I0416 01:00:45.552946   61267 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 01:00:45.559947   61267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0416 01:00:45.567567   61267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0416 01:00:45.575204   61267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0416 01:00:45.582057   61267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0416 01:00:45.588418   61267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0416 01:00:45.595517   61267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
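	Each "openssl x509 -noout -in <cert> -checkend 86400" call above asks whether the certificate will still be valid 24 hours from now; a non-zero exit would trigger regeneration. The same check expressed in Go, as a sketch (the file path is just one of the certs from the log):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires before now+d,
	// the same question `openssl x509 -checkend` answers.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return cert.NotAfter.Before(time.Now().Add(d)), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}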
	I0416 01:00:45.602108   61267 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-653942 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.29.3 ClusterName:default-k8s-diff-port-653942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.216 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 01:00:45.602213   61267 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0416 01:00:45.602256   61267 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 01:00:45.639538   61267 cri.go:89] found id: ""
	I0416 01:00:45.639621   61267 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0416 01:00:45.651216   61267 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0416 01:00:45.651245   61267 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0416 01:00:45.651252   61267 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0416 01:00:45.651307   61267 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0416 01:00:45.662522   61267 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0416 01:00:45.663697   61267 kubeconfig.go:125] found "default-k8s-diff-port-653942" server: "https://192.168.50.216:8444"
	I0416 01:00:45.666034   61267 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0416 01:00:45.675864   61267 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.216
	I0416 01:00:45.675900   61267 kubeadm.go:1154] stopping kube-system containers ...
	I0416 01:00:45.675927   61267 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0416 01:00:45.675992   61267 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 01:00:45.718679   61267 cri.go:89] found id: ""
	I0416 01:00:45.718744   61267 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0416 01:00:45.737326   61267 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 01:00:45.748122   61267 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 01:00:45.748146   61267 kubeadm.go:156] found existing configuration files:
	
	I0416 01:00:45.748200   61267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0416 01:00:45.758556   61267 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 01:00:45.758618   61267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 01:00:45.769601   61267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0416 01:00:45.779361   61267 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 01:00:45.779424   61267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 01:00:45.789283   61267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0416 01:00:45.798712   61267 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 01:00:45.798805   61267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 01:00:45.808489   61267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0416 01:00:45.817400   61267 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 01:00:45.817469   61267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 01:00:45.827902   61267 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 01:00:45.838031   61267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:45.962948   61267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:46.862340   61267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:47.092144   61267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:47.170078   61267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
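	The five commands above are the selective restart path: rather than a full `kubeadm init`, only the certs, kubeconfig, kubelet-start, control-plane and etcd phases are re-run against the existing config. A sketch of issuing the same phase sequence from Go; the binary and config paths are copied from the log, while the loop itself is illustrative rather than minikube's actual code:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		kubeadm := "/var/lib/minikube/binaries/v1.29.3/kubeadm"
		cfg := "/var/tmp/minikube/kubeadm.yaml"
		// Phases re-run on a restart, in the order the log shows.
		phases := [][]string{
			{"init", "phase", "certs", "all"},
			{"init", "phase", "kubeconfig", "all"},
			{"init", "phase", "kubelet-start"},
			{"init", "phase", "control-plane", "all"},
			{"init", "phase", "etcd", "local"},
		}
		for _, p := range phases {
			args := append(append([]string{kubeadm}, p...), "--config", cfg)
			cmd := exec.Command("sudo", args...)
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				fmt.Fprintf(os.Stderr, "kubeadm %v failed: %v\n", p, err)
				os.Exit(1)
			}
		}
	}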
	I0416 01:00:47.284634   61267 api_server.go:52] waiting for apiserver process to appear ...
	I0416 01:00:47.284719   61267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:44.830534   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:47.474148   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:44.100441   62747 pod_ready.go:102] pod "kube-apiserver-embed-certs-617092" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:47.472666   62747 pod_ready.go:102] pod "kube-apiserver-embed-certs-617092" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:47.599694   62747 pod_ready.go:92] pod "kube-apiserver-embed-certs-617092" in "kube-system" namespace has status "Ready":"True"
	I0416 01:00:47.599722   62747 pod_ready.go:81] duration metric: took 5.507276982s for pod "kube-apiserver-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:47.599734   62747 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:47.604479   62747 pod_ready.go:92] pod "kube-controller-manager-embed-certs-617092" in "kube-system" namespace has status "Ready":"True"
	I0416 01:00:47.604496   62747 pod_ready.go:81] duration metric: took 4.755735ms for pod "kube-controller-manager-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:47.604504   62747 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-xtdf4" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:47.608936   62747 pod_ready.go:92] pod "kube-proxy-xtdf4" in "kube-system" namespace has status "Ready":"True"
	I0416 01:00:47.608951   62747 pod_ready.go:81] duration metric: took 4.441482ms for pod "kube-proxy-xtdf4" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:47.608959   62747 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:47.613108   62747 pod_ready.go:92] pod "kube-scheduler-embed-certs-617092" in "kube-system" namespace has status "Ready":"True"
	I0416 01:00:47.613123   62747 pod_ready.go:81] duration metric: took 4.157722ms for pod "kube-scheduler-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:47.613130   62747 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:47.545567   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:48.045898   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:48.545631   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:49.045678   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:49.545274   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:50.045281   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:50.545926   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:51.045076   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:51.545303   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:52.045271   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:47.785698   61267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:48.284828   61267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:48.315894   61267 api_server.go:72] duration metric: took 1.031258915s to wait for apiserver process to appear ...
	I0416 01:00:48.315925   61267 api_server.go:88] waiting for apiserver healthz status ...
	I0416 01:00:48.315950   61267 api_server.go:253] Checking apiserver healthz at https://192.168.50.216:8444/healthz ...
	I0416 01:00:51.781922   61267 api_server.go:279] https://192.168.50.216:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0416 01:00:51.781957   61267 api_server.go:103] status: https://192.168.50.216:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0416 01:00:51.781976   61267 api_server.go:253] Checking apiserver healthz at https://192.168.50.216:8444/healthz ...
	I0416 01:00:51.830460   61267 api_server.go:279] https://192.168.50.216:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0416 01:00:51.830491   61267 api_server.go:103] status: https://192.168.50.216:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0416 01:00:51.830505   61267 api_server.go:253] Checking apiserver healthz at https://192.168.50.216:8444/healthz ...
	I0416 01:00:51.858205   61267 api_server.go:279] https://192.168.50.216:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0416 01:00:51.858240   61267 api_server.go:103] status: https://192.168.50.216:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0416 01:00:52.316376   61267 api_server.go:253] Checking apiserver healthz at https://192.168.50.216:8444/healthz ...
	I0416 01:00:52.332667   61267 api_server.go:279] https://192.168.50.216:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0416 01:00:52.332700   61267 api_server.go:103] status: https://192.168.50.216:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0416 01:00:49.829236   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:52.329805   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:49.620626   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:51.620730   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:52.816565   61267 api_server.go:253] Checking apiserver healthz at https://192.168.50.216:8444/healthz ...
	I0416 01:00:52.827158   61267 api_server.go:279] https://192.168.50.216:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0416 01:00:52.827191   61267 api_server.go:103] status: https://192.168.50.216:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0416 01:00:53.316864   61267 api_server.go:253] Checking apiserver healthz at https://192.168.50.216:8444/healthz ...
	I0416 01:00:53.321112   61267 api_server.go:279] https://192.168.50.216:8444/healthz returned 200:
	ok
	I0416 01:00:53.329289   61267 api_server.go:141] control plane version: v1.29.3
	I0416 01:00:53.329320   61267 api_server.go:131] duration metric: took 5.013387579s to wait for apiserver health ...
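	The healthz sequence above is the usual progression on a restart: 403 while anonymous access to /healthz is still forbidden, 500 while the post-start hooks finish, then 200. A Go sketch of polling that endpoint until it returns 200 or times out; unlike minikube, which trusts the cluster CA, this skips TLS verification for brevity, and the URL and timings are examples taken from the log:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"os"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.50.216:8444/healthz", 4*time.Minute); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("apiserver healthy")
	}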
	I0416 01:00:53.329331   61267 cni.go:84] Creating CNI manager for ""
	I0416 01:00:53.329340   61267 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 01:00:53.331125   61267 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0416 01:00:52.545407   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:53.044961   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:53.545290   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:54.044994   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:54.545292   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:55.045285   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:55.545909   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:56.045029   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:56.545343   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:57.044988   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:53.332626   61267 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0416 01:00:53.366364   61267 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0416 01:00:53.401881   61267 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 01:00:53.413478   61267 system_pods.go:59] 8 kube-system pods found
	I0416 01:00:53.413512   61267 system_pods.go:61] "coredns-76f75df574-cvlpq" [c200d470-26dd-40ea-a79b-29d9104122bb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 01:00:53.413527   61267 system_pods.go:61] "etcd-default-k8s-diff-port-653942" [24e85fc2-fb57-4ef6-9817-846207109e61] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0416 01:00:53.413537   61267 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-653942" [bd473e94-72a6-4391-b787-49e16e8a213f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0416 01:00:53.413547   61267 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-653942" [31ed7183-a12b-422c-9e67-bba91147347a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0416 01:00:53.413555   61267 system_pods.go:61] "kube-proxy-6q9k7" [ba6d9cf9-37a5-4e01-9489-ce7395fd2a38] Running
	I0416 01:00:53.413563   61267 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-653942" [4b481275-4ded-4251-963f-910954f10d15] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0416 01:00:53.413579   61267 system_pods.go:61] "metrics-server-57f55c9bc5-9cnv2" [24905ded-5bf8-4b34-8069-2e65c5ad8f8d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 01:00:53.413592   61267 system_pods.go:61] "storage-provisioner" [16ba28d0-2031-4c21-9c22-1b9289517449] Running
	I0416 01:00:53.413601   61267 system_pods.go:74] duration metric: took 11.695334ms to wait for pod list to return data ...
	I0416 01:00:53.413613   61267 node_conditions.go:102] verifying NodePressure condition ...
	I0416 01:00:53.417579   61267 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 01:00:53.417609   61267 node_conditions.go:123] node cpu capacity is 2
	I0416 01:00:53.417623   61267 node_conditions.go:105] duration metric: took 4.002735ms to run NodePressure ...
	I0416 01:00:53.417642   61267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:53.688389   61267 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0416 01:00:53.692755   61267 kubeadm.go:733] kubelet initialised
	I0416 01:00:53.692777   61267 kubeadm.go:734] duration metric: took 4.359298ms waiting for restarted kubelet to initialise ...
	I0416 01:00:53.692784   61267 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:00:53.698521   61267 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-cvlpq" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:53.704496   61267 pod_ready.go:97] node "default-k8s-diff-port-653942" hosting pod "coredns-76f75df574-cvlpq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653942" has status "Ready":"False"
	I0416 01:00:53.704532   61267 pod_ready.go:81] duration metric: took 5.98382ms for pod "coredns-76f75df574-cvlpq" in "kube-system" namespace to be "Ready" ...
	E0416 01:00:53.704543   61267 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-653942" hosting pod "coredns-76f75df574-cvlpq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653942" has status "Ready":"False"
	I0416 01:00:53.704550   61267 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:53.713110   61267 pod_ready.go:97] node "default-k8s-diff-port-653942" hosting pod "etcd-default-k8s-diff-port-653942" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653942" has status "Ready":"False"
	I0416 01:00:53.713144   61267 pod_ready.go:81] duration metric: took 8.58568ms for pod "etcd-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	E0416 01:00:53.713188   61267 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-653942" hosting pod "etcd-default-k8s-diff-port-653942" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653942" has status "Ready":"False"
	I0416 01:00:53.713201   61267 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:53.718190   61267 pod_ready.go:97] node "default-k8s-diff-port-653942" hosting pod "kube-apiserver-default-k8s-diff-port-653942" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653942" has status "Ready":"False"
	I0416 01:00:53.718210   61267 pod_ready.go:81] duration metric: took 4.997527ms for pod "kube-apiserver-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	E0416 01:00:53.718219   61267 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-653942" hosting pod "kube-apiserver-default-k8s-diff-port-653942" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653942" has status "Ready":"False"
	I0416 01:00:53.718224   61267 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:53.805697   61267 pod_ready.go:97] node "default-k8s-diff-port-653942" hosting pod "kube-controller-manager-default-k8s-diff-port-653942" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653942" has status "Ready":"False"
	I0416 01:00:53.805727   61267 pod_ready.go:81] duration metric: took 87.493805ms for pod "kube-controller-manager-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	E0416 01:00:53.805738   61267 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-653942" hosting pod "kube-controller-manager-default-k8s-diff-port-653942" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653942" has status "Ready":"False"
	I0416 01:00:53.805743   61267 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6q9k7" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:54.205884   61267 pod_ready.go:92] pod "kube-proxy-6q9k7" in "kube-system" namespace has status "Ready":"True"
	I0416 01:00:54.205911   61267 pod_ready.go:81] duration metric: took 400.161115ms for pod "kube-proxy-6q9k7" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:54.205921   61267 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:56.213276   61267 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"False"
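	The pod_ready lines track per-pod waits: each system-critical pod is polled until its Ready condition is True, with a 4m budget per pod. A sketch of that check using the standard k8s.io/client-go packages, assuming a kubeconfig is available via KUBECONFIG; the pod name is one example from the log, and this is not the test suite's own helper:

	package main

	import (
		"context"
		"fmt"
		"os"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
				"kube-scheduler-default-k8s-diff-port-653942", metav1.GetOptions{})
			if err == nil && podReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Fprintln(os.Stderr, "pod did not become Ready in time")
		os.Exit(1)
	}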
	I0416 01:00:54.829391   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:57.330218   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:54.119995   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:56.121220   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:57.545333   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:58.045305   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:58.545871   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:59.045432   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:59.545000   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:00.045001   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:00.545855   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:01.045812   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:01.545477   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:02.045635   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:58.215064   61267 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:00.215192   61267 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:59.330599   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:01.831017   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:58.620594   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:01.120516   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:02.545690   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:03.045754   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:03.544965   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:04.045062   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:04.545196   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:05.045986   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:05.545246   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:06.045853   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:06.545863   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:07.045209   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:02.712971   61267 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:04.713437   61267 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:07.212886   61267 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:04.328673   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:06.329726   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:03.124343   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:05.619912   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:07.622044   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:07.544952   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:08.045290   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:08.545296   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:09.045795   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:09.545932   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:10.045124   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:10.045209   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:10.087200   62139 cri.go:89] found id: ""
	I0416 01:01:10.087229   62139 logs.go:276] 0 containers: []
	W0416 01:01:10.087237   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:10.087243   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:10.087300   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:10.126194   62139 cri.go:89] found id: ""
	I0416 01:01:10.126218   62139 logs.go:276] 0 containers: []
	W0416 01:01:10.126225   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:10.126230   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:10.126275   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:10.165238   62139 cri.go:89] found id: ""
	I0416 01:01:10.165271   62139 logs.go:276] 0 containers: []
	W0416 01:01:10.165282   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:10.165290   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:10.165357   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:10.202896   62139 cri.go:89] found id: ""
	I0416 01:01:10.202934   62139 logs.go:276] 0 containers: []
	W0416 01:01:10.202945   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:10.202952   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:10.203015   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:10.243576   62139 cri.go:89] found id: ""
	I0416 01:01:10.243605   62139 logs.go:276] 0 containers: []
	W0416 01:01:10.243613   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:10.243619   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:10.243667   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:10.278637   62139 cri.go:89] found id: ""
	I0416 01:01:10.278661   62139 logs.go:276] 0 containers: []
	W0416 01:01:10.278669   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:10.278674   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:10.278726   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:10.316811   62139 cri.go:89] found id: ""
	I0416 01:01:10.316844   62139 logs.go:276] 0 containers: []
	W0416 01:01:10.316852   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:10.316857   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:10.316914   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:10.359934   62139 cri.go:89] found id: ""
	I0416 01:01:10.359960   62139 logs.go:276] 0 containers: []
	W0416 01:01:10.359967   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:10.359975   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:10.359987   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:10.413082   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:10.413119   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:10.428605   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:10.428632   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:10.552536   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:10.552561   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:10.552578   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:10.615054   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:10.615091   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:08.213557   61267 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"True"
	I0416 01:01:08.213584   61267 pod_ready.go:81] duration metric: took 14.007657025s for pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:01:08.213594   61267 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace to be "Ready" ...
	I0416 01:01:10.224984   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:08.831515   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:11.330529   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:10.122213   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:12.621939   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:13.160749   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:13.178449   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:13.178505   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:13.224192   62139 cri.go:89] found id: ""
	I0416 01:01:13.224215   62139 logs.go:276] 0 containers: []
	W0416 01:01:13.224222   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:13.224228   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:13.224287   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:13.261441   62139 cri.go:89] found id: ""
	I0416 01:01:13.261469   62139 logs.go:276] 0 containers: []
	W0416 01:01:13.261476   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:13.261481   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:13.261545   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:13.296602   62139 cri.go:89] found id: ""
	I0416 01:01:13.296636   62139 logs.go:276] 0 containers: []
	W0416 01:01:13.296647   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:13.296654   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:13.296720   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:13.333944   62139 cri.go:89] found id: ""
	I0416 01:01:13.333968   62139 logs.go:276] 0 containers: []
	W0416 01:01:13.333977   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:13.333984   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:13.334049   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:13.372919   62139 cri.go:89] found id: ""
	I0416 01:01:13.372944   62139 logs.go:276] 0 containers: []
	W0416 01:01:13.372957   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:13.372965   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:13.373022   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:13.413257   62139 cri.go:89] found id: ""
	I0416 01:01:13.413287   62139 logs.go:276] 0 containers: []
	W0416 01:01:13.413299   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:13.413306   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:13.413373   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:13.451705   62139 cri.go:89] found id: ""
	I0416 01:01:13.451737   62139 logs.go:276] 0 containers: []
	W0416 01:01:13.451748   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:13.451755   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:13.451836   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:13.492549   62139 cri.go:89] found id: ""
	I0416 01:01:13.492576   62139 logs.go:276] 0 containers: []
	W0416 01:01:13.492586   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:13.492597   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:13.492613   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:13.547267   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:13.547303   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:13.568975   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:13.569002   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:13.674444   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:13.674469   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:13.674482   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:13.745111   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:13.745145   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:16.286955   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:16.301151   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:16.301257   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:16.337516   62139 cri.go:89] found id: ""
	I0416 01:01:16.337544   62139 logs.go:276] 0 containers: []
	W0416 01:01:16.337554   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:16.337561   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:16.337623   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:16.372674   62139 cri.go:89] found id: ""
	I0416 01:01:16.372702   62139 logs.go:276] 0 containers: []
	W0416 01:01:16.372712   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:16.372720   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:16.372783   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:16.411181   62139 cri.go:89] found id: ""
	I0416 01:01:16.411208   62139 logs.go:276] 0 containers: []
	W0416 01:01:16.411224   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:16.411230   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:16.411283   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:16.449063   62139 cri.go:89] found id: ""
	I0416 01:01:16.449102   62139 logs.go:276] 0 containers: []
	W0416 01:01:16.449109   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:16.449114   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:16.449183   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:16.491877   62139 cri.go:89] found id: ""
	I0416 01:01:16.491909   62139 logs.go:276] 0 containers: []
	W0416 01:01:16.491918   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:16.491924   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:16.491981   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:16.532522   62139 cri.go:89] found id: ""
	I0416 01:01:16.532553   62139 logs.go:276] 0 containers: []
	W0416 01:01:16.532564   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:16.532572   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:16.532633   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:16.572194   62139 cri.go:89] found id: ""
	I0416 01:01:16.572222   62139 logs.go:276] 0 containers: []
	W0416 01:01:16.572233   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:16.572240   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:16.572302   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:16.614671   62139 cri.go:89] found id: ""
	I0416 01:01:16.614697   62139 logs.go:276] 0 containers: []
	W0416 01:01:16.614704   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:16.614712   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:16.614726   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:16.632146   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:16.632179   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:16.707597   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:16.707621   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:16.707633   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:16.783604   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:16.783640   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:16.828937   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:16.828977   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:12.721088   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:15.220256   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:17.222263   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:13.830983   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:16.329120   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:15.119386   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:17.120038   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:19.385008   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:19.400949   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:19.401035   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:19.463792   62139 cri.go:89] found id: ""
	I0416 01:01:19.463825   62139 logs.go:276] 0 containers: []
	W0416 01:01:19.463836   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:19.463843   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:19.463910   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:19.523289   62139 cri.go:89] found id: ""
	I0416 01:01:19.523322   62139 logs.go:276] 0 containers: []
	W0416 01:01:19.523332   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:19.523340   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:19.523392   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:19.558891   62139 cri.go:89] found id: ""
	I0416 01:01:19.558928   62139 logs.go:276] 0 containers: []
	W0416 01:01:19.558939   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:19.558946   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:19.559009   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:19.597876   62139 cri.go:89] found id: ""
	I0416 01:01:19.597905   62139 logs.go:276] 0 containers: []
	W0416 01:01:19.597917   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:19.597925   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:19.597980   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:19.637536   62139 cri.go:89] found id: ""
	I0416 01:01:19.637563   62139 logs.go:276] 0 containers: []
	W0416 01:01:19.637571   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:19.637576   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:19.637623   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:19.674414   62139 cri.go:89] found id: ""
	I0416 01:01:19.674447   62139 logs.go:276] 0 containers: []
	W0416 01:01:19.674458   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:19.674465   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:19.674525   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:19.709717   62139 cri.go:89] found id: ""
	I0416 01:01:19.709751   62139 logs.go:276] 0 containers: []
	W0416 01:01:19.709761   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:19.709769   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:19.709837   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:19.747458   62139 cri.go:89] found id: ""
	I0416 01:01:19.747482   62139 logs.go:276] 0 containers: []
	W0416 01:01:19.747489   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:19.747505   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:19.747523   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:19.834811   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:19.834846   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:19.876398   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:19.876428   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:19.931596   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:19.931632   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:19.947074   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:19.947103   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:20.023434   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:19.720883   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:21.721969   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:18.829276   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:20.829405   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:19.120254   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:21.120520   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:22.524036   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:22.539399   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:22.539488   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:22.574696   62139 cri.go:89] found id: ""
	I0416 01:01:22.574723   62139 logs.go:276] 0 containers: []
	W0416 01:01:22.574733   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:22.574741   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:22.574805   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:22.617474   62139 cri.go:89] found id: ""
	I0416 01:01:22.617503   62139 logs.go:276] 0 containers: []
	W0416 01:01:22.617514   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:22.617521   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:22.617579   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:22.657744   62139 cri.go:89] found id: ""
	I0416 01:01:22.657773   62139 logs.go:276] 0 containers: []
	W0416 01:01:22.657781   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:22.657786   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:22.657842   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:22.695513   62139 cri.go:89] found id: ""
	I0416 01:01:22.695544   62139 logs.go:276] 0 containers: []
	W0416 01:01:22.695552   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:22.695557   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:22.695606   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:22.732943   62139 cri.go:89] found id: ""
	I0416 01:01:22.732973   62139 logs.go:276] 0 containers: []
	W0416 01:01:22.732983   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:22.732990   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:22.733051   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:22.768735   62139 cri.go:89] found id: ""
	I0416 01:01:22.768767   62139 logs.go:276] 0 containers: []
	W0416 01:01:22.768775   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:22.768782   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:22.768842   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:22.804330   62139 cri.go:89] found id: ""
	I0416 01:01:22.804352   62139 logs.go:276] 0 containers: []
	W0416 01:01:22.804361   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:22.804367   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:22.804425   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:22.842165   62139 cri.go:89] found id: ""
	I0416 01:01:22.842192   62139 logs.go:276] 0 containers: []
	W0416 01:01:22.842199   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:22.842207   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:22.842219   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:22.921859   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:22.921880   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:22.921893   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:23.003432   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:23.003468   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:23.045446   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:23.045476   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:23.097327   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:23.097358   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:25.612297   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:25.627489   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:25.627565   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:25.664040   62139 cri.go:89] found id: ""
	I0416 01:01:25.664072   62139 logs.go:276] 0 containers: []
	W0416 01:01:25.664083   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:25.664091   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:25.664149   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:25.701004   62139 cri.go:89] found id: ""
	I0416 01:01:25.701029   62139 logs.go:276] 0 containers: []
	W0416 01:01:25.701036   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:25.701042   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:25.701087   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:25.740108   62139 cri.go:89] found id: ""
	I0416 01:01:25.740136   62139 logs.go:276] 0 containers: []
	W0416 01:01:25.740144   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:25.740150   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:25.740194   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:25.778413   62139 cri.go:89] found id: ""
	I0416 01:01:25.778447   62139 logs.go:276] 0 containers: []
	W0416 01:01:25.778458   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:25.778465   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:25.778530   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:25.815188   62139 cri.go:89] found id: ""
	I0416 01:01:25.815215   62139 logs.go:276] 0 containers: []
	W0416 01:01:25.815223   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:25.815230   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:25.815277   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:25.856370   62139 cri.go:89] found id: ""
	I0416 01:01:25.856402   62139 logs.go:276] 0 containers: []
	W0416 01:01:25.856410   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:25.856416   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:25.856476   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:25.895363   62139 cri.go:89] found id: ""
	I0416 01:01:25.895388   62139 logs.go:276] 0 containers: []
	W0416 01:01:25.895396   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:25.895402   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:25.895455   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:25.931854   62139 cri.go:89] found id: ""
	I0416 01:01:25.931881   62139 logs.go:276] 0 containers: []
	W0416 01:01:25.931889   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:25.931897   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:25.931923   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:26.008395   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:26.008419   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:26.008436   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:26.087946   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:26.087983   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:26.134693   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:26.134725   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:26.189618   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:26.189652   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:24.220798   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:26.221193   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:22.833917   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:25.331147   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:27.331702   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:23.620819   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:25.621119   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:28.705010   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:28.719575   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:28.719644   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:28.759011   62139 cri.go:89] found id: ""
	I0416 01:01:28.759037   62139 logs.go:276] 0 containers: []
	W0416 01:01:28.759044   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:28.759050   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:28.759112   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:28.794640   62139 cri.go:89] found id: ""
	I0416 01:01:28.794675   62139 logs.go:276] 0 containers: []
	W0416 01:01:28.794687   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:28.794695   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:28.794807   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:28.835634   62139 cri.go:89] found id: ""
	I0416 01:01:28.835663   62139 logs.go:276] 0 containers: []
	W0416 01:01:28.835674   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:28.835681   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:28.835747   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:28.875384   62139 cri.go:89] found id: ""
	I0416 01:01:28.875408   62139 logs.go:276] 0 containers: []
	W0416 01:01:28.875426   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:28.875433   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:28.875484   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:28.921202   62139 cri.go:89] found id: ""
	I0416 01:01:28.921234   62139 logs.go:276] 0 containers: []
	W0416 01:01:28.921244   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:28.921252   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:28.921314   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:28.958791   62139 cri.go:89] found id: ""
	I0416 01:01:28.958820   62139 logs.go:276] 0 containers: []
	W0416 01:01:28.958828   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:28.958834   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:28.958923   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:28.996136   62139 cri.go:89] found id: ""
	I0416 01:01:28.996168   62139 logs.go:276] 0 containers: []
	W0416 01:01:28.996179   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:28.996185   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:28.996259   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:29.033912   62139 cri.go:89] found id: ""
	I0416 01:01:29.033939   62139 logs.go:276] 0 containers: []
	W0416 01:01:29.033946   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:29.033954   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:29.033969   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:29.114162   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:29.114209   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:29.153934   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:29.153965   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:29.207548   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:29.207584   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:29.222158   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:29.222184   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:29.297414   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:31.798026   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:31.812740   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:31.812815   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:31.855058   62139 cri.go:89] found id: ""
	I0416 01:01:31.855087   62139 logs.go:276] 0 containers: []
	W0416 01:01:31.855098   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:31.855105   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:31.855172   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:31.897128   62139 cri.go:89] found id: ""
	I0416 01:01:31.897170   62139 logs.go:276] 0 containers: []
	W0416 01:01:31.897192   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:31.897200   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:31.897259   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:31.934497   62139 cri.go:89] found id: ""
	I0416 01:01:31.934520   62139 logs.go:276] 0 containers: []
	W0416 01:01:31.934532   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:31.934541   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:31.934588   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:31.974020   62139 cri.go:89] found id: ""
	I0416 01:01:31.974051   62139 logs.go:276] 0 containers: []
	W0416 01:01:31.974062   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:31.974093   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:31.974163   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:32.015433   62139 cri.go:89] found id: ""
	I0416 01:01:32.015460   62139 logs.go:276] 0 containers: []
	W0416 01:01:32.015471   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:32.015477   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:32.015540   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:32.058286   62139 cri.go:89] found id: ""
	I0416 01:01:32.058336   62139 logs.go:276] 0 containers: []
	W0416 01:01:32.058345   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:32.058351   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:32.058408   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:28.720596   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:30.720732   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:29.828996   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:31.830765   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:28.121038   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:30.619604   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:32.620210   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:32.100331   62139 cri.go:89] found id: ""
	I0416 01:01:32.102041   62139 logs.go:276] 0 containers: []
	W0416 01:01:32.102054   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:32.102061   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:32.102115   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:32.141420   62139 cri.go:89] found id: ""
	I0416 01:01:32.141446   62139 logs.go:276] 0 containers: []
	W0416 01:01:32.141454   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:32.141462   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:32.141473   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:32.195323   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:32.195364   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:32.210180   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:32.210206   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:32.282548   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:32.282570   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:32.282585   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:32.360627   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:32.360663   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:34.901239   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:34.917097   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:34.917205   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:34.959297   62139 cri.go:89] found id: ""
	I0416 01:01:34.959327   62139 logs.go:276] 0 containers: []
	W0416 01:01:34.959337   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:34.959344   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:34.959422   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:35.000927   62139 cri.go:89] found id: ""
	I0416 01:01:35.000974   62139 logs.go:276] 0 containers: []
	W0416 01:01:35.000984   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:35.001000   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:35.001064   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:35.038049   62139 cri.go:89] found id: ""
	I0416 01:01:35.038073   62139 logs.go:276] 0 containers: []
	W0416 01:01:35.038082   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:35.038090   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:35.038143   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:35.075396   62139 cri.go:89] found id: ""
	I0416 01:01:35.075467   62139 logs.go:276] 0 containers: []
	W0416 01:01:35.075481   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:35.075490   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:35.075591   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:35.114297   62139 cri.go:89] found id: ""
	I0416 01:01:35.114325   62139 logs.go:276] 0 containers: []
	W0416 01:01:35.114335   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:35.114343   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:35.114405   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:35.152075   62139 cri.go:89] found id: ""
	I0416 01:01:35.152099   62139 logs.go:276] 0 containers: []
	W0416 01:01:35.152106   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:35.152112   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:35.152161   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:35.187945   62139 cri.go:89] found id: ""
	I0416 01:01:35.187974   62139 logs.go:276] 0 containers: []
	W0416 01:01:35.187984   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:35.187991   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:35.188057   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:35.225225   62139 cri.go:89] found id: ""
	I0416 01:01:35.225253   62139 logs.go:276] 0 containers: []
	W0416 01:01:35.225262   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:35.225272   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:35.225287   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:35.279584   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:35.279628   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:35.293416   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:35.293456   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:35.370122   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:35.370147   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:35.370159   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:35.451482   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:35.451517   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:32.723226   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:35.221390   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:34.329009   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:36.329761   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:34.620492   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:36.620527   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:37.994358   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:38.008209   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:38.008277   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:38.047905   62139 cri.go:89] found id: ""
	I0416 01:01:38.047943   62139 logs.go:276] 0 containers: []
	W0416 01:01:38.047955   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:38.047962   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:38.048016   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:38.085749   62139 cri.go:89] found id: ""
	I0416 01:01:38.085780   62139 logs.go:276] 0 containers: []
	W0416 01:01:38.085790   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:38.085797   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:38.085864   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:38.122396   62139 cri.go:89] found id: ""
	I0416 01:01:38.122419   62139 logs.go:276] 0 containers: []
	W0416 01:01:38.122427   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:38.122432   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:38.122479   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:38.159284   62139 cri.go:89] found id: ""
	I0416 01:01:38.159313   62139 logs.go:276] 0 containers: []
	W0416 01:01:38.159322   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:38.159329   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:38.159390   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:38.193245   62139 cri.go:89] found id: ""
	I0416 01:01:38.193280   62139 logs.go:276] 0 containers: []
	W0416 01:01:38.193291   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:38.193298   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:38.193362   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:38.229147   62139 cri.go:89] found id: ""
	I0416 01:01:38.229179   62139 logs.go:276] 0 containers: []
	W0416 01:01:38.229188   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:38.229194   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:38.229251   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:38.267285   62139 cri.go:89] found id: ""
	I0416 01:01:38.267309   62139 logs.go:276] 0 containers: []
	W0416 01:01:38.267317   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:38.267321   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:38.267389   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:38.305181   62139 cri.go:89] found id: ""
	I0416 01:01:38.305207   62139 logs.go:276] 0 containers: []
	W0416 01:01:38.305215   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:38.305222   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:38.305237   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:38.321714   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:38.321742   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:38.398352   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:38.398372   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:38.398382   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:38.474095   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:38.474129   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:38.520540   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:38.520581   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:41.072083   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:41.086767   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:41.086860   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:41.125119   62139 cri.go:89] found id: ""
	I0416 01:01:41.125149   62139 logs.go:276] 0 containers: []
	W0416 01:01:41.125175   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:41.125182   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:41.125253   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:41.159885   62139 cri.go:89] found id: ""
	I0416 01:01:41.159915   62139 logs.go:276] 0 containers: []
	W0416 01:01:41.159925   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:41.159931   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:41.160012   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:41.196334   62139 cri.go:89] found id: ""
	I0416 01:01:41.196366   62139 logs.go:276] 0 containers: []
	W0416 01:01:41.196377   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:41.196385   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:41.196447   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:41.234254   62139 cri.go:89] found id: ""
	I0416 01:01:41.234282   62139 logs.go:276] 0 containers: []
	W0416 01:01:41.234300   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:41.234319   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:41.234413   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:41.271499   62139 cri.go:89] found id: ""
	I0416 01:01:41.271523   62139 logs.go:276] 0 containers: []
	W0416 01:01:41.271531   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:41.271536   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:41.271604   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:41.311064   62139 cri.go:89] found id: ""
	I0416 01:01:41.311096   62139 logs.go:276] 0 containers: []
	W0416 01:01:41.311107   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:41.311114   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:41.311179   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:41.349012   62139 cri.go:89] found id: ""
	I0416 01:01:41.349043   62139 logs.go:276] 0 containers: []
	W0416 01:01:41.349053   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:41.349060   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:41.349117   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:41.385258   62139 cri.go:89] found id: ""
	I0416 01:01:41.385298   62139 logs.go:276] 0 containers: []
	W0416 01:01:41.385305   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:41.385315   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:41.385330   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:41.470086   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:41.470130   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:41.513835   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:41.513870   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:41.565980   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:41.566013   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:41.582647   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:41.582678   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:41.658928   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:37.724628   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:40.222025   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:38.329899   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:40.330143   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:39.120850   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:41.121383   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:44.159107   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:44.173015   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:44.173088   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:44.214310   62139 cri.go:89] found id: ""
	I0416 01:01:44.214345   62139 logs.go:276] 0 containers: []
	W0416 01:01:44.214363   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:44.214374   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:44.214462   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:44.256476   62139 cri.go:89] found id: ""
	I0416 01:01:44.256503   62139 logs.go:276] 0 containers: []
	W0416 01:01:44.256511   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:44.256516   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:44.256577   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:44.298047   62139 cri.go:89] found id: ""
	I0416 01:01:44.298079   62139 logs.go:276] 0 containers: []
	W0416 01:01:44.298089   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:44.298097   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:44.298158   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:44.339165   62139 cri.go:89] found id: ""
	I0416 01:01:44.339196   62139 logs.go:276] 0 containers: []
	W0416 01:01:44.339206   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:44.339213   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:44.339280   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:44.378078   62139 cri.go:89] found id: ""
	I0416 01:01:44.378108   62139 logs.go:276] 0 containers: []
	W0416 01:01:44.378116   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:44.378122   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:44.378170   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:44.421494   62139 cri.go:89] found id: ""
	I0416 01:01:44.421525   62139 logs.go:276] 0 containers: []
	W0416 01:01:44.421536   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:44.421543   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:44.421609   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:44.459919   62139 cri.go:89] found id: ""
	I0416 01:01:44.459948   62139 logs.go:276] 0 containers: []
	W0416 01:01:44.459958   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:44.459965   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:44.460025   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:44.499448   62139 cri.go:89] found id: ""
	I0416 01:01:44.499479   62139 logs.go:276] 0 containers: []
	W0416 01:01:44.499489   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:44.499500   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:44.499516   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:44.555122   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:44.555159   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:44.572048   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:44.572075   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:44.646252   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:44.646283   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:44.646299   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:44.730593   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:44.730620   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:42.720855   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:44.723141   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:46.723452   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:42.831045   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:45.329039   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:47.331355   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:43.619897   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:45.620068   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:47.620162   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:47.276658   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:47.291354   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:47.291431   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:47.334998   62139 cri.go:89] found id: ""
	I0416 01:01:47.335036   62139 logs.go:276] 0 containers: []
	W0416 01:01:47.335055   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:47.335062   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:47.335121   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:47.376546   62139 cri.go:89] found id: ""
	I0416 01:01:47.376575   62139 logs.go:276] 0 containers: []
	W0416 01:01:47.376582   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:47.376587   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:47.376647   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:47.418609   62139 cri.go:89] found id: ""
	I0416 01:01:47.418642   62139 logs.go:276] 0 containers: []
	W0416 01:01:47.418654   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:47.418661   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:47.418721   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:47.459432   62139 cri.go:89] found id: ""
	I0416 01:01:47.459458   62139 logs.go:276] 0 containers: []
	W0416 01:01:47.459465   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:47.459470   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:47.459518   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:47.497776   62139 cri.go:89] found id: ""
	I0416 01:01:47.497800   62139 logs.go:276] 0 containers: []
	W0416 01:01:47.497808   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:47.497813   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:47.497866   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:47.536803   62139 cri.go:89] found id: ""
	I0416 01:01:47.536835   62139 logs.go:276] 0 containers: []
	W0416 01:01:47.536842   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:47.536849   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:47.536916   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:47.575883   62139 cri.go:89] found id: ""
	I0416 01:01:47.575916   62139 logs.go:276] 0 containers: []
	W0416 01:01:47.575923   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:47.575931   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:47.575976   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:47.627676   62139 cri.go:89] found id: ""
	I0416 01:01:47.627697   62139 logs.go:276] 0 containers: []
	W0416 01:01:47.627703   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:47.627711   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:47.627725   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:47.669714   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:47.669745   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:47.721349   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:47.721389   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:47.735833   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:47.735859   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:47.806890   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:47.806913   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:47.806925   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:50.386960   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:50.400832   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:50.400901   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:50.443042   62139 cri.go:89] found id: ""
	I0416 01:01:50.443076   62139 logs.go:276] 0 containers: []
	W0416 01:01:50.443086   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:50.443094   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:50.443157   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:50.480495   62139 cri.go:89] found id: ""
	I0416 01:01:50.480526   62139 logs.go:276] 0 containers: []
	W0416 01:01:50.480536   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:50.480544   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:50.480602   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:50.516578   62139 cri.go:89] found id: ""
	I0416 01:01:50.516605   62139 logs.go:276] 0 containers: []
	W0416 01:01:50.516613   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:50.516618   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:50.516676   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:50.555302   62139 cri.go:89] found id: ""
	I0416 01:01:50.555330   62139 logs.go:276] 0 containers: []
	W0416 01:01:50.555337   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:50.555344   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:50.555388   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:50.594647   62139 cri.go:89] found id: ""
	I0416 01:01:50.594674   62139 logs.go:276] 0 containers: []
	W0416 01:01:50.594682   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:50.594688   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:50.594737   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:50.633401   62139 cri.go:89] found id: ""
	I0416 01:01:50.633428   62139 logs.go:276] 0 containers: []
	W0416 01:01:50.633436   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:50.633442   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:50.633501   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:50.673714   62139 cri.go:89] found id: ""
	I0416 01:01:50.673744   62139 logs.go:276] 0 containers: []
	W0416 01:01:50.673755   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:50.673763   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:50.673811   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:50.710103   62139 cri.go:89] found id: ""
	I0416 01:01:50.710127   62139 logs.go:276] 0 containers: []
	W0416 01:01:50.710134   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:50.710142   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:50.710153   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:50.765121   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:50.765168   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:50.780407   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:50.780436   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:50.855602   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:50.855635   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:50.855663   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:50.937249   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:50.937283   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:49.220483   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:51.724129   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:49.829742   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:52.330579   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:49.621383   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:52.120841   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:53.481261   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:53.495872   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:53.495931   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:53.532710   62139 cri.go:89] found id: ""
	I0416 01:01:53.532738   62139 logs.go:276] 0 containers: []
	W0416 01:01:53.532748   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:53.532756   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:53.532815   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:53.568734   62139 cri.go:89] found id: ""
	I0416 01:01:53.568763   62139 logs.go:276] 0 containers: []
	W0416 01:01:53.568770   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:53.568776   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:53.568841   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:53.608937   62139 cri.go:89] found id: ""
	I0416 01:01:53.608965   62139 logs.go:276] 0 containers: []
	W0416 01:01:53.608976   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:53.608984   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:53.609042   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:53.646538   62139 cri.go:89] found id: ""
	I0416 01:01:53.646573   62139 logs.go:276] 0 containers: []
	W0416 01:01:53.646585   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:53.646592   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:53.646657   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:53.687761   62139 cri.go:89] found id: ""
	I0416 01:01:53.687792   62139 logs.go:276] 0 containers: []
	W0416 01:01:53.687801   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:53.687809   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:53.687872   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:53.726126   62139 cri.go:89] found id: ""
	I0416 01:01:53.726161   62139 logs.go:276] 0 containers: []
	W0416 01:01:53.726169   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:53.726174   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:53.726224   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:53.762583   62139 cri.go:89] found id: ""
	I0416 01:01:53.762609   62139 logs.go:276] 0 containers: []
	W0416 01:01:53.762618   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:53.762625   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:53.762695   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:53.803685   62139 cri.go:89] found id: ""
	I0416 01:01:53.803715   62139 logs.go:276] 0 containers: []
	W0416 01:01:53.803726   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:53.803737   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:53.803751   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:53.862215   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:53.862255   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:53.877713   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:53.877743   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:53.953394   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:53.953422   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:53.953438   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:54.044657   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:54.044698   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:56.602100   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:56.616548   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:56.616632   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:56.653765   62139 cri.go:89] found id: ""
	I0416 01:01:56.653794   62139 logs.go:276] 0 containers: []
	W0416 01:01:56.653810   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:56.653817   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:56.653879   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:56.691394   62139 cri.go:89] found id: ""
	I0416 01:01:56.691416   62139 logs.go:276] 0 containers: []
	W0416 01:01:56.691422   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:56.691428   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:56.691475   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:56.728995   62139 cri.go:89] found id: ""
	I0416 01:01:56.729017   62139 logs.go:276] 0 containers: []
	W0416 01:01:56.729024   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:56.729029   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:56.729078   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:56.769119   62139 cri.go:89] found id: ""
	I0416 01:01:56.769184   62139 logs.go:276] 0 containers: []
	W0416 01:01:56.769196   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:56.769204   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:56.769270   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:56.810562   62139 cri.go:89] found id: ""
	I0416 01:01:56.810589   62139 logs.go:276] 0 containers: []
	W0416 01:01:56.810597   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:56.810608   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:56.810669   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:56.849367   62139 cri.go:89] found id: ""
	I0416 01:01:56.849392   62139 logs.go:276] 0 containers: []
	W0416 01:01:56.849399   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:56.849405   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:56.849464   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:56.887330   62139 cri.go:89] found id: ""
	I0416 01:01:56.887359   62139 logs.go:276] 0 containers: []
	W0416 01:01:56.887370   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:56.887378   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:56.887461   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:56.926636   62139 cri.go:89] found id: ""
	I0416 01:01:56.926664   62139 logs.go:276] 0 containers: []
	W0416 01:01:56.926672   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:56.926682   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:56.926697   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:56.981836   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:56.981875   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:56.996385   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:56.996411   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:57.071026   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:57.071054   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:57.071070   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:54.219668   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:56.221212   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:54.829549   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:56.831452   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:54.619864   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:56.620968   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:57.155430   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:57.155466   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:59.701547   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:59.714465   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:59.714526   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:59.759791   62139 cri.go:89] found id: ""
	I0416 01:01:59.759830   62139 logs.go:276] 0 containers: []
	W0416 01:01:59.759841   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:59.759849   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:59.759914   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:59.813303   62139 cri.go:89] found id: ""
	I0416 01:01:59.813334   62139 logs.go:276] 0 containers: []
	W0416 01:01:59.813343   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:59.813353   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:59.813406   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:59.872291   62139 cri.go:89] found id: ""
	I0416 01:01:59.872328   62139 logs.go:276] 0 containers: []
	W0416 01:01:59.872338   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:59.872347   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:59.872423   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:59.910397   62139 cri.go:89] found id: ""
	I0416 01:01:59.910425   62139 logs.go:276] 0 containers: []
	W0416 01:01:59.910437   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:59.910444   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:59.910512   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:59.953656   62139 cri.go:89] found id: ""
	I0416 01:01:59.953685   62139 logs.go:276] 0 containers: []
	W0416 01:01:59.953695   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:59.953703   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:59.953779   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:59.993193   62139 cri.go:89] found id: ""
	I0416 01:01:59.993220   62139 logs.go:276] 0 containers: []
	W0416 01:01:59.993229   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:59.993239   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:59.993298   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:00.030205   62139 cri.go:89] found id: ""
	I0416 01:02:00.030229   62139 logs.go:276] 0 containers: []
	W0416 01:02:00.030237   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:00.030242   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:00.030302   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:00.068160   62139 cri.go:89] found id: ""
	I0416 01:02:00.068189   62139 logs.go:276] 0 containers: []
	W0416 01:02:00.068199   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:00.068211   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:00.068226   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:00.149383   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:00.149416   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:00.188000   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:00.188025   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:00.240522   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:00.240550   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:00.254189   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:00.254215   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:00.331483   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:58.721272   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:01.220698   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:59.329440   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:01.830408   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:59.122269   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:01.619839   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:02.832656   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:02.846826   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:02.846907   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:02.883397   62139 cri.go:89] found id: ""
	I0416 01:02:02.883428   62139 logs.go:276] 0 containers: []
	W0416 01:02:02.883439   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:02.883446   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:02.883499   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:02.923686   62139 cri.go:89] found id: ""
	I0416 01:02:02.923708   62139 logs.go:276] 0 containers: []
	W0416 01:02:02.923715   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:02.923719   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:02.923770   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:02.964155   62139 cri.go:89] found id: ""
	I0416 01:02:02.964180   62139 logs.go:276] 0 containers: []
	W0416 01:02:02.964188   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:02.964193   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:02.964247   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:03.005357   62139 cri.go:89] found id: ""
	I0416 01:02:03.005386   62139 logs.go:276] 0 containers: []
	W0416 01:02:03.005396   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:03.005403   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:03.005464   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:03.047221   62139 cri.go:89] found id: ""
	I0416 01:02:03.047246   62139 logs.go:276] 0 containers: []
	W0416 01:02:03.047257   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:03.047264   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:03.047326   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:03.088737   62139 cri.go:89] found id: ""
	I0416 01:02:03.088767   62139 logs.go:276] 0 containers: []
	W0416 01:02:03.088776   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:03.088784   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:03.088846   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:03.129756   62139 cri.go:89] found id: ""
	I0416 01:02:03.129778   62139 logs.go:276] 0 containers: []
	W0416 01:02:03.129785   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:03.129790   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:03.129837   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:03.169422   62139 cri.go:89] found id: ""
	I0416 01:02:03.169447   62139 logs.go:276] 0 containers: []
	W0416 01:02:03.169459   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:03.169468   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:03.169478   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:03.246485   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:03.246503   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:03.246514   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:03.326498   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:03.326533   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:03.372788   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:03.372817   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:03.428561   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:03.428603   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:05.944274   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:05.957744   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:05.957813   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:05.993348   62139 cri.go:89] found id: ""
	I0416 01:02:05.993400   62139 logs.go:276] 0 containers: []
	W0416 01:02:05.993411   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:05.993430   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:05.993497   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:06.034811   62139 cri.go:89] found id: ""
	I0416 01:02:06.034848   62139 logs.go:276] 0 containers: []
	W0416 01:02:06.034859   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:06.034866   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:06.034953   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:06.079047   62139 cri.go:89] found id: ""
	I0416 01:02:06.079070   62139 logs.go:276] 0 containers: []
	W0416 01:02:06.079078   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:06.079082   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:06.079127   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:06.122494   62139 cri.go:89] found id: ""
	I0416 01:02:06.122513   62139 logs.go:276] 0 containers: []
	W0416 01:02:06.122520   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:06.122525   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:06.122589   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:06.163436   62139 cri.go:89] found id: ""
	I0416 01:02:06.163461   62139 logs.go:276] 0 containers: []
	W0416 01:02:06.163468   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:06.163473   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:06.163534   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:06.205036   62139 cri.go:89] found id: ""
	I0416 01:02:06.205064   62139 logs.go:276] 0 containers: []
	W0416 01:02:06.205072   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:06.205077   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:06.205134   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:06.242056   62139 cri.go:89] found id: ""
	I0416 01:02:06.242084   62139 logs.go:276] 0 containers: []
	W0416 01:02:06.242094   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:06.242107   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:06.242166   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:06.278604   62139 cri.go:89] found id: ""
	I0416 01:02:06.278636   62139 logs.go:276] 0 containers: []
	W0416 01:02:06.278646   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:06.278656   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:06.278671   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:06.334631   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:06.334658   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:06.348199   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:06.348227   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:06.424774   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:06.424793   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:06.424804   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:06.503509   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:06.503542   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:03.221238   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:05.721006   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:04.329267   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:06.329476   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:03.620957   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:06.121348   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:09.046665   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:09.061072   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:09.061173   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:09.097482   62139 cri.go:89] found id: ""
	I0416 01:02:09.097514   62139 logs.go:276] 0 containers: []
	W0416 01:02:09.097524   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:09.097543   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:09.097613   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:09.135124   62139 cri.go:89] found id: ""
	I0416 01:02:09.135157   62139 logs.go:276] 0 containers: []
	W0416 01:02:09.135168   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:09.135175   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:09.135236   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:09.173887   62139 cri.go:89] found id: ""
	I0416 01:02:09.173912   62139 logs.go:276] 0 containers: []
	W0416 01:02:09.173920   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:09.173925   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:09.173983   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:09.209658   62139 cri.go:89] found id: ""
	I0416 01:02:09.209683   62139 logs.go:276] 0 containers: []
	W0416 01:02:09.209691   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:09.209702   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:09.209763   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:09.249149   62139 cri.go:89] found id: ""
	I0416 01:02:09.249200   62139 logs.go:276] 0 containers: []
	W0416 01:02:09.249209   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:09.249214   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:09.249292   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:09.291447   62139 cri.go:89] found id: ""
	I0416 01:02:09.291477   62139 logs.go:276] 0 containers: []
	W0416 01:02:09.291487   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:09.291494   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:09.291553   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:09.329248   62139 cri.go:89] found id: ""
	I0416 01:02:09.329271   62139 logs.go:276] 0 containers: []
	W0416 01:02:09.329281   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:09.329288   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:09.329345   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:09.365585   62139 cri.go:89] found id: ""
	I0416 01:02:09.365613   62139 logs.go:276] 0 containers: []
	W0416 01:02:09.365622   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:09.365632   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:09.365645   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:09.418998   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:09.419031   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:09.433531   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:09.433558   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:09.508543   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:09.508573   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:09.508588   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:09.593889   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:09.593930   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
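	The cycle above repeats throughout this start attempt: minikube probes the CRI runtime for each expected control-plane container, finds none, and then falls back to collecting kubelet, dmesg, describe-nodes, CRI-O and container-status logs. A minimal sketch of that probe, assuming crictl is installed on the node and runnable via sudo as on the test VM (this is not minikube's own source, just an illustration of the logged `sudo crictl ps -a --quiet --name=<component>` calls):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainers mirrors the logged command:
	//   sudo crictl ps -a --quiet --name=<name>
	// and returns the container IDs crictl prints, one per line.
	func listContainers(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager"} {
			ids, err := listContainers(name)
			if err != nil {
				fmt.Printf("probe for %q failed: %v\n", name, err)
				continue
			}
			if len(ids) == 0 {
				// This is the condition behind the repeated
				// `No container was found matching "<name>"` warnings above.
				fmt.Printf("no container found matching %q\n", name)
				continue
			}
			fmt.Printf("found %d container(s) for %q: %v\n", len(ids), name, ids)
		}
	}

	Run against the node in this state, every component in the list would take the "no container found" branch, matching the warnings in the log.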
	I0416 01:02:08.220704   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:10.221232   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:12.224680   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:08.330281   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:10.828856   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:08.619632   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:10.619780   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:12.621319   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:12.139020   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:12.154268   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:12.154349   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:12.192717   62139 cri.go:89] found id: ""
	I0416 01:02:12.192746   62139 logs.go:276] 0 containers: []
	W0416 01:02:12.192758   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:12.192765   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:12.192832   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:12.230633   62139 cri.go:89] found id: ""
	I0416 01:02:12.230662   62139 logs.go:276] 0 containers: []
	W0416 01:02:12.230674   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:12.230681   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:12.230729   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:12.271108   62139 cri.go:89] found id: ""
	I0416 01:02:12.271150   62139 logs.go:276] 0 containers: []
	W0416 01:02:12.271161   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:12.271168   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:12.271233   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:12.310161   62139 cri.go:89] found id: ""
	I0416 01:02:12.310186   62139 logs.go:276] 0 containers: []
	W0416 01:02:12.310194   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:12.310201   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:12.310272   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:12.349638   62139 cri.go:89] found id: ""
	I0416 01:02:12.349668   62139 logs.go:276] 0 containers: []
	W0416 01:02:12.349678   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:12.349686   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:12.349766   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:12.391565   62139 cri.go:89] found id: ""
	I0416 01:02:12.391597   62139 logs.go:276] 0 containers: []
	W0416 01:02:12.391607   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:12.391620   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:12.391681   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:12.429142   62139 cri.go:89] found id: ""
	I0416 01:02:12.429186   62139 logs.go:276] 0 containers: []
	W0416 01:02:12.429195   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:12.429200   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:12.429249   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:12.466209   62139 cri.go:89] found id: ""
	I0416 01:02:12.466238   62139 logs.go:276] 0 containers: []
	W0416 01:02:12.466249   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:12.466260   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:12.466277   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:12.551333   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:12.551355   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:12.551367   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:12.634465   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:12.634496   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:12.675198   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:12.675231   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:12.728933   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:12.728962   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:15.243521   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:15.258589   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:15.258657   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:15.301901   62139 cri.go:89] found id: ""
	I0416 01:02:15.301931   62139 logs.go:276] 0 containers: []
	W0416 01:02:15.301943   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:15.301951   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:15.302006   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:15.345932   62139 cri.go:89] found id: ""
	I0416 01:02:15.346011   62139 logs.go:276] 0 containers: []
	W0416 01:02:15.346032   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:15.346043   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:15.346113   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:15.387957   62139 cri.go:89] found id: ""
	I0416 01:02:15.387983   62139 logs.go:276] 0 containers: []
	W0416 01:02:15.387991   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:15.387996   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:15.388044   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:15.424887   62139 cri.go:89] found id: ""
	I0416 01:02:15.424916   62139 logs.go:276] 0 containers: []
	W0416 01:02:15.424927   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:15.424934   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:15.424996   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:15.460088   62139 cri.go:89] found id: ""
	I0416 01:02:15.460113   62139 logs.go:276] 0 containers: []
	W0416 01:02:15.460120   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:15.460125   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:15.460172   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:15.495567   62139 cri.go:89] found id: ""
	I0416 01:02:15.495597   62139 logs.go:276] 0 containers: []
	W0416 01:02:15.495607   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:15.495615   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:15.495692   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:15.533901   62139 cri.go:89] found id: ""
	I0416 01:02:15.533931   62139 logs.go:276] 0 containers: []
	W0416 01:02:15.533940   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:15.533946   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:15.533996   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:15.576665   62139 cri.go:89] found id: ""
	I0416 01:02:15.576692   62139 logs.go:276] 0 containers: []
	W0416 01:02:15.576702   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:15.576712   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:15.576728   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:15.626933   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:15.626961   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:15.681627   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:15.681656   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:15.695572   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:15.695608   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:15.768910   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:15.768934   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:15.768945   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:14.720472   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:16.722418   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:12.830086   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:14.830540   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:17.329838   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:15.120394   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:17.120523   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:18.349776   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:18.363499   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:18.363568   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:18.404210   62139 cri.go:89] found id: ""
	I0416 01:02:18.404234   62139 logs.go:276] 0 containers: []
	W0416 01:02:18.404241   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:18.404246   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:18.404304   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:18.444610   62139 cri.go:89] found id: ""
	I0416 01:02:18.444641   62139 logs.go:276] 0 containers: []
	W0416 01:02:18.444651   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:18.444658   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:18.444722   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:18.483134   62139 cri.go:89] found id: ""
	I0416 01:02:18.483160   62139 logs.go:276] 0 containers: []
	W0416 01:02:18.483168   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:18.483173   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:18.483220   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:18.522120   62139 cri.go:89] found id: ""
	I0416 01:02:18.522144   62139 logs.go:276] 0 containers: []
	W0416 01:02:18.522156   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:18.522161   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:18.522205   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:18.566293   62139 cri.go:89] found id: ""
	I0416 01:02:18.566319   62139 logs.go:276] 0 containers: []
	W0416 01:02:18.566327   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:18.566332   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:18.566391   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:18.604000   62139 cri.go:89] found id: ""
	I0416 01:02:18.604028   62139 logs.go:276] 0 containers: []
	W0416 01:02:18.604036   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:18.604042   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:18.604089   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:18.641967   62139 cri.go:89] found id: ""
	I0416 01:02:18.641999   62139 logs.go:276] 0 containers: []
	W0416 01:02:18.642009   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:18.642016   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:18.642080   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:18.683494   62139 cri.go:89] found id: ""
	I0416 01:02:18.683533   62139 logs.go:276] 0 containers: []
	W0416 01:02:18.683544   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:18.683555   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:18.683570   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:18.761674   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:18.761699   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:18.761714   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:18.849959   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:18.849995   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:18.895534   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:18.895570   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:18.949287   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:18.949320   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:21.464393   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:21.479019   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:21.479087   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:21.516262   62139 cri.go:89] found id: ""
	I0416 01:02:21.516303   62139 logs.go:276] 0 containers: []
	W0416 01:02:21.516313   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:21.516323   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:21.516385   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:21.554279   62139 cri.go:89] found id: ""
	I0416 01:02:21.554315   62139 logs.go:276] 0 containers: []
	W0416 01:02:21.554327   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:21.554334   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:21.554393   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:21.590889   62139 cri.go:89] found id: ""
	I0416 01:02:21.590918   62139 logs.go:276] 0 containers: []
	W0416 01:02:21.590928   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:21.590935   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:21.590996   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:21.629925   62139 cri.go:89] found id: ""
	I0416 01:02:21.629955   62139 logs.go:276] 0 containers: []
	W0416 01:02:21.629965   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:21.629972   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:21.630032   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:21.667947   62139 cri.go:89] found id: ""
	I0416 01:02:21.667975   62139 logs.go:276] 0 containers: []
	W0416 01:02:21.667983   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:21.667988   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:21.668045   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:21.706275   62139 cri.go:89] found id: ""
	I0416 01:02:21.706308   62139 logs.go:276] 0 containers: []
	W0416 01:02:21.706318   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:21.706326   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:21.706392   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:21.748077   62139 cri.go:89] found id: ""
	I0416 01:02:21.748106   62139 logs.go:276] 0 containers: []
	W0416 01:02:21.748117   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:21.748123   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:21.748170   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:21.785441   62139 cri.go:89] found id: ""
	I0416 01:02:21.785467   62139 logs.go:276] 0 containers: []
	W0416 01:02:21.785477   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:21.785488   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:21.785510   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:21.824702   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:21.824735   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:21.882780   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:21.882810   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:21.897211   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:21.897236   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:21.971882   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:21.971903   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:21.971915   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:19.220913   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:21.721219   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:19.330086   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:21.836759   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:19.620521   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:21.621229   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:24.550749   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:24.564951   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:24.565024   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:24.605025   62139 cri.go:89] found id: ""
	I0416 01:02:24.605055   62139 logs.go:276] 0 containers: []
	W0416 01:02:24.605063   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:24.605068   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:24.605142   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:24.640727   62139 cri.go:89] found id: ""
	I0416 01:02:24.640757   62139 logs.go:276] 0 containers: []
	W0416 01:02:24.640764   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:24.640769   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:24.640822   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:24.678031   62139 cri.go:89] found id: ""
	I0416 01:02:24.678060   62139 logs.go:276] 0 containers: []
	W0416 01:02:24.678068   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:24.678074   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:24.678125   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:24.714854   62139 cri.go:89] found id: ""
	I0416 01:02:24.714896   62139 logs.go:276] 0 containers: []
	W0416 01:02:24.714907   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:24.714914   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:24.714981   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:24.752129   62139 cri.go:89] found id: ""
	I0416 01:02:24.752158   62139 logs.go:276] 0 containers: []
	W0416 01:02:24.752168   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:24.752177   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:24.752243   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:24.788507   62139 cri.go:89] found id: ""
	I0416 01:02:24.788541   62139 logs.go:276] 0 containers: []
	W0416 01:02:24.788551   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:24.788557   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:24.788617   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:24.828379   62139 cri.go:89] found id: ""
	I0416 01:02:24.828409   62139 logs.go:276] 0 containers: []
	W0416 01:02:24.828419   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:24.828427   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:24.828486   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:24.865676   62139 cri.go:89] found id: ""
	I0416 01:02:24.865707   62139 logs.go:276] 0 containers: []
	W0416 01:02:24.865717   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:24.865725   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:24.865736   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:24.941057   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:24.941079   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:24.941091   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:25.025937   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:25.025979   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:25.065828   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:25.065871   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:25.128004   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:25.128039   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
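	Each cycle also runs `kubectl describe nodes` with the bundled v1.20.0 kubectl, and it fails with "connection refused" on localhost:8443 because no kube-apiserver container is running to serve that port. A small sketch of just the reachability check, assuming the default secure port 8443 shown in the log (a hypothetical helper, not part of the test suite):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Try the endpoint kubectl is being pointed at in the log above.
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			// On this node the dial fails, which is why every
			// "describe nodes" attempt above exits with status 1.
			fmt.Println("apiserver not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on localhost:8443")
	}

	Until an apiserver container comes up, this prints the "not reachable" branch, consistent with the repeated describe-nodes failures.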
	I0416 01:02:24.221435   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:26.720181   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:24.329677   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:26.329901   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:24.119781   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:26.120316   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:27.643201   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:27.658601   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:27.658660   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:27.700627   62139 cri.go:89] found id: ""
	I0416 01:02:27.700650   62139 logs.go:276] 0 containers: []
	W0416 01:02:27.700657   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:27.700662   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:27.700718   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:27.734929   62139 cri.go:89] found id: ""
	I0416 01:02:27.734957   62139 logs.go:276] 0 containers: []
	W0416 01:02:27.734966   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:27.734975   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:27.735046   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:27.772412   62139 cri.go:89] found id: ""
	I0416 01:02:27.772440   62139 logs.go:276] 0 containers: []
	W0416 01:02:27.772448   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:27.772454   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:27.772514   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:27.809436   62139 cri.go:89] found id: ""
	I0416 01:02:27.809459   62139 logs.go:276] 0 containers: []
	W0416 01:02:27.809466   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:27.809471   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:27.809518   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:27.845717   62139 cri.go:89] found id: ""
	I0416 01:02:27.845746   62139 logs.go:276] 0 containers: []
	W0416 01:02:27.845756   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:27.845764   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:27.845825   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:27.887224   62139 cri.go:89] found id: ""
	I0416 01:02:27.887250   62139 logs.go:276] 0 containers: []
	W0416 01:02:27.887260   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:27.887267   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:27.887334   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:27.920945   62139 cri.go:89] found id: ""
	I0416 01:02:27.920974   62139 logs.go:276] 0 containers: []
	W0416 01:02:27.920984   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:27.920992   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:27.921066   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:27.960933   62139 cri.go:89] found id: ""
	I0416 01:02:27.960959   62139 logs.go:276] 0 containers: []
	W0416 01:02:27.960966   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:27.960974   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:27.960985   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:28.013003   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:28.013033   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:28.026599   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:28.026626   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:28.117200   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:28.117226   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:28.117240   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:28.198003   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:28.198036   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:30.741379   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:30.757102   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:30.757199   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:30.798038   62139 cri.go:89] found id: ""
	I0416 01:02:30.798068   62139 logs.go:276] 0 containers: []
	W0416 01:02:30.798075   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:30.798080   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:30.798137   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:30.844840   62139 cri.go:89] found id: ""
	I0416 01:02:30.844862   62139 logs.go:276] 0 containers: []
	W0416 01:02:30.844871   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:30.844877   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:30.844944   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:30.883816   62139 cri.go:89] found id: ""
	I0416 01:02:30.883841   62139 logs.go:276] 0 containers: []
	W0416 01:02:30.883849   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:30.883855   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:30.883903   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:30.919353   62139 cri.go:89] found id: ""
	I0416 01:02:30.919380   62139 logs.go:276] 0 containers: []
	W0416 01:02:30.919389   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:30.919396   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:30.919457   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:30.957036   62139 cri.go:89] found id: ""
	I0416 01:02:30.957061   62139 logs.go:276] 0 containers: []
	W0416 01:02:30.957069   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:30.957084   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:30.957143   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:30.993179   62139 cri.go:89] found id: ""
	I0416 01:02:30.993211   62139 logs.go:276] 0 containers: []
	W0416 01:02:30.993220   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:30.993228   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:30.993315   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:31.032634   62139 cri.go:89] found id: ""
	I0416 01:02:31.032661   62139 logs.go:276] 0 containers: []
	W0416 01:02:31.032670   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:31.032684   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:31.032753   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:31.069345   62139 cri.go:89] found id: ""
	I0416 01:02:31.069373   62139 logs.go:276] 0 containers: []
	W0416 01:02:31.069382   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:31.069392   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:31.069408   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:31.123989   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:31.124017   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:31.140998   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:31.141032   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:31.217496   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:31.218063   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:31.218098   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:31.296811   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:31.296858   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:28.720502   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:30.720709   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:28.329978   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:30.829406   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:28.121200   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:30.620659   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:33.842516   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:33.872440   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:33.872518   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:33.909287   62139 cri.go:89] found id: ""
	I0416 01:02:33.909314   62139 logs.go:276] 0 containers: []
	W0416 01:02:33.909324   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:33.909329   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:33.909388   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:33.947531   62139 cri.go:89] found id: ""
	I0416 01:02:33.947566   62139 logs.go:276] 0 containers: []
	W0416 01:02:33.947576   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:33.947584   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:33.947642   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:33.990084   62139 cri.go:89] found id: ""
	I0416 01:02:33.990118   62139 logs.go:276] 0 containers: []
	W0416 01:02:33.990129   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:33.990136   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:33.990200   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:34.024121   62139 cri.go:89] found id: ""
	I0416 01:02:34.024151   62139 logs.go:276] 0 containers: []
	W0416 01:02:34.024159   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:34.024165   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:34.024218   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:34.061075   62139 cri.go:89] found id: ""
	I0416 01:02:34.061104   62139 logs.go:276] 0 containers: []
	W0416 01:02:34.061111   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:34.061116   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:34.061179   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:34.097887   62139 cri.go:89] found id: ""
	I0416 01:02:34.097928   62139 logs.go:276] 0 containers: []
	W0416 01:02:34.097938   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:34.097946   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:34.098007   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:34.135541   62139 cri.go:89] found id: ""
	I0416 01:02:34.135567   62139 logs.go:276] 0 containers: []
	W0416 01:02:34.135577   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:34.135585   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:34.135637   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:34.170884   62139 cri.go:89] found id: ""
	I0416 01:02:34.170910   62139 logs.go:276] 0 containers: []
	W0416 01:02:34.170920   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:34.170931   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:34.170946   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:34.223465   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:34.223494   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:34.238898   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:34.238929   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:34.316916   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:34.316946   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:34.316962   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:34.401564   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:34.401600   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:36.945789   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:36.959707   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:36.959774   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:36.994463   62139 cri.go:89] found id: ""
	I0416 01:02:36.994497   62139 logs.go:276] 0 containers: []
	W0416 01:02:36.994508   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:36.994515   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:36.994579   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:37.028847   62139 cri.go:89] found id: ""
	I0416 01:02:37.028877   62139 logs.go:276] 0 containers: []
	W0416 01:02:37.028887   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:37.028893   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:37.028954   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:37.061841   62139 cri.go:89] found id: ""
	I0416 01:02:37.061872   62139 logs.go:276] 0 containers: []
	W0416 01:02:37.061882   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:37.061889   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:37.061954   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:37.098460   62139 cri.go:89] found id: ""
	I0416 01:02:37.098485   62139 logs.go:276] 0 containers: []
	W0416 01:02:37.098495   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:37.098502   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:37.098569   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:33.220794   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:35.221650   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:37.222563   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:32.829517   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:34.829762   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:36.831773   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:33.121842   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:35.620647   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:37.620795   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:37.133016   62139 cri.go:89] found id: ""
	I0416 01:02:37.133044   62139 logs.go:276] 0 containers: []
	W0416 01:02:37.133053   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:37.133059   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:37.133122   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:37.170252   62139 cri.go:89] found id: ""
	I0416 01:02:37.170276   62139 logs.go:276] 0 containers: []
	W0416 01:02:37.170286   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:37.170293   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:37.170354   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:37.206114   62139 cri.go:89] found id: ""
	I0416 01:02:37.206141   62139 logs.go:276] 0 containers: []
	W0416 01:02:37.206148   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:37.206153   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:37.206208   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:37.241353   62139 cri.go:89] found id: ""
	I0416 01:02:37.241383   62139 logs.go:276] 0 containers: []
	W0416 01:02:37.241395   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:37.241405   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:37.241429   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:37.293452   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:37.293483   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:37.309885   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:37.309926   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:37.385455   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:37.385481   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:37.385496   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:37.463064   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:37.463101   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:40.008717   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:40.022249   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:40.022327   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:40.064444   62139 cri.go:89] found id: ""
	I0416 01:02:40.064479   62139 logs.go:276] 0 containers: []
	W0416 01:02:40.064490   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:40.064497   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:40.064545   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:40.100326   62139 cri.go:89] found id: ""
	I0416 01:02:40.100353   62139 logs.go:276] 0 containers: []
	W0416 01:02:40.100361   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:40.100366   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:40.100413   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:40.138818   62139 cri.go:89] found id: ""
	I0416 01:02:40.138857   62139 logs.go:276] 0 containers: []
	W0416 01:02:40.138869   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:40.138878   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:40.138928   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:40.184203   62139 cri.go:89] found id: ""
	I0416 01:02:40.184234   62139 logs.go:276] 0 containers: []
	W0416 01:02:40.184244   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:40.184252   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:40.184311   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:40.221968   62139 cri.go:89] found id: ""
	I0416 01:02:40.221991   62139 logs.go:276] 0 containers: []
	W0416 01:02:40.221998   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:40.222007   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:40.222088   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:40.265621   62139 cri.go:89] found id: ""
	I0416 01:02:40.265643   62139 logs.go:276] 0 containers: []
	W0416 01:02:40.265650   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:40.265657   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:40.265723   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:40.314121   62139 cri.go:89] found id: ""
	I0416 01:02:40.314152   62139 logs.go:276] 0 containers: []
	W0416 01:02:40.314163   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:40.314170   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:40.314229   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:40.359788   62139 cri.go:89] found id: ""
	I0416 01:02:40.359825   62139 logs.go:276] 0 containers: []
	W0416 01:02:40.359836   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:40.359849   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:40.359863   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:40.431678   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:40.431718   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:40.449847   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:40.449877   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:40.524271   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:40.524297   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:40.524309   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:40.601398   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:40.601433   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:39.720606   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:41.721437   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:39.330974   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:41.830050   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:40.120785   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:42.123996   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:43.145431   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:43.160269   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:43.160338   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:43.196603   62139 cri.go:89] found id: ""
	I0416 01:02:43.196637   62139 logs.go:276] 0 containers: []
	W0416 01:02:43.196648   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:43.196655   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:43.196716   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:43.235863   62139 cri.go:89] found id: ""
	I0416 01:02:43.235893   62139 logs.go:276] 0 containers: []
	W0416 01:02:43.235905   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:43.235911   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:43.235971   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:43.271408   62139 cri.go:89] found id: ""
	I0416 01:02:43.271437   62139 logs.go:276] 0 containers: []
	W0416 01:02:43.271444   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:43.271450   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:43.271512   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:43.310931   62139 cri.go:89] found id: ""
	I0416 01:02:43.310958   62139 logs.go:276] 0 containers: []
	W0416 01:02:43.310965   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:43.310971   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:43.311032   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:43.347472   62139 cri.go:89] found id: ""
	I0416 01:02:43.347502   62139 logs.go:276] 0 containers: []
	W0416 01:02:43.347512   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:43.347520   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:43.347581   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:43.387326   62139 cri.go:89] found id: ""
	I0416 01:02:43.387361   62139 logs.go:276] 0 containers: []
	W0416 01:02:43.387372   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:43.387429   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:43.387506   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:43.425099   62139 cri.go:89] found id: ""
	I0416 01:02:43.425122   62139 logs.go:276] 0 containers: []
	W0416 01:02:43.425130   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:43.425141   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:43.425208   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:43.461364   62139 cri.go:89] found id: ""
	I0416 01:02:43.461397   62139 logs.go:276] 0 containers: []
	W0416 01:02:43.461408   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:43.461419   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:43.461434   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:43.514520   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:43.514556   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:43.528740   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:43.528777   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:43.599010   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:43.599035   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:43.599051   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:43.682913   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:43.682959   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:46.231398   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:46.260247   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:46.260338   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:46.304498   62139 cri.go:89] found id: ""
	I0416 01:02:46.304521   62139 logs.go:276] 0 containers: []
	W0416 01:02:46.304528   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:46.304534   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:46.304600   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:46.364055   62139 cri.go:89] found id: ""
	I0416 01:02:46.364081   62139 logs.go:276] 0 containers: []
	W0416 01:02:46.364090   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:46.364098   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:46.364167   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:46.412395   62139 cri.go:89] found id: ""
	I0416 01:02:46.412437   62139 logs.go:276] 0 containers: []
	W0416 01:02:46.412475   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:46.412510   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:46.412584   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:46.453669   62139 cri.go:89] found id: ""
	I0416 01:02:46.453698   62139 logs.go:276] 0 containers: []
	W0416 01:02:46.453709   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:46.453716   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:46.453766   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:46.490667   62139 cri.go:89] found id: ""
	I0416 01:02:46.490699   62139 logs.go:276] 0 containers: []
	W0416 01:02:46.490709   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:46.490715   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:46.490766   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:46.529405   62139 cri.go:89] found id: ""
	I0416 01:02:46.529443   62139 logs.go:276] 0 containers: []
	W0416 01:02:46.529460   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:46.529467   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:46.529527   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:46.565359   62139 cri.go:89] found id: ""
	I0416 01:02:46.565384   62139 logs.go:276] 0 containers: []
	W0416 01:02:46.565391   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:46.565396   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:46.565451   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:46.609381   62139 cri.go:89] found id: ""
	I0416 01:02:46.609406   62139 logs.go:276] 0 containers: []
	W0416 01:02:46.609413   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:46.609421   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:46.609432   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:46.663080   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:46.663112   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:46.677303   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:46.677338   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:46.750134   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:46.750163   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:46.750175   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:46.829395   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:46.829434   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:43.721477   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:46.220462   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:43.831829   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:46.329333   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:44.619712   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:46.621271   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:49.374356   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:49.390674   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:49.390753   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:49.427968   62139 cri.go:89] found id: ""
	I0416 01:02:49.427993   62139 logs.go:276] 0 containers: []
	W0416 01:02:49.428000   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:49.428005   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:49.428058   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:49.461821   62139 cri.go:89] found id: ""
	I0416 01:02:49.461850   62139 logs.go:276] 0 containers: []
	W0416 01:02:49.461857   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:49.461863   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:49.461918   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:49.496305   62139 cri.go:89] found id: ""
	I0416 01:02:49.496356   62139 logs.go:276] 0 containers: []
	W0416 01:02:49.496364   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:49.496369   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:49.496429   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:49.536096   62139 cri.go:89] found id: ""
	I0416 01:02:49.536122   62139 logs.go:276] 0 containers: []
	W0416 01:02:49.536129   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:49.536134   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:49.536194   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:49.572078   62139 cri.go:89] found id: ""
	I0416 01:02:49.572106   62139 logs.go:276] 0 containers: []
	W0416 01:02:49.572115   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:49.572122   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:49.572181   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:49.607803   62139 cri.go:89] found id: ""
	I0416 01:02:49.607835   62139 logs.go:276] 0 containers: []
	W0416 01:02:49.607847   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:49.607861   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:49.607915   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:49.651245   62139 cri.go:89] found id: ""
	I0416 01:02:49.651272   62139 logs.go:276] 0 containers: []
	W0416 01:02:49.651280   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:49.651285   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:49.651332   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:49.693587   62139 cri.go:89] found id: ""
	I0416 01:02:49.693612   62139 logs.go:276] 0 containers: []
	W0416 01:02:49.693622   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:49.693632   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:49.693646   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:49.750003   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:49.750032   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:49.764447   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:49.764472   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:49.844739   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:49.844764   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:49.844780   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:49.924260   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:49.924294   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:48.220753   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:50.220986   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:48.330946   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:50.829409   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:49.120516   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:51.619516   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:52.467399   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:52.481656   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:52.481729   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:52.518506   62139 cri.go:89] found id: ""
	I0416 01:02:52.518531   62139 logs.go:276] 0 containers: []
	W0416 01:02:52.518537   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:52.518544   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:52.518599   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:52.554799   62139 cri.go:89] found id: ""
	I0416 01:02:52.554820   62139 logs.go:276] 0 containers: []
	W0416 01:02:52.554827   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:52.554832   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:52.554888   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:52.597236   62139 cri.go:89] found id: ""
	I0416 01:02:52.597265   62139 logs.go:276] 0 containers: []
	W0416 01:02:52.597272   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:52.597278   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:52.597335   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:52.635544   62139 cri.go:89] found id: ""
	I0416 01:02:52.635567   62139 logs.go:276] 0 containers: []
	W0416 01:02:52.635578   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:52.635585   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:52.635639   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:52.672715   62139 cri.go:89] found id: ""
	I0416 01:02:52.672739   62139 logs.go:276] 0 containers: []
	W0416 01:02:52.672746   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:52.672751   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:52.672808   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:52.711600   62139 cri.go:89] found id: ""
	I0416 01:02:52.711631   62139 logs.go:276] 0 containers: []
	W0416 01:02:52.711640   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:52.711648   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:52.711718   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:52.750372   62139 cri.go:89] found id: ""
	I0416 01:02:52.750405   62139 logs.go:276] 0 containers: []
	W0416 01:02:52.750416   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:52.750423   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:52.750486   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:52.786651   62139 cri.go:89] found id: ""
	I0416 01:02:52.786678   62139 logs.go:276] 0 containers: []
	W0416 01:02:52.786688   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:52.786698   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:52.786712   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:52.840262   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:52.840296   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:52.854734   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:52.854762   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:52.931182   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:52.931211   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:52.931226   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:53.007023   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:53.007061   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:55.548305   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:55.562483   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:55.562562   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:55.599480   62139 cri.go:89] found id: ""
	I0416 01:02:55.599504   62139 logs.go:276] 0 containers: []
	W0416 01:02:55.599511   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:55.599517   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:55.599573   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:55.636832   62139 cri.go:89] found id: ""
	I0416 01:02:55.636862   62139 logs.go:276] 0 containers: []
	W0416 01:02:55.636873   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:55.636879   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:55.636940   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:55.676211   62139 cri.go:89] found id: ""
	I0416 01:02:55.676240   62139 logs.go:276] 0 containers: []
	W0416 01:02:55.676250   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:55.676256   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:55.676318   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:55.713498   62139 cri.go:89] found id: ""
	I0416 01:02:55.713527   62139 logs.go:276] 0 containers: []
	W0416 01:02:55.713537   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:55.713544   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:55.713604   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:55.754239   62139 cri.go:89] found id: ""
	I0416 01:02:55.754276   62139 logs.go:276] 0 containers: []
	W0416 01:02:55.754284   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:55.754301   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:55.754355   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:55.792073   62139 cri.go:89] found id: ""
	I0416 01:02:55.792106   62139 logs.go:276] 0 containers: []
	W0416 01:02:55.792117   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:55.792125   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:55.792191   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:55.829635   62139 cri.go:89] found id: ""
	I0416 01:02:55.829665   62139 logs.go:276] 0 containers: []
	W0416 01:02:55.829676   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:55.829683   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:55.829742   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:55.876417   62139 cri.go:89] found id: ""
	I0416 01:02:55.876443   62139 logs.go:276] 0 containers: []
	W0416 01:02:55.876450   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:55.876458   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:55.876471   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:55.926670   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:55.926707   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:55.941660   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:55.941696   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:56.018776   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:56.018806   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:56.018820   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:56.097335   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:56.097378   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:52.720703   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:55.221614   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:52.830970   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:55.329886   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:53.620969   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:56.122135   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:58.642188   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:58.655537   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:58.655605   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:58.692091   62139 cri.go:89] found id: ""
	I0416 01:02:58.692116   62139 logs.go:276] 0 containers: []
	W0416 01:02:58.692124   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:58.692129   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:58.692191   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:58.729434   62139 cri.go:89] found id: ""
	I0416 01:02:58.729461   62139 logs.go:276] 0 containers: []
	W0416 01:02:58.729472   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:58.729491   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:58.729568   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:58.765879   62139 cri.go:89] found id: ""
	I0416 01:02:58.765907   62139 logs.go:276] 0 containers: []
	W0416 01:02:58.765916   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:58.765924   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:58.765987   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:58.802285   62139 cri.go:89] found id: ""
	I0416 01:02:58.802323   62139 logs.go:276] 0 containers: []
	W0416 01:02:58.802334   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:58.802342   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:58.802399   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:58.841357   62139 cri.go:89] found id: ""
	I0416 01:02:58.841385   62139 logs.go:276] 0 containers: []
	W0416 01:02:58.841396   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:58.841403   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:58.841464   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:58.876982   62139 cri.go:89] found id: ""
	I0416 01:02:58.877022   62139 logs.go:276] 0 containers: []
	W0416 01:02:58.877032   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:58.877040   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:58.877108   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:58.915563   62139 cri.go:89] found id: ""
	I0416 01:02:58.915596   62139 logs.go:276] 0 containers: []
	W0416 01:02:58.915607   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:58.915614   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:58.915683   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:58.951268   62139 cri.go:89] found id: ""
	I0416 01:02:58.951303   62139 logs.go:276] 0 containers: []
	W0416 01:02:58.951313   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:58.951324   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:58.951341   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:59.004673   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:59.004710   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:59.019393   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:59.019423   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:59.091587   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:59.091612   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:59.091632   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:59.169623   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:59.169655   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:01.710597   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:01.724394   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:01.724463   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:01.761577   62139 cri.go:89] found id: ""
	I0416 01:03:01.761605   62139 logs.go:276] 0 containers: []
	W0416 01:03:01.761616   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:01.761624   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:01.761684   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:01.797467   62139 cri.go:89] found id: ""
	I0416 01:03:01.797498   62139 logs.go:276] 0 containers: []
	W0416 01:03:01.797508   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:01.797515   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:01.797582   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:01.839910   62139 cri.go:89] found id: ""
	I0416 01:03:01.839940   62139 logs.go:276] 0 containers: []
	W0416 01:03:01.839950   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:01.839958   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:01.840019   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:01.879572   62139 cri.go:89] found id: ""
	I0416 01:03:01.879599   62139 logs.go:276] 0 containers: []
	W0416 01:03:01.879611   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:01.879617   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:01.879664   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:01.920190   62139 cri.go:89] found id: ""
	I0416 01:03:01.920222   62139 logs.go:276] 0 containers: []
	W0416 01:03:01.920234   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:01.920242   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:01.920300   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:01.957389   62139 cri.go:89] found id: ""
	I0416 01:03:01.957418   62139 logs.go:276] 0 containers: []
	W0416 01:03:01.957428   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:01.957436   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:01.957507   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:01.998730   62139 cri.go:89] found id: ""
	I0416 01:03:01.998754   62139 logs.go:276] 0 containers: []
	W0416 01:03:01.998762   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:01.998767   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:01.998812   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:02.036062   62139 cri.go:89] found id: ""
	I0416 01:03:02.036094   62139 logs.go:276] 0 containers: []
	W0416 01:03:02.036103   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:02.036112   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:02.036125   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:02.089109   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:02.089149   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:57.720792   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:00.219899   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:02.220048   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:57.832016   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:00.328867   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:02.330238   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:58.620416   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:01.121496   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:02.103312   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:02.103342   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:02.174034   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:02.174056   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:02.174069   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:02.249526   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:02.249555   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:04.795314   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:04.808294   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:04.808367   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:04.848795   62139 cri.go:89] found id: ""
	I0416 01:03:04.848825   62139 logs.go:276] 0 containers: []
	W0416 01:03:04.848849   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:04.848857   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:04.848928   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:04.886442   62139 cri.go:89] found id: ""
	I0416 01:03:04.886477   62139 logs.go:276] 0 containers: []
	W0416 01:03:04.886488   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:04.886502   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:04.886572   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:04.929183   62139 cri.go:89] found id: ""
	I0416 01:03:04.929215   62139 logs.go:276] 0 containers: []
	W0416 01:03:04.929226   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:04.929234   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:04.929297   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:04.965134   62139 cri.go:89] found id: ""
	I0416 01:03:04.965172   62139 logs.go:276] 0 containers: []
	W0416 01:03:04.965184   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:04.965191   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:04.965247   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:05.001346   62139 cri.go:89] found id: ""
	I0416 01:03:05.001373   62139 logs.go:276] 0 containers: []
	W0416 01:03:05.001381   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:05.001387   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:05.001434   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:05.039181   62139 cri.go:89] found id: ""
	I0416 01:03:05.039210   62139 logs.go:276] 0 containers: []
	W0416 01:03:05.039219   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:05.039224   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:05.039289   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:05.073451   62139 cri.go:89] found id: ""
	I0416 01:03:05.073479   62139 logs.go:276] 0 containers: []
	W0416 01:03:05.073487   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:05.073494   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:05.073555   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:05.108466   62139 cri.go:89] found id: ""
	I0416 01:03:05.108495   62139 logs.go:276] 0 containers: []
	W0416 01:03:05.108510   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:05.108520   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:05.108537   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:05.162725   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:05.162765   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:05.178152   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:05.178183   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:05.255122   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:05.255147   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:05.255161   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:05.331274   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:05.331309   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:04.220320   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:06.220475   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:04.331381   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:06.830143   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:03.620275   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:06.121293   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:07.882980   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:07.896311   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:07.896372   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:07.934632   62139 cri.go:89] found id: ""
	I0416 01:03:07.934661   62139 logs.go:276] 0 containers: []
	W0416 01:03:07.934671   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:07.934677   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:07.934745   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:07.971463   62139 cri.go:89] found id: ""
	I0416 01:03:07.971495   62139 logs.go:276] 0 containers: []
	W0416 01:03:07.971511   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:07.971518   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:07.971581   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:08.006808   62139 cri.go:89] found id: ""
	I0416 01:03:08.006839   62139 logs.go:276] 0 containers: []
	W0416 01:03:08.006847   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:08.006852   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:08.006912   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:08.043051   62139 cri.go:89] found id: ""
	I0416 01:03:08.043082   62139 logs.go:276] 0 containers: []
	W0416 01:03:08.043089   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:08.043095   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:08.043155   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:08.078602   62139 cri.go:89] found id: ""
	I0416 01:03:08.078638   62139 logs.go:276] 0 containers: []
	W0416 01:03:08.078647   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:08.078655   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:08.078724   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:08.115264   62139 cri.go:89] found id: ""
	I0416 01:03:08.115293   62139 logs.go:276] 0 containers: []
	W0416 01:03:08.115303   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:08.115311   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:08.115378   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:08.152782   62139 cri.go:89] found id: ""
	I0416 01:03:08.152814   62139 logs.go:276] 0 containers: []
	W0416 01:03:08.152821   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:08.152826   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:08.152875   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:08.193484   62139 cri.go:89] found id: ""
	I0416 01:03:08.193506   62139 logs.go:276] 0 containers: []
	W0416 01:03:08.193513   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:08.193522   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:08.193532   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:08.248796   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:08.248831   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:08.266054   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:08.266083   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:08.343470   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:08.343501   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:08.343515   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:08.430335   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:08.430383   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:10.972540   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:10.986911   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:10.986984   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:11.024905   62139 cri.go:89] found id: ""
	I0416 01:03:11.024939   62139 logs.go:276] 0 containers: []
	W0416 01:03:11.024951   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:11.024958   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:11.025011   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:11.058629   62139 cri.go:89] found id: ""
	I0416 01:03:11.058654   62139 logs.go:276] 0 containers: []
	W0416 01:03:11.058662   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:11.058667   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:11.058721   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:11.093277   62139 cri.go:89] found id: ""
	I0416 01:03:11.093308   62139 logs.go:276] 0 containers: []
	W0416 01:03:11.093317   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:11.093325   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:11.093386   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:11.131883   62139 cri.go:89] found id: ""
	I0416 01:03:11.131912   62139 logs.go:276] 0 containers: []
	W0416 01:03:11.131924   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:11.131934   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:11.132004   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:11.175142   62139 cri.go:89] found id: ""
	I0416 01:03:11.175169   62139 logs.go:276] 0 containers: []
	W0416 01:03:11.175179   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:11.175186   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:11.175236   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:11.209985   62139 cri.go:89] found id: ""
	I0416 01:03:11.210020   62139 logs.go:276] 0 containers: []
	W0416 01:03:11.210031   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:11.210039   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:11.210110   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:11.246086   62139 cri.go:89] found id: ""
	I0416 01:03:11.246119   62139 logs.go:276] 0 containers: []
	W0416 01:03:11.246129   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:11.246137   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:11.246199   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:11.286979   62139 cri.go:89] found id: ""
	I0416 01:03:11.287007   62139 logs.go:276] 0 containers: []
	W0416 01:03:11.287019   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:11.287037   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:11.287051   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:11.364522   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:11.364557   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:11.410343   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:11.410375   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:11.459671   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:11.459703   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:11.476163   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:11.476193   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:11.549544   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:08.220881   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:10.720607   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:09.329882   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:11.330570   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:08.620817   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:11.120789   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:14.050433   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:14.065375   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:14.065431   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:14.105548   62139 cri.go:89] found id: ""
	I0416 01:03:14.105571   62139 logs.go:276] 0 containers: []
	W0416 01:03:14.105579   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:14.105583   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:14.105644   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:14.146891   62139 cri.go:89] found id: ""
	I0416 01:03:14.146915   62139 logs.go:276] 0 containers: []
	W0416 01:03:14.146922   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:14.146927   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:14.146972   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:14.183905   62139 cri.go:89] found id: ""
	I0416 01:03:14.183937   62139 logs.go:276] 0 containers: []
	W0416 01:03:14.183948   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:14.183954   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:14.184002   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:14.219878   62139 cri.go:89] found id: ""
	I0416 01:03:14.219905   62139 logs.go:276] 0 containers: []
	W0416 01:03:14.219915   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:14.219922   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:14.219978   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:14.256284   62139 cri.go:89] found id: ""
	I0416 01:03:14.256310   62139 logs.go:276] 0 containers: []
	W0416 01:03:14.256317   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:14.256323   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:14.256381   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:14.295932   62139 cri.go:89] found id: ""
	I0416 01:03:14.295958   62139 logs.go:276] 0 containers: []
	W0416 01:03:14.295966   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:14.295971   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:14.296025   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:14.333202   62139 cri.go:89] found id: ""
	I0416 01:03:14.333226   62139 logs.go:276] 0 containers: []
	W0416 01:03:14.333235   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:14.333242   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:14.333302   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:14.370034   62139 cri.go:89] found id: ""
	I0416 01:03:14.370059   62139 logs.go:276] 0 containers: []
	W0416 01:03:14.370066   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:14.370074   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:14.370092   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:14.424626   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:14.424669   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:14.441842   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:14.441872   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:14.515899   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:14.515926   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:14.515944   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:14.599956   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:14.599991   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:12.720896   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:15.220260   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:13.829944   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:16.328971   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:13.621084   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:16.120767   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:17.157610   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:17.171737   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:17.171800   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:17.214327   62139 cri.go:89] found id: ""
	I0416 01:03:17.214354   62139 logs.go:276] 0 containers: []
	W0416 01:03:17.214364   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:17.214371   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:17.214433   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:17.255896   62139 cri.go:89] found id: ""
	I0416 01:03:17.255924   62139 logs.go:276] 0 containers: []
	W0416 01:03:17.255939   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:17.255946   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:17.256005   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:17.298470   62139 cri.go:89] found id: ""
	I0416 01:03:17.298498   62139 logs.go:276] 0 containers: []
	W0416 01:03:17.298512   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:17.298520   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:17.298580   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:17.338810   62139 cri.go:89] found id: ""
	I0416 01:03:17.338834   62139 logs.go:276] 0 containers: []
	W0416 01:03:17.338842   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:17.338847   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:17.338899   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:17.375980   62139 cri.go:89] found id: ""
	I0416 01:03:17.376012   62139 logs.go:276] 0 containers: []
	W0416 01:03:17.376019   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:17.376024   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:17.376076   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:17.411374   62139 cri.go:89] found id: ""
	I0416 01:03:17.411400   62139 logs.go:276] 0 containers: []
	W0416 01:03:17.411408   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:17.411413   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:17.411463   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:17.452916   62139 cri.go:89] found id: ""
	I0416 01:03:17.452951   62139 logs.go:276] 0 containers: []
	W0416 01:03:17.452962   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:17.452969   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:17.453037   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:17.492459   62139 cri.go:89] found id: ""
	I0416 01:03:17.492489   62139 logs.go:276] 0 containers: []
	W0416 01:03:17.492500   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:17.492512   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:17.492527   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:17.541780   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:17.541814   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:17.558831   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:17.558867   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:17.635332   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:17.635351   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:17.635362   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:17.715778   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:17.715809   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:20.260621   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:20.274721   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:20.274791   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:20.311965   62139 cri.go:89] found id: ""
	I0416 01:03:20.311991   62139 logs.go:276] 0 containers: []
	W0416 01:03:20.312002   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:20.312009   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:20.312069   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:20.350316   62139 cri.go:89] found id: ""
	I0416 01:03:20.350346   62139 logs.go:276] 0 containers: []
	W0416 01:03:20.350356   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:20.350363   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:20.350414   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:20.404666   62139 cri.go:89] found id: ""
	I0416 01:03:20.404692   62139 logs.go:276] 0 containers: []
	W0416 01:03:20.404700   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:20.404705   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:20.404753   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:20.441223   62139 cri.go:89] found id: ""
	I0416 01:03:20.441254   62139 logs.go:276] 0 containers: []
	W0416 01:03:20.441267   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:20.441275   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:20.441340   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:20.480535   62139 cri.go:89] found id: ""
	I0416 01:03:20.480596   62139 logs.go:276] 0 containers: []
	W0416 01:03:20.480606   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:20.480613   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:20.480680   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:20.517520   62139 cri.go:89] found id: ""
	I0416 01:03:20.517543   62139 logs.go:276] 0 containers: []
	W0416 01:03:20.517550   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:20.517556   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:20.517614   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:20.556067   62139 cri.go:89] found id: ""
	I0416 01:03:20.556097   62139 logs.go:276] 0 containers: []
	W0416 01:03:20.556107   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:20.556114   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:20.556177   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:20.594901   62139 cri.go:89] found id: ""
	I0416 01:03:20.594932   62139 logs.go:276] 0 containers: []
	W0416 01:03:20.594939   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:20.594947   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:20.594958   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:20.673759   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:20.673795   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:20.721407   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:20.721443   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:20.772957   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:20.772989   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:20.787902   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:20.787932   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:20.863445   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:17.721415   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:20.221042   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:18.329421   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:20.329949   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:22.330009   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:18.122678   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:20.621127   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:22.621692   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:23.363637   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:23.377916   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:23.377991   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:23.415642   62139 cri.go:89] found id: ""
	I0416 01:03:23.415671   62139 logs.go:276] 0 containers: []
	W0416 01:03:23.415679   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:23.415685   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:23.415732   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:23.452788   62139 cri.go:89] found id: ""
	I0416 01:03:23.452812   62139 logs.go:276] 0 containers: []
	W0416 01:03:23.452819   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:23.452829   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:23.452878   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:23.488758   62139 cri.go:89] found id: ""
	I0416 01:03:23.488785   62139 logs.go:276] 0 containers: []
	W0416 01:03:23.488794   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:23.488801   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:23.488862   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:23.526542   62139 cri.go:89] found id: ""
	I0416 01:03:23.526574   62139 logs.go:276] 0 containers: []
	W0416 01:03:23.526584   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:23.526592   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:23.526661   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:23.562481   62139 cri.go:89] found id: ""
	I0416 01:03:23.562505   62139 logs.go:276] 0 containers: []
	W0416 01:03:23.562512   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:23.562518   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:23.562579   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:23.599119   62139 cri.go:89] found id: ""
	I0416 01:03:23.599145   62139 logs.go:276] 0 containers: []
	W0416 01:03:23.599155   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:23.599162   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:23.599241   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:23.642445   62139 cri.go:89] found id: ""
	I0416 01:03:23.642474   62139 logs.go:276] 0 containers: []
	W0416 01:03:23.642485   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:23.642492   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:23.642557   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:23.678091   62139 cri.go:89] found id: ""
	I0416 01:03:23.678113   62139 logs.go:276] 0 containers: []
	W0416 01:03:23.678121   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:23.678129   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:23.678140   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:23.731668   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:23.731703   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:23.746413   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:23.746444   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:23.821885   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:23.821908   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:23.821923   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:23.901836   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:23.901872   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:26.444935   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:26.459240   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:26.459308   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:26.499208   62139 cri.go:89] found id: ""
	I0416 01:03:26.499237   62139 logs.go:276] 0 containers: []
	W0416 01:03:26.499249   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:26.499256   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:26.499318   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:26.536220   62139 cri.go:89] found id: ""
	I0416 01:03:26.536258   62139 logs.go:276] 0 containers: []
	W0416 01:03:26.536270   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:26.536277   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:26.536342   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:26.576217   62139 cri.go:89] found id: ""
	I0416 01:03:26.576241   62139 logs.go:276] 0 containers: []
	W0416 01:03:26.576249   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:26.576254   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:26.576314   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:26.612343   62139 cri.go:89] found id: ""
	I0416 01:03:26.612369   62139 logs.go:276] 0 containers: []
	W0416 01:03:26.612378   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:26.612385   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:26.612448   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:26.651323   62139 cri.go:89] found id: ""
	I0416 01:03:26.651353   62139 logs.go:276] 0 containers: []
	W0416 01:03:26.651365   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:26.651384   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:26.651453   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:26.688844   62139 cri.go:89] found id: ""
	I0416 01:03:26.688874   62139 logs.go:276] 0 containers: []
	W0416 01:03:26.688885   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:26.688891   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:26.688969   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:26.724362   62139 cri.go:89] found id: ""
	I0416 01:03:26.724387   62139 logs.go:276] 0 containers: []
	W0416 01:03:26.724395   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:26.724401   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:26.724455   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:26.767766   62139 cri.go:89] found id: ""
	I0416 01:03:26.767795   62139 logs.go:276] 0 containers: []
	W0416 01:03:26.767806   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:26.767816   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:26.767837   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:26.788269   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:26.788297   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:26.884802   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:26.884822   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:26.884834   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:26.964007   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:26.964044   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:27.003719   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:27.003745   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:22.720420   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:24.720865   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:26.721369   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:24.828766   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:26.830222   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:25.119674   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:27.620689   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:29.563218   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:29.579014   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:29.579078   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:29.620739   62139 cri.go:89] found id: ""
	I0416 01:03:29.620769   62139 logs.go:276] 0 containers: []
	W0416 01:03:29.620780   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:29.620787   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:29.620850   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:29.658165   62139 cri.go:89] found id: ""
	I0416 01:03:29.658192   62139 logs.go:276] 0 containers: []
	W0416 01:03:29.658199   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:29.658205   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:29.658252   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:29.693893   62139 cri.go:89] found id: ""
	I0416 01:03:29.693921   62139 logs.go:276] 0 containers: []
	W0416 01:03:29.693929   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:29.693935   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:29.693985   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:29.737808   62139 cri.go:89] found id: ""
	I0416 01:03:29.737836   62139 logs.go:276] 0 containers: []
	W0416 01:03:29.737846   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:29.737851   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:29.737910   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:29.777382   62139 cri.go:89] found id: ""
	I0416 01:03:29.777408   62139 logs.go:276] 0 containers: []
	W0416 01:03:29.777416   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:29.777422   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:29.777473   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:29.815633   62139 cri.go:89] found id: ""
	I0416 01:03:29.815659   62139 logs.go:276] 0 containers: []
	W0416 01:03:29.815668   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:29.815682   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:29.815743   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:29.858790   62139 cri.go:89] found id: ""
	I0416 01:03:29.858820   62139 logs.go:276] 0 containers: []
	W0416 01:03:29.858831   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:29.858839   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:29.858899   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:29.897085   62139 cri.go:89] found id: ""
	I0416 01:03:29.897120   62139 logs.go:276] 0 containers: []
	W0416 01:03:29.897131   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:29.897142   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:29.897169   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:29.951231   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:29.951266   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:29.965539   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:29.965565   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:30.045138   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:30.045170   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:30.045186   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:30.120575   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:30.120606   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:29.220073   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:31.221145   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:29.328625   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:31.329903   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:29.621401   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:32.120604   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:32.662210   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:32.675833   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:32.675903   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:32.712104   62139 cri.go:89] found id: ""
	I0416 01:03:32.712129   62139 logs.go:276] 0 containers: []
	W0416 01:03:32.712136   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:32.712141   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:32.712198   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:32.749617   62139 cri.go:89] found id: ""
	I0416 01:03:32.749644   62139 logs.go:276] 0 containers: []
	W0416 01:03:32.749652   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:32.749658   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:32.749723   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:32.785069   62139 cri.go:89] found id: ""
	I0416 01:03:32.785100   62139 logs.go:276] 0 containers: []
	W0416 01:03:32.785110   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:32.785116   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:32.785191   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:32.825871   62139 cri.go:89] found id: ""
	I0416 01:03:32.825912   62139 logs.go:276] 0 containers: []
	W0416 01:03:32.825922   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:32.825928   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:32.826008   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:32.868294   62139 cri.go:89] found id: ""
	I0416 01:03:32.868321   62139 logs.go:276] 0 containers: []
	W0416 01:03:32.868328   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:32.868334   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:32.868401   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:32.907764   62139 cri.go:89] found id: ""
	I0416 01:03:32.907789   62139 logs.go:276] 0 containers: []
	W0416 01:03:32.907796   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:32.907802   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:32.907870   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:32.946112   62139 cri.go:89] found id: ""
	I0416 01:03:32.946137   62139 logs.go:276] 0 containers: []
	W0416 01:03:32.946144   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:32.946155   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:32.946215   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:32.985343   62139 cri.go:89] found id: ""
	I0416 01:03:32.985374   62139 logs.go:276] 0 containers: []
	W0416 01:03:32.985385   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:32.985395   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:32.985415   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:33.063117   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:33.063154   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:33.113739   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:33.113773   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:33.163466   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:33.163508   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:33.178368   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:33.178397   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:33.259509   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:35.760004   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:35.774161   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:35.774237   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:35.812551   62139 cri.go:89] found id: ""
	I0416 01:03:35.812580   62139 logs.go:276] 0 containers: []
	W0416 01:03:35.812589   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:35.812594   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:35.812642   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:35.853134   62139 cri.go:89] found id: ""
	I0416 01:03:35.853177   62139 logs.go:276] 0 containers: []
	W0416 01:03:35.853187   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:35.853195   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:35.853255   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:35.894210   62139 cri.go:89] found id: ""
	I0416 01:03:35.894246   62139 logs.go:276] 0 containers: []
	W0416 01:03:35.894254   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:35.894259   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:35.894330   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:35.928986   62139 cri.go:89] found id: ""
	I0416 01:03:35.929010   62139 logs.go:276] 0 containers: []
	W0416 01:03:35.929019   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:35.929027   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:35.929090   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:35.970688   62139 cri.go:89] found id: ""
	I0416 01:03:35.970712   62139 logs.go:276] 0 containers: []
	W0416 01:03:35.970719   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:35.970725   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:35.970783   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:36.005744   62139 cri.go:89] found id: ""
	I0416 01:03:36.005771   62139 logs.go:276] 0 containers: []
	W0416 01:03:36.005778   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:36.005783   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:36.005829   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:36.044932   62139 cri.go:89] found id: ""
	I0416 01:03:36.044966   62139 logs.go:276] 0 containers: []
	W0416 01:03:36.044977   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:36.044984   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:36.045051   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:36.080488   62139 cri.go:89] found id: ""
	I0416 01:03:36.080516   62139 logs.go:276] 0 containers: []
	W0416 01:03:36.080527   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:36.080538   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:36.080552   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:36.132956   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:36.133000   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:36.147070   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:36.147097   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:36.226640   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:36.226670   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:36.226684   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:36.307205   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:36.307249   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:33.221952   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:35.720745   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:33.828768   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:35.830452   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:34.120695   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:36.619511   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:38.849685   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:38.863817   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:38.863897   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:38.902418   62139 cri.go:89] found id: ""
	I0416 01:03:38.902445   62139 logs.go:276] 0 containers: []
	W0416 01:03:38.902455   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:38.902462   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:38.902533   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:38.937811   62139 cri.go:89] found id: ""
	I0416 01:03:38.937838   62139 logs.go:276] 0 containers: []
	W0416 01:03:38.937845   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:38.937850   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:38.937900   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:38.972380   62139 cri.go:89] found id: ""
	I0416 01:03:38.972403   62139 logs.go:276] 0 containers: []
	W0416 01:03:38.972411   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:38.972416   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:38.972466   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:39.007572   62139 cri.go:89] found id: ""
	I0416 01:03:39.007595   62139 logs.go:276] 0 containers: []
	W0416 01:03:39.007603   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:39.007608   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:39.007651   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:39.049355   62139 cri.go:89] found id: ""
	I0416 01:03:39.049382   62139 logs.go:276] 0 containers: []
	W0416 01:03:39.049391   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:39.049398   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:39.049459   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:39.084535   62139 cri.go:89] found id: ""
	I0416 01:03:39.084565   62139 logs.go:276] 0 containers: []
	W0416 01:03:39.084574   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:39.084581   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:39.084645   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:39.125027   62139 cri.go:89] found id: ""
	I0416 01:03:39.125055   62139 logs.go:276] 0 containers: []
	W0416 01:03:39.125073   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:39.125080   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:39.125136   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:39.164506   62139 cri.go:89] found id: ""
	I0416 01:03:39.164537   62139 logs.go:276] 0 containers: []
	W0416 01:03:39.164547   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:39.164557   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:39.164573   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:39.203447   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:39.203483   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:39.259087   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:39.259122   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:39.273611   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:39.273637   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:39.352372   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:39.352392   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:39.352407   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:41.938575   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:41.952937   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:41.953019   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:41.990771   62139 cri.go:89] found id: ""
	I0416 01:03:41.990802   62139 logs.go:276] 0 containers: []
	W0416 01:03:41.990811   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:41.990819   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:41.990881   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:42.027338   62139 cri.go:89] found id: ""
	I0416 01:03:42.027367   62139 logs.go:276] 0 containers: []
	W0416 01:03:42.027374   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:42.027379   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:42.027431   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:42.068348   62139 cri.go:89] found id: ""
	I0416 01:03:42.068377   62139 logs.go:276] 0 containers: []
	W0416 01:03:42.068387   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:42.068394   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:42.068457   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:38.220198   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:40.220481   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:42.221383   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:38.330729   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:40.831615   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:38.620021   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:40.620641   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:42.620702   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:42.108157   62139 cri.go:89] found id: ""
	I0416 01:03:42.108181   62139 logs.go:276] 0 containers: []
	W0416 01:03:42.108187   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:42.108193   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:42.108244   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:42.149749   62139 cri.go:89] found id: ""
	I0416 01:03:42.149770   62139 logs.go:276] 0 containers: []
	W0416 01:03:42.149777   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:42.149784   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:42.149848   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:42.185322   62139 cri.go:89] found id: ""
	I0416 01:03:42.185349   62139 logs.go:276] 0 containers: []
	W0416 01:03:42.185360   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:42.185368   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:42.185435   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:42.224334   62139 cri.go:89] found id: ""
	I0416 01:03:42.224359   62139 logs.go:276] 0 containers: []
	W0416 01:03:42.224370   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:42.224376   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:42.224435   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:42.263466   62139 cri.go:89] found id: ""
	I0416 01:03:42.263494   62139 logs.go:276] 0 containers: []
	W0416 01:03:42.263502   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:42.263509   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:42.263522   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:42.315106   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:42.315139   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:42.329394   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:42.329425   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:42.405267   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:42.405305   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:42.405321   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:42.486126   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:42.486168   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:45.027718   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:45.042387   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:45.042453   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:45.080790   62139 cri.go:89] found id: ""
	I0416 01:03:45.080814   62139 logs.go:276] 0 containers: []
	W0416 01:03:45.080823   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:45.080829   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:45.080875   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:45.121278   62139 cri.go:89] found id: ""
	I0416 01:03:45.121306   62139 logs.go:276] 0 containers: []
	W0416 01:03:45.121317   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:45.121324   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:45.121383   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:45.158076   62139 cri.go:89] found id: ""
	I0416 01:03:45.158099   62139 logs.go:276] 0 containers: []
	W0416 01:03:45.158107   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:45.158116   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:45.158162   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:45.195577   62139 cri.go:89] found id: ""
	I0416 01:03:45.195608   62139 logs.go:276] 0 containers: []
	W0416 01:03:45.195619   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:45.195627   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:45.195685   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:45.239230   62139 cri.go:89] found id: ""
	I0416 01:03:45.239257   62139 logs.go:276] 0 containers: []
	W0416 01:03:45.239267   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:45.239275   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:45.239326   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:45.279193   62139 cri.go:89] found id: ""
	I0416 01:03:45.279220   62139 logs.go:276] 0 containers: []
	W0416 01:03:45.279227   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:45.279232   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:45.279280   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:45.314876   62139 cri.go:89] found id: ""
	I0416 01:03:45.314908   62139 logs.go:276] 0 containers: []
	W0416 01:03:45.314916   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:45.314922   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:45.314970   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:45.351699   62139 cri.go:89] found id: ""
	I0416 01:03:45.351723   62139 logs.go:276] 0 containers: []
	W0416 01:03:45.351730   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:45.351738   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:45.351750   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:45.392681   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:45.392708   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:45.446564   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:45.446605   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:45.460541   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:45.460564   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:45.535287   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:45.535319   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:45.535334   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:44.720088   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:46.721511   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:43.329413   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:45.330644   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:45.123357   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:47.621806   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:48.117476   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:48.133341   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:48.133402   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:48.171230   62139 cri.go:89] found id: ""
	I0416 01:03:48.171263   62139 logs.go:276] 0 containers: []
	W0416 01:03:48.171273   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:48.171280   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:48.171337   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:48.206188   62139 cri.go:89] found id: ""
	I0416 01:03:48.206218   62139 logs.go:276] 0 containers: []
	W0416 01:03:48.206229   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:48.206236   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:48.206294   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:48.242349   62139 cri.go:89] found id: ""
	I0416 01:03:48.242377   62139 logs.go:276] 0 containers: []
	W0416 01:03:48.242384   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:48.242389   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:48.242437   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:48.278324   62139 cri.go:89] found id: ""
	I0416 01:03:48.278347   62139 logs.go:276] 0 containers: []
	W0416 01:03:48.278355   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:48.278360   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:48.278406   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:48.315727   62139 cri.go:89] found id: ""
	I0416 01:03:48.315753   62139 logs.go:276] 0 containers: []
	W0416 01:03:48.315763   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:48.315770   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:48.315828   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:48.354146   62139 cri.go:89] found id: ""
	I0416 01:03:48.354169   62139 logs.go:276] 0 containers: []
	W0416 01:03:48.354176   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:48.354182   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:48.354242   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:48.393951   62139 cri.go:89] found id: ""
	I0416 01:03:48.393989   62139 logs.go:276] 0 containers: []
	W0416 01:03:48.394000   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:48.394007   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:48.394081   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:48.431849   62139 cri.go:89] found id: ""
	I0416 01:03:48.431887   62139 logs.go:276] 0 containers: []
	W0416 01:03:48.431895   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:48.431903   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:48.431917   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:48.446210   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:48.446242   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:48.517459   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:48.517485   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:48.517500   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:48.596320   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:48.596356   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:48.639700   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:48.639733   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:51.197396   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:51.211803   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:51.211889   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:51.250768   62139 cri.go:89] found id: ""
	I0416 01:03:51.250793   62139 logs.go:276] 0 containers: []
	W0416 01:03:51.250802   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:51.250810   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:51.250872   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:51.291389   62139 cri.go:89] found id: ""
	I0416 01:03:51.291415   62139 logs.go:276] 0 containers: []
	W0416 01:03:51.291421   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:51.291429   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:51.291478   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:51.332466   62139 cri.go:89] found id: ""
	I0416 01:03:51.332490   62139 logs.go:276] 0 containers: []
	W0416 01:03:51.332499   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:51.332504   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:51.332549   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:51.367731   62139 cri.go:89] found id: ""
	I0416 01:03:51.367759   62139 logs.go:276] 0 containers: []
	W0416 01:03:51.367767   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:51.367773   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:51.367829   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:51.400567   62139 cri.go:89] found id: ""
	I0416 01:03:51.400599   62139 logs.go:276] 0 containers: []
	W0416 01:03:51.400609   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:51.400616   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:51.400679   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:51.433561   62139 cri.go:89] found id: ""
	I0416 01:03:51.433590   62139 logs.go:276] 0 containers: []
	W0416 01:03:51.433598   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:51.433608   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:51.433666   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:51.469136   62139 cri.go:89] found id: ""
	I0416 01:03:51.469179   62139 logs.go:276] 0 containers: []
	W0416 01:03:51.469189   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:51.469196   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:51.469255   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:51.504410   62139 cri.go:89] found id: ""
	I0416 01:03:51.504442   62139 logs.go:276] 0 containers: []
	W0416 01:03:51.504452   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:51.504462   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:51.504480   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:51.557420   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:51.557449   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:51.571481   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:51.571506   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:51.648722   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:51.648744   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:51.648755   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:51.728945   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:51.728978   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:49.221614   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:51.721798   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:47.829985   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:50.329419   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:52.329909   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:49.622776   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:52.120080   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:54.272503   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:54.286573   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:54.286646   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:54.321084   62139 cri.go:89] found id: ""
	I0416 01:03:54.321115   62139 logs.go:276] 0 containers: []
	W0416 01:03:54.321125   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:54.321133   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:54.321208   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:54.366333   62139 cri.go:89] found id: ""
	I0416 01:03:54.366364   62139 logs.go:276] 0 containers: []
	W0416 01:03:54.366374   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:54.366380   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:54.366437   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:54.406267   62139 cri.go:89] found id: ""
	I0416 01:03:54.406317   62139 logs.go:276] 0 containers: []
	W0416 01:03:54.406328   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:54.406336   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:54.406405   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:54.446853   62139 cri.go:89] found id: ""
	I0416 01:03:54.446883   62139 logs.go:276] 0 containers: []
	W0416 01:03:54.446894   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:54.446901   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:54.446956   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:54.487658   62139 cri.go:89] found id: ""
	I0416 01:03:54.487683   62139 logs.go:276] 0 containers: []
	W0416 01:03:54.487690   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:54.487696   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:54.487753   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:54.530189   62139 cri.go:89] found id: ""
	I0416 01:03:54.530216   62139 logs.go:276] 0 containers: []
	W0416 01:03:54.530226   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:54.530232   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:54.530289   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:54.571317   62139 cri.go:89] found id: ""
	I0416 01:03:54.571341   62139 logs.go:276] 0 containers: []
	W0416 01:03:54.571349   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:54.571354   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:54.571416   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:54.612432   62139 cri.go:89] found id: ""
	I0416 01:03:54.612458   62139 logs.go:276] 0 containers: []
	W0416 01:03:54.612467   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:54.612478   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:54.612493   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:54.666599   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:54.666629   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:54.680880   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:54.680915   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:54.757365   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:54.757386   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:54.757398   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:54.834436   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:54.834468   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:54.219690   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:56.220753   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:54.332950   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:56.830167   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:54.621002   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:56.622452   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:57.405516   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:57.420694   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:57.420773   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:57.460338   62139 cri.go:89] found id: ""
	I0416 01:03:57.460367   62139 logs.go:276] 0 containers: []
	W0416 01:03:57.460374   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:57.460381   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:57.460442   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:57.498121   62139 cri.go:89] found id: ""
	I0416 01:03:57.498150   62139 logs.go:276] 0 containers: []
	W0416 01:03:57.498160   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:57.498167   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:57.498228   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:57.536959   62139 cri.go:89] found id: ""
	I0416 01:03:57.536989   62139 logs.go:276] 0 containers: []
	W0416 01:03:57.537005   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:57.537014   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:57.537077   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:57.575633   62139 cri.go:89] found id: ""
	I0416 01:03:57.575662   62139 logs.go:276] 0 containers: []
	W0416 01:03:57.575673   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:57.575680   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:57.575743   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:57.614459   62139 cri.go:89] found id: ""
	I0416 01:03:57.614491   62139 logs.go:276] 0 containers: []
	W0416 01:03:57.614501   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:57.614509   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:57.614568   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:57.657078   62139 cri.go:89] found id: ""
	I0416 01:03:57.657109   62139 logs.go:276] 0 containers: []
	W0416 01:03:57.657120   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:57.657127   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:57.657204   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:57.693882   62139 cri.go:89] found id: ""
	I0416 01:03:57.693904   62139 logs.go:276] 0 containers: []
	W0416 01:03:57.693911   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:57.693922   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:57.693969   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:57.731283   62139 cri.go:89] found id: ""
	I0416 01:03:57.731312   62139 logs.go:276] 0 containers: []
	W0416 01:03:57.731320   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:57.731327   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:57.731338   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:57.782618   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:57.782656   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:57.796763   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:57.796794   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:57.869629   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:57.869652   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:57.869665   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:57.948859   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:57.948892   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:04:00.487682   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:04:00.501095   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:04:00.501182   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:04:00.537902   62139 cri.go:89] found id: ""
	I0416 01:04:00.537931   62139 logs.go:276] 0 containers: []
	W0416 01:04:00.537939   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:04:00.537945   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:04:00.537994   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:04:00.574164   62139 cri.go:89] found id: ""
	I0416 01:04:00.574203   62139 logs.go:276] 0 containers: []
	W0416 01:04:00.574214   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:04:00.574222   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:04:00.574287   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:04:00.629592   62139 cri.go:89] found id: ""
	I0416 01:04:00.629615   62139 logs.go:276] 0 containers: []
	W0416 01:04:00.629622   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:04:00.629627   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:04:00.629679   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:04:00.672102   62139 cri.go:89] found id: ""
	I0416 01:04:00.672127   62139 logs.go:276] 0 containers: []
	W0416 01:04:00.672134   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:04:00.672141   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:04:00.672201   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:04:00.715040   62139 cri.go:89] found id: ""
	I0416 01:04:00.715064   62139 logs.go:276] 0 containers: []
	W0416 01:04:00.715072   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:04:00.715078   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:04:00.715139   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:04:00.751113   62139 cri.go:89] found id: ""
	I0416 01:04:00.751137   62139 logs.go:276] 0 containers: []
	W0416 01:04:00.751146   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:04:00.751152   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:04:00.751204   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:04:00.787613   62139 cri.go:89] found id: ""
	I0416 01:04:00.787644   62139 logs.go:276] 0 containers: []
	W0416 01:04:00.787653   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:04:00.787660   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:04:00.787721   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:04:00.824244   62139 cri.go:89] found id: ""
	I0416 01:04:00.824271   62139 logs.go:276] 0 containers: []
	W0416 01:04:00.824280   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:04:00.824291   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:04:00.824304   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:04:00.899977   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:04:00.900014   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:04:00.900029   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:04:00.982317   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:04:00.982350   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:04:01.026354   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:04:01.026393   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:04:01.080393   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:04:01.080441   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:58.720894   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:00.720961   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:59.329460   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:01.330171   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:59.119259   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:01.619026   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:03.595966   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:04:03.609190   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:04:03.609253   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:04:03.647151   62139 cri.go:89] found id: ""
	I0416 01:04:03.647183   62139 logs.go:276] 0 containers: []
	W0416 01:04:03.647197   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:04:03.647203   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:04:03.647250   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:04:03.685211   62139 cri.go:89] found id: ""
	I0416 01:04:03.685239   62139 logs.go:276] 0 containers: []
	W0416 01:04:03.685248   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:04:03.685254   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:04:03.685303   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:04:03.720928   62139 cri.go:89] found id: ""
	I0416 01:04:03.720949   62139 logs.go:276] 0 containers: []
	W0416 01:04:03.720956   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:04:03.720961   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:04:03.721035   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:04:03.759179   62139 cri.go:89] found id: ""
	I0416 01:04:03.759210   62139 logs.go:276] 0 containers: []
	W0416 01:04:03.759220   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:04:03.759228   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:04:03.759290   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:04:03.795670   62139 cri.go:89] found id: ""
	I0416 01:04:03.795700   62139 logs.go:276] 0 containers: []
	W0416 01:04:03.795710   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:04:03.795717   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:04:03.795785   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:04:03.832944   62139 cri.go:89] found id: ""
	I0416 01:04:03.832971   62139 logs.go:276] 0 containers: []
	W0416 01:04:03.832980   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:04:03.832988   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:04:03.833053   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:04:03.869211   62139 cri.go:89] found id: ""
	I0416 01:04:03.869238   62139 logs.go:276] 0 containers: []
	W0416 01:04:03.869248   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:04:03.869256   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:04:03.869317   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:04:03.905859   62139 cri.go:89] found id: ""
	I0416 01:04:03.905888   62139 logs.go:276] 0 containers: []
	W0416 01:04:03.905896   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:04:03.905904   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:04:03.905915   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:04:03.957057   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:04:03.957088   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:04:03.972309   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:04:03.972344   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:04:04.049927   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:04:04.049950   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:04:04.049965   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:04:04.136395   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:04:04.136435   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:04:06.676667   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:04:06.690062   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:04:06.690125   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:04:06.733734   62139 cri.go:89] found id: ""
	I0416 01:04:06.733758   62139 logs.go:276] 0 containers: []
	W0416 01:04:06.733773   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:04:06.733782   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:04:06.733835   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:04:06.773112   62139 cri.go:89] found id: ""
	I0416 01:04:06.773140   62139 logs.go:276] 0 containers: []
	W0416 01:04:06.773147   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:04:06.773152   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:04:06.773231   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:04:06.812786   62139 cri.go:89] found id: ""
	I0416 01:04:06.812809   62139 logs.go:276] 0 containers: []
	W0416 01:04:06.812817   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:04:06.812822   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:04:06.812870   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:04:06.853995   62139 cri.go:89] found id: ""
	I0416 01:04:06.854022   62139 logs.go:276] 0 containers: []
	W0416 01:04:06.854029   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:04:06.854034   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:04:06.854088   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:04:06.893809   62139 cri.go:89] found id: ""
	I0416 01:04:06.893841   62139 logs.go:276] 0 containers: []
	W0416 01:04:06.893848   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:04:06.893853   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:04:06.893909   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:04:06.929389   62139 cri.go:89] found id: ""
	I0416 01:04:06.929419   62139 logs.go:276] 0 containers: []
	W0416 01:04:06.929430   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:04:06.929437   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:04:06.929518   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:04:06.968278   62139 cri.go:89] found id: ""
	I0416 01:04:06.968303   62139 logs.go:276] 0 containers: []
	W0416 01:04:06.968311   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:04:06.968316   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:04:06.968364   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:04:07.018932   62139 cri.go:89] found id: ""
	I0416 01:04:07.018965   62139 logs.go:276] 0 containers: []
	W0416 01:04:07.018976   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:04:07.018989   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:04:07.019003   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:04:07.083611   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:04:07.083645   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:04:03.220314   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:05.720941   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:03.830050   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:06.329416   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:03.619482   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:05.620393   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:07.110126   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:04:07.110152   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:04:07.186262   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:04:07.186290   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:04:07.186305   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:04:07.263139   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:04:07.263170   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:04:09.807489   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:04:09.822045   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:04:09.822110   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:04:09.867444   62139 cri.go:89] found id: ""
	I0416 01:04:09.867469   62139 logs.go:276] 0 containers: []
	W0416 01:04:09.867480   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:04:09.867487   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:04:09.867538   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:04:09.904280   62139 cri.go:89] found id: ""
	I0416 01:04:09.904312   62139 logs.go:276] 0 containers: []
	W0416 01:04:09.904323   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:04:09.904330   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:04:09.904389   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:04:09.941066   62139 cri.go:89] found id: ""
	I0416 01:04:09.941091   62139 logs.go:276] 0 containers: []
	W0416 01:04:09.941099   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:04:09.941107   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:04:09.941189   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:04:09.975739   62139 cri.go:89] found id: ""
	I0416 01:04:09.975767   62139 logs.go:276] 0 containers: []
	W0416 01:04:09.975777   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:04:09.975785   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:04:09.975844   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:04:10.011414   62139 cri.go:89] found id: ""
	I0416 01:04:10.011444   62139 logs.go:276] 0 containers: []
	W0416 01:04:10.011454   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:04:10.011461   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:04:10.011528   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:04:10.045670   62139 cri.go:89] found id: ""
	I0416 01:04:10.045695   62139 logs.go:276] 0 containers: []
	W0416 01:04:10.045704   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:04:10.045711   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:04:10.045777   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:04:10.082320   62139 cri.go:89] found id: ""
	I0416 01:04:10.082352   62139 logs.go:276] 0 containers: []
	W0416 01:04:10.082361   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:04:10.082368   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:04:10.082428   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:04:10.120453   62139 cri.go:89] found id: ""
	I0416 01:04:10.120482   62139 logs.go:276] 0 containers: []
	W0416 01:04:10.120492   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:04:10.120501   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:04:10.120515   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:04:10.200213   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:04:10.200251   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:04:10.251709   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:04:10.251742   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:04:10.307348   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:04:10.307382   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:04:10.321293   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:04:10.321319   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:04:10.401361   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:04:08.220488   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:10.221408   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:08.331985   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:10.829244   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:08.119800   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:10.121093   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:12.126420   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:12.901763   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:04:12.916308   62139 kubeadm.go:591] duration metric: took 4m4.703830076s to restartPrimaryControlPlane
	W0416 01:04:12.916384   62139 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0416 01:04:12.916416   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0416 01:04:12.720462   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:14.721516   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:17.220364   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:12.830409   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:15.330184   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:14.620714   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:16.622203   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:17.897436   62139 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.980993606s)
	I0416 01:04:17.897592   62139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 01:04:17.914655   62139 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 01:04:17.927482   62139 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 01:04:17.940210   62139 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 01:04:17.940233   62139 kubeadm.go:156] found existing configuration files:
	
	I0416 01:04:17.940274   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 01:04:17.951037   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 01:04:17.951106   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 01:04:17.962341   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 01:04:17.972436   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 01:04:17.972500   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 01:04:17.983198   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 01:04:17.992856   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 01:04:17.992912   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 01:04:18.003122   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 01:04:18.014064   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 01:04:18.014117   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 01:04:18.024854   62139 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 01:04:18.101381   62139 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0416 01:04:18.101436   62139 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 01:04:18.246529   62139 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 01:04:18.246687   62139 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 01:04:18.246802   62139 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 01:04:18.456847   62139 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 01:04:18.458980   62139 out.go:204]   - Generating certificates and keys ...
	I0416 01:04:18.459096   62139 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 01:04:18.459190   62139 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 01:04:18.459294   62139 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0416 01:04:18.459381   62139 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0416 01:04:18.459473   62139 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0416 01:04:18.459548   62139 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0416 01:04:18.459631   62139 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0416 01:04:18.459721   62139 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0416 01:04:18.459822   62139 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0416 01:04:18.460281   62139 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0416 01:04:18.460387   62139 kubeadm.go:309] [certs] Using the existing "sa" key
	I0416 01:04:18.460475   62139 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 01:04:18.564910   62139 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 01:04:18.806406   62139 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 01:04:18.890124   62139 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 01:04:19.046415   62139 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 01:04:19.063159   62139 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 01:04:19.063301   62139 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 01:04:19.063415   62139 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 01:04:19.229066   62139 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 01:04:19.231110   62139 out.go:204]   - Booting up control plane ...
	I0416 01:04:19.231246   62139 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 01:04:19.248833   62139 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 01:04:19.250340   62139 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 01:04:19.251664   62139 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 01:04:19.254678   62139 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 01:04:19.221976   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:21.720239   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:17.830011   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:18.323271   61500 pod_ready.go:81] duration metric: took 4m0.000449424s for pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace to be "Ready" ...
	E0416 01:04:18.323300   61500 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace to be "Ready" (will not retry!)
	I0416 01:04:18.323318   61500 pod_ready.go:38] duration metric: took 4m9.009725319s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:04:18.323357   61500 kubeadm.go:591] duration metric: took 4m19.656264138s to restartPrimaryControlPlane
	W0416 01:04:18.323420   61500 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0416 01:04:18.323449   61500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0416 01:04:19.122802   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:21.621389   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:24.227649   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:26.720896   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:24.119577   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:26.620166   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:29.219937   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:31.220697   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:28.622399   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:31.119279   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:33.221240   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:35.221536   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:33.124909   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:35.620718   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:37.720528   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:40.220531   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:38.120415   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:40.121126   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:42.620161   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:42.719946   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:44.720203   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:47.219782   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:44.620806   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:47.119479   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:47.613243   62747 pod_ready.go:81] duration metric: took 4m0.000098534s for pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace to be "Ready" ...
	E0416 01:04:47.613279   62747 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace to be "Ready" (will not retry!)
	I0416 01:04:47.613297   62747 pod_ready.go:38] duration metric: took 4m12.544704519s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:04:47.613327   62747 kubeadm.go:591] duration metric: took 4m20.76891948s to restartPrimaryControlPlane
	W0416 01:04:47.613387   62747 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0416 01:04:47.613410   62747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0416 01:04:50.224993   61500 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.901526458s)
	I0416 01:04:50.225057   61500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 01:04:50.241083   61500 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 01:04:50.252468   61500 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 01:04:50.263721   61500 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 01:04:50.263744   61500 kubeadm.go:156] found existing configuration files:
	
	I0416 01:04:50.263786   61500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 01:04:50.274550   61500 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 01:04:50.274620   61500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 01:04:50.285019   61500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 01:04:50.295079   61500 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 01:04:50.295151   61500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 01:04:50.306424   61500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 01:04:50.317221   61500 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 01:04:50.317286   61500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 01:04:50.327783   61500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 01:04:50.338144   61500 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 01:04:50.338213   61500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 01:04:50.349262   61500 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 01:04:50.410467   61500 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0-rc.2
	I0416 01:04:50.410597   61500 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 01:04:50.565288   61500 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 01:04:50.565442   61500 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 01:04:50.565580   61500 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 01:04:50.783173   61500 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 01:04:50.785219   61500 out.go:204]   - Generating certificates and keys ...
	I0416 01:04:50.785339   61500 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 01:04:50.785427   61500 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 01:04:50.785526   61500 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0416 01:04:50.785620   61500 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0416 01:04:50.785745   61500 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0416 01:04:50.785847   61500 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0416 01:04:50.785951   61500 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0416 01:04:50.786037   61500 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0416 01:04:50.786156   61500 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0416 01:04:50.786279   61500 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0416 01:04:50.786341   61500 kubeadm.go:309] [certs] Using the existing "sa" key
	I0416 01:04:50.786425   61500 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 01:04:50.868738   61500 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 01:04:51.024628   61500 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0416 01:04:51.304801   61500 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 01:04:51.485803   61500 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 01:04:51.614330   61500 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 01:04:51.615043   61500 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 01:04:51.617465   61500 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 01:04:49.720594   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:51.721464   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:51.619398   61500 out.go:204]   - Booting up control plane ...
	I0416 01:04:51.619519   61500 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 01:04:51.619637   61500 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 01:04:51.619717   61500 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 01:04:51.640756   61500 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 01:04:51.643264   61500 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 01:04:51.643617   61500 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 01:04:51.796506   61500 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0416 01:04:51.796640   61500 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0416 01:04:54.220965   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:56.222571   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:52.798698   61500 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002359416s
	I0416 01:04:52.798798   61500 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0416 01:04:57.802689   61500 kubeadm.go:309] [api-check] The API server is healthy after 5.003967397s
	I0416 01:04:57.816580   61500 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0416 01:04:57.840465   61500 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0416 01:04:57.879611   61500 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0416 01:04:57.879906   61500 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-572602 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0416 01:04:57.895211   61500 kubeadm.go:309] [bootstrap-token] Using token: w1qt2t.vu77oqcsegb1grvk
	I0416 01:04:57.896829   61500 out.go:204]   - Configuring RBAC rules ...
	I0416 01:04:57.896958   61500 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0416 01:04:57.905289   61500 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0416 01:04:57.916967   61500 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0416 01:04:57.922660   61500 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0416 01:04:57.926143   61500 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0416 01:04:57.935222   61500 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0416 01:04:58.215180   61500 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0416 01:04:58.656120   61500 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0416 01:04:59.209811   61500 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0416 01:04:59.211274   61500 kubeadm.go:309] 
	I0416 01:04:59.211354   61500 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0416 01:04:59.211390   61500 kubeadm.go:309] 
	I0416 01:04:59.211489   61500 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0416 01:04:59.211512   61500 kubeadm.go:309] 
	I0416 01:04:59.211556   61500 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0416 01:04:59.211626   61500 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0416 01:04:59.211695   61500 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0416 01:04:59.211707   61500 kubeadm.go:309] 
	I0416 01:04:59.211779   61500 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0416 01:04:59.211789   61500 kubeadm.go:309] 
	I0416 01:04:59.211853   61500 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0416 01:04:59.211921   61500 kubeadm.go:309] 
	I0416 01:04:59.212030   61500 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0416 01:04:59.212165   61500 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0416 01:04:59.212269   61500 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0416 01:04:59.212280   61500 kubeadm.go:309] 
	I0416 01:04:59.212407   61500 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0416 01:04:59.212516   61500 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0416 01:04:59.212525   61500 kubeadm.go:309] 
	I0416 01:04:59.212656   61500 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token w1qt2t.vu77oqcsegb1grvk \
	I0416 01:04:59.212835   61500 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b25013486716312e69a135916990864bd549ec06035b8322dd39250241b50fde \
	I0416 01:04:59.212880   61500 kubeadm.go:309] 	--control-plane 
	I0416 01:04:59.212894   61500 kubeadm.go:309] 
	I0416 01:04:59.212996   61500 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0416 01:04:59.213007   61500 kubeadm.go:309] 
	I0416 01:04:59.213111   61500 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token w1qt2t.vu77oqcsegb1grvk \
	I0416 01:04:59.213278   61500 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b25013486716312e69a135916990864bd549ec06035b8322dd39250241b50fde 
	I0416 01:04:59.213435   61500 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 01:04:59.213460   61500 cni.go:84] Creating CNI manager for ""
	I0416 01:04:59.213477   61500 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 01:04:59.215397   61500 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0416 01:04:59.255478   62139 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0416 01:04:59.256524   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:04:59.256807   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:04:58.720339   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:05:01.220968   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:59.216764   61500 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0416 01:04:59.230134   61500 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0416 01:04:59.250739   61500 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0416 01:04:59.250773   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:04:59.250775   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-572602 minikube.k8s.io/updated_at=2024_04_16T01_04_59_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=e98a202b00bb48daab8bbee2f0c7bcf1556f8388 minikube.k8s.io/name=no-preload-572602 minikube.k8s.io/primary=true
	I0416 01:04:59.462907   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:04:59.462915   61500 ops.go:34] apiserver oom_adj: -16
	I0416 01:04:59.962977   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:00.463142   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:00.963871   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:01.463866   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:01.963356   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:02.463729   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:04.257472   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:05:04.257756   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:05:03.720762   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:05:05.721421   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:05:02.963816   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:03.463370   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:03.963655   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:04.463681   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:04.963387   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:05.462926   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:05.963659   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:06.463091   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:06.963504   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:07.463783   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:07.963037   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:08.463212   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:08.963443   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:09.463179   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:09.963188   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:10.463264   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:10.963863   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:11.463051   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:11.591367   61500 kubeadm.go:1107] duration metric: took 12.340665724s to wait for elevateKubeSystemPrivileges
	W0416 01:05:11.591410   61500 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0416 01:05:11.591425   61500 kubeadm.go:393] duration metric: took 5m12.980123227s to StartCluster
	I0416 01:05:11.591451   61500 settings.go:142] acquiring lock: {Name:mk6e42a297b4f7bfb79727f203ae36d752cbb6a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:05:11.591559   61500 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0416 01:05:11.593498   61500 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/kubeconfig: {Name:mkbb3b028de7d57df8335e83f6dfa1b0eacb2fb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:05:11.593838   61500 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.121 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0416 01:05:11.595572   61500 out.go:177] * Verifying Kubernetes components...
	I0416 01:05:11.593961   61500 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0416 01:05:11.594060   61500 config.go:182] Loaded profile config "no-preload-572602": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0416 01:05:11.597038   61500 addons.go:69] Setting default-storageclass=true in profile "no-preload-572602"
	I0416 01:05:11.597047   61500 addons.go:69] Setting metrics-server=true in profile "no-preload-572602"
	I0416 01:05:11.597077   61500 addons.go:234] Setting addon metrics-server=true in "no-preload-572602"
	I0416 01:05:11.597081   61500 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-572602"
	W0416 01:05:11.597084   61500 addons.go:243] addon metrics-server should already be in state true
	I0416 01:05:11.597168   61500 host.go:66] Checking if "no-preload-572602" exists ...
	I0416 01:05:11.597042   61500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 01:05:11.597038   61500 addons.go:69] Setting storage-provisioner=true in profile "no-preload-572602"
	I0416 01:05:11.597274   61500 addons.go:234] Setting addon storage-provisioner=true in "no-preload-572602"
	W0416 01:05:11.597281   61500 addons.go:243] addon storage-provisioner should already be in state true
	I0416 01:05:11.597300   61500 host.go:66] Checking if "no-preload-572602" exists ...
	I0416 01:05:11.597516   61500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:11.597563   61500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:11.597590   61500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:11.597621   61500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:11.597621   61500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:11.597684   61500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:11.617344   61500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46065
	I0416 01:05:11.617833   61500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46345
	I0416 01:05:11.617853   61500 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:11.618040   61500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32847
	I0416 01:05:11.618170   61500 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:11.618385   61500 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:11.618539   61500 main.go:141] libmachine: Using API Version  1
	I0416 01:05:11.618564   61500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:11.618682   61500 main.go:141] libmachine: Using API Version  1
	I0416 01:05:11.618708   61500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:11.618786   61500 main.go:141] libmachine: Using API Version  1
	I0416 01:05:11.618806   61500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:11.619020   61500 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:11.619035   61500 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:11.619145   61500 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:11.619371   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetState
	I0416 01:05:11.619629   61500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:11.619663   61500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:11.619683   61500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:11.619715   61500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:11.622758   61500 addons.go:234] Setting addon default-storageclass=true in "no-preload-572602"
	W0416 01:05:11.622784   61500 addons.go:243] addon default-storageclass should already be in state true
	I0416 01:05:11.622814   61500 host.go:66] Checking if "no-preload-572602" exists ...
	I0416 01:05:11.623148   61500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:11.623182   61500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:11.640851   61500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44015
	I0416 01:05:11.641427   61500 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:11.642008   61500 main.go:141] libmachine: Using API Version  1
	I0416 01:05:11.642028   61500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:11.642429   61500 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:11.642635   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetState
	I0416 01:05:11.643204   61500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41753
	I0416 01:05:11.643239   61500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38953
	I0416 01:05:11.643578   61500 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:11.643673   61500 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:11.644133   61500 main.go:141] libmachine: Using API Version  1
	I0416 01:05:11.644150   61500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:11.644398   61500 main.go:141] libmachine: Using API Version  1
	I0416 01:05:11.644409   61500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:11.644508   61500 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:11.644786   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetState
	I0416 01:05:11.644823   61500 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:11.645630   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 01:05:11.645797   61500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:11.645824   61500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:11.648522   61500 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0416 01:05:11.646649   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 01:05:11.650173   61500 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0416 01:05:11.650185   61500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0416 01:05:11.650206   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 01:05:11.652524   61500 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 01:05:07.721798   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:05:08.214615   61267 pod_ready.go:81] duration metric: took 4m0.001005317s for pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace to be "Ready" ...
	E0416 01:05:08.214650   61267 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace to be "Ready" (will not retry!)
	I0416 01:05:08.214688   61267 pod_ready.go:38] duration metric: took 4m14.521894608s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:05:08.214750   61267 kubeadm.go:591] duration metric: took 4m22.563492336s to restartPrimaryControlPlane
	W0416 01:05:08.214821   61267 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0416 01:05:08.214857   61267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0416 01:05:11.654173   61500 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 01:05:11.654189   61500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0416 01:05:11.654207   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 01:05:11.654021   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 01:05:11.654488   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 01:05:11.654524   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 01:05:11.654823   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 01:05:11.655016   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 01:05:11.655159   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 01:05:11.655331   61500 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/no-preload-572602/id_rsa Username:docker}
	I0416 01:05:11.657706   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 01:05:11.658193   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 01:05:11.658214   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 01:05:11.658388   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 01:05:11.658585   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 01:05:11.658761   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 01:05:11.658937   61500 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/no-preload-572602/id_rsa Username:docker}
	I0416 01:05:11.669485   61500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34717
	I0416 01:05:11.669878   61500 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:11.670340   61500 main.go:141] libmachine: Using API Version  1
	I0416 01:05:11.670352   61500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:11.670714   61500 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:11.670887   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetState
	I0416 01:05:11.672571   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 01:05:11.672888   61500 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0416 01:05:11.672900   61500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0416 01:05:11.672912   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 01:05:11.675816   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 01:05:11.676163   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 01:05:11.676182   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 01:05:11.676335   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 01:05:11.676513   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 01:05:11.676657   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 01:05:11.676799   61500 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/no-preload-572602/id_rsa Username:docker}
	I0416 01:05:11.822229   61500 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 01:05:11.850495   61500 node_ready.go:35] waiting up to 6m0s for node "no-preload-572602" to be "Ready" ...
	I0416 01:05:11.868828   61500 node_ready.go:49] node "no-preload-572602" has status "Ready":"True"
	I0416 01:05:11.868852   61500 node_ready.go:38] duration metric: took 18.327813ms for node "no-preload-572602" to be "Ready" ...
	I0416 01:05:11.868860   61500 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:05:11.877018   61500 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:11.884190   61500 pod_ready.go:92] pod "etcd-no-preload-572602" in "kube-system" namespace has status "Ready":"True"
	I0416 01:05:11.884221   61500 pod_ready.go:81] duration metric: took 7.173699ms for pod "etcd-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:11.884234   61500 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:11.901639   61500 pod_ready.go:92] pod "kube-apiserver-no-preload-572602" in "kube-system" namespace has status "Ready":"True"
	I0416 01:05:11.901672   61500 pod_ready.go:81] duration metric: took 17.430111ms for pod "kube-apiserver-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:11.901684   61500 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:11.911839   61500 pod_ready.go:92] pod "kube-controller-manager-no-preload-572602" in "kube-system" namespace has status "Ready":"True"
	I0416 01:05:11.911871   61500 pod_ready.go:81] duration metric: took 10.178219ms for pod "kube-controller-manager-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:11.911885   61500 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:11.936265   61500 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0416 01:05:11.936293   61500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0416 01:05:11.939406   61500 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0416 01:05:11.942233   61500 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 01:05:11.963094   61500 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0416 01:05:11.963123   61500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0416 01:05:12.027316   61500 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0416 01:05:12.027341   61500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0416 01:05:12.150413   61500 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0416 01:05:12.387284   61500 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:12.387310   61500 main.go:141] libmachine: (no-preload-572602) Calling .Close
	I0416 01:05:12.387640   61500 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:12.387665   61500 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:12.387674   61500 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:12.387682   61500 main.go:141] libmachine: (no-preload-572602) Calling .Close
	I0416 01:05:12.387973   61500 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:12.387991   61500 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:12.395148   61500 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:12.395179   61500 main.go:141] libmachine: (no-preload-572602) Calling .Close
	I0416 01:05:12.395459   61500 main.go:141] libmachine: (no-preload-572602) DBG | Closing plugin on server side
	I0416 01:05:12.395488   61500 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:12.395508   61500 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:12.930331   61500 pod_ready.go:92] pod "kube-scheduler-no-preload-572602" in "kube-system" namespace has status "Ready":"True"
	I0416 01:05:12.930362   61500 pod_ready.go:81] duration metric: took 1.01846846s for pod "kube-scheduler-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:12.930373   61500 pod_ready.go:38] duration metric: took 1.061502471s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:05:12.930390   61500 api_server.go:52] waiting for apiserver process to appear ...
	I0416 01:05:12.930454   61500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:05:12.990840   61500 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.048571147s)
	I0416 01:05:12.990905   61500 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:12.990919   61500 main.go:141] libmachine: (no-preload-572602) Calling .Close
	I0416 01:05:12.991246   61500 main.go:141] libmachine: (no-preload-572602) DBG | Closing plugin on server side
	I0416 01:05:12.991309   61500 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:12.991323   61500 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:12.991380   61500 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:12.991391   61500 main.go:141] libmachine: (no-preload-572602) Calling .Close
	I0416 01:05:12.991617   61500 main.go:141] libmachine: (no-preload-572602) DBG | Closing plugin on server side
	I0416 01:05:12.991669   61500 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:12.991690   61500 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:13.719959   61500 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.569495387s)
	I0416 01:05:13.720018   61500 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:13.720023   61500 api_server.go:72] duration metric: took 2.12614679s to wait for apiserver process to appear ...
	I0416 01:05:13.720046   61500 api_server.go:88] waiting for apiserver healthz status ...
	I0416 01:05:13.720066   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 01:05:13.720034   61500 main.go:141] libmachine: (no-preload-572602) Calling .Close
	I0416 01:05:13.720435   61500 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:13.720458   61500 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:13.720469   61500 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:13.720472   61500 main.go:141] libmachine: (no-preload-572602) DBG | Closing plugin on server side
	I0416 01:05:13.720477   61500 main.go:141] libmachine: (no-preload-572602) Calling .Close
	I0416 01:05:13.720670   61500 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:13.720681   61500 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:13.720691   61500 addons.go:470] Verifying addon metrics-server=true in "no-preload-572602"
	I0416 01:05:13.722348   61500 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0416 01:05:13.723686   61500 addons.go:505] duration metric: took 2.129734353s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0416 01:05:13.764481   61500 api_server.go:279] https://192.168.39.121:8443/healthz returned 200:
	ok
	I0416 01:05:13.771661   61500 api_server.go:141] control plane version: v1.30.0-rc.2
	I0416 01:05:13.771690   61500 api_server.go:131] duration metric: took 51.637739ms to wait for apiserver health ...
	I0416 01:05:13.771698   61500 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 01:05:13.812701   61500 system_pods.go:59] 9 kube-system pods found
	I0416 01:05:13.812744   61500 system_pods.go:61] "coredns-7db6d8ff4d-2b5ht" [b8d48a4c-6efd-409a-98be-3ec5bf639470] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 01:05:13.812753   61500 system_pods.go:61] "coredns-7db6d8ff4d-p62sn" [36768eb2-2a22-48e1-b271-f262aa64e014] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 01:05:13.812761   61500 system_pods.go:61] "etcd-no-preload-572602" [c9ed4f86-07f3-48d6-948c-8c4243920512] Running
	I0416 01:05:13.812765   61500 system_pods.go:61] "kube-apiserver-no-preload-572602" [a92513a3-4129-41a2-a603-4a69f4e72041] Running
	I0416 01:05:13.812768   61500 system_pods.go:61] "kube-controller-manager-no-preload-572602" [ce013e5b-5d3c-42de-8a00-c7041288740b] Running
	I0416 01:05:13.812774   61500 system_pods.go:61] "kube-proxy-6cjlc" [2c4d9303-8c08-4385-a6b9-63dda0d9a274] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0416 01:05:13.812777   61500 system_pods.go:61] "kube-scheduler-no-preload-572602" [a9f71ca2-f211-4e6d-9940-4e0af5d4287e] Running
	I0416 01:05:13.812783   61500 system_pods.go:61] "metrics-server-569cc877fc-5j5rc" [3d8f1a41-8e7d-4d1b-9a07-25c8fac3b782] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 01:05:13.812792   61500 system_pods.go:61] "storage-provisioner" [b9ac9c93-0e50-4598-a9c4-a12e4ff14063] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0416 01:05:13.812802   61500 system_pods.go:74] duration metric: took 41.098881ms to wait for pod list to return data ...
	I0416 01:05:13.812811   61500 default_sa.go:34] waiting for default service account to be created ...
	I0416 01:05:13.847288   61500 default_sa.go:45] found service account: "default"
	I0416 01:05:13.847323   61500 default_sa.go:55] duration metric: took 34.500938ms for default service account to be created ...
	I0416 01:05:13.847335   61500 system_pods.go:116] waiting for k8s-apps to be running ...
	I0416 01:05:13.877107   61500 system_pods.go:86] 9 kube-system pods found
	I0416 01:05:13.877150   61500 system_pods.go:89] "coredns-7db6d8ff4d-2b5ht" [b8d48a4c-6efd-409a-98be-3ec5bf639470] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 01:05:13.877175   61500 system_pods.go:89] "coredns-7db6d8ff4d-p62sn" [36768eb2-2a22-48e1-b271-f262aa64e014] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 01:05:13.877185   61500 system_pods.go:89] "etcd-no-preload-572602" [c9ed4f86-07f3-48d6-948c-8c4243920512] Running
	I0416 01:05:13.877194   61500 system_pods.go:89] "kube-apiserver-no-preload-572602" [a92513a3-4129-41a2-a603-4a69f4e72041] Running
	I0416 01:05:13.877200   61500 system_pods.go:89] "kube-controller-manager-no-preload-572602" [ce013e5b-5d3c-42de-8a00-c7041288740b] Running
	I0416 01:05:13.877209   61500 system_pods.go:89] "kube-proxy-6cjlc" [2c4d9303-8c08-4385-a6b9-63dda0d9a274] Running
	I0416 01:05:13.877215   61500 system_pods.go:89] "kube-scheduler-no-preload-572602" [a9f71ca2-f211-4e6d-9940-4e0af5d4287e] Running
	I0416 01:05:13.877224   61500 system_pods.go:89] "metrics-server-569cc877fc-5j5rc" [3d8f1a41-8e7d-4d1b-9a07-25c8fac3b782] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 01:05:13.877237   61500 system_pods.go:89] "storage-provisioner" [b9ac9c93-0e50-4598-a9c4-a12e4ff14063] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0416 01:05:13.877257   61500 retry.go:31] will retry after 239.706522ms: missing components: kube-dns
	I0416 01:05:14.128770   61500 system_pods.go:86] 9 kube-system pods found
	I0416 01:05:14.128814   61500 system_pods.go:89] "coredns-7db6d8ff4d-2b5ht" [b8d48a4c-6efd-409a-98be-3ec5bf639470] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 01:05:14.128827   61500 system_pods.go:89] "coredns-7db6d8ff4d-p62sn" [36768eb2-2a22-48e1-b271-f262aa64e014] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 01:05:14.128836   61500 system_pods.go:89] "etcd-no-preload-572602" [c9ed4f86-07f3-48d6-948c-8c4243920512] Running
	I0416 01:05:14.128850   61500 system_pods.go:89] "kube-apiserver-no-preload-572602" [a92513a3-4129-41a2-a603-4a69f4e72041] Running
	I0416 01:05:14.128857   61500 system_pods.go:89] "kube-controller-manager-no-preload-572602" [ce013e5b-5d3c-42de-8a00-c7041288740b] Running
	I0416 01:05:14.128864   61500 system_pods.go:89] "kube-proxy-6cjlc" [2c4d9303-8c08-4385-a6b9-63dda0d9a274] Running
	I0416 01:05:14.128871   61500 system_pods.go:89] "kube-scheduler-no-preload-572602" [a9f71ca2-f211-4e6d-9940-4e0af5d4287e] Running
	I0416 01:05:14.128885   61500 system_pods.go:89] "metrics-server-569cc877fc-5j5rc" [3d8f1a41-8e7d-4d1b-9a07-25c8fac3b782] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 01:05:14.128893   61500 system_pods.go:89] "storage-provisioner" [b9ac9c93-0e50-4598-a9c4-a12e4ff14063] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0416 01:05:14.128903   61500 system_pods.go:126] duration metric: took 281.561287ms to wait for k8s-apps to be running ...
	I0416 01:05:14.128912   61500 system_svc.go:44] waiting for kubelet service to be running ....
	I0416 01:05:14.128978   61500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 01:05:14.145557   61500 system_svc.go:56] duration metric: took 16.639555ms WaitForService to wait for kubelet
	I0416 01:05:14.145582   61500 kubeadm.go:576] duration metric: took 2.551711031s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 01:05:14.145605   61500 node_conditions.go:102] verifying NodePressure condition ...
	I0416 01:05:14.149984   61500 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 01:05:14.150009   61500 node_conditions.go:123] node cpu capacity is 2
	I0416 01:05:14.150021   61500 node_conditions.go:105] duration metric: took 4.410684ms to run NodePressure ...
	I0416 01:05:14.150034   61500 start.go:240] waiting for startup goroutines ...
	I0416 01:05:14.150044   61500 start.go:245] waiting for cluster config update ...
	I0416 01:05:14.150064   61500 start.go:254] writing updated cluster config ...
	I0416 01:05:14.150354   61500 ssh_runner.go:195] Run: rm -f paused
	I0416 01:05:14.198605   61500 start.go:600] kubectl: 1.29.3, cluster: 1.30.0-rc.2 (minor skew: 1)
	I0416 01:05:14.200584   61500 out.go:177] * Done! kubectl is now configured to use "no-preload-572602" cluster and "default" namespace by default
	I0416 01:05:14.258629   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:05:14.258807   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:05:19.748784   62747 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.135339447s)
	I0416 01:05:19.748866   62747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 01:05:19.766280   62747 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 01:05:19.777541   62747 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 01:05:19.788086   62747 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 01:05:19.788112   62747 kubeadm.go:156] found existing configuration files:
	
	I0416 01:05:19.788154   62747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 01:05:19.798135   62747 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 01:05:19.798211   62747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 01:05:19.809231   62747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 01:05:19.819447   62747 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 01:05:19.819519   62747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 01:05:19.830223   62747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 01:05:19.840460   62747 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 01:05:19.840528   62747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 01:05:19.851506   62747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 01:05:19.861422   62747 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 01:05:19.861481   62747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 01:05:19.871239   62747 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 01:05:20.089849   62747 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 01:05:29.079351   62747 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0416 01:05:29.079435   62747 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 01:05:29.079534   62747 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 01:05:29.079679   62747 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 01:05:29.079817   62747 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 01:05:29.079934   62747 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 01:05:29.081701   62747 out.go:204]   - Generating certificates and keys ...
	I0416 01:05:29.081801   62747 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 01:05:29.081922   62747 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 01:05:29.082035   62747 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0416 01:05:29.082125   62747 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0416 01:05:29.082300   62747 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0416 01:05:29.082404   62747 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0416 01:05:29.082504   62747 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0416 01:05:29.082556   62747 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0416 01:05:29.082621   62747 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0416 01:05:29.082737   62747 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0416 01:05:29.082798   62747 kubeadm.go:309] [certs] Using the existing "sa" key
	I0416 01:05:29.082867   62747 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 01:05:29.082955   62747 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 01:05:29.083042   62747 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0416 01:05:29.083129   62747 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 01:05:29.083209   62747 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 01:05:29.083278   62747 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 01:05:29.083385   62747 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 01:05:29.083467   62747 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 01:05:29.085050   62747 out.go:204]   - Booting up control plane ...
	I0416 01:05:29.085178   62747 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 01:05:29.085289   62747 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 01:05:29.085374   62747 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 01:05:29.085499   62747 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 01:05:29.085610   62747 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 01:05:29.085671   62747 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 01:05:29.085942   62747 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 01:05:29.086066   62747 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003717 seconds
	I0416 01:05:29.086227   62747 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0416 01:05:29.086384   62747 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0416 01:05:29.086474   62747 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0416 01:05:29.086755   62747 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-617092 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0416 01:05:29.086843   62747 kubeadm.go:309] [bootstrap-token] Using token: 33ihar.pt6l329bwmm6yhnr
	I0416 01:05:29.088273   62747 out.go:204]   - Configuring RBAC rules ...
	I0416 01:05:29.088408   62747 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0416 01:05:29.088516   62747 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0416 01:05:29.088712   62747 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0416 01:05:29.088898   62747 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0416 01:05:29.089046   62747 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0416 01:05:29.089196   62747 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0416 01:05:29.089346   62747 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0416 01:05:29.089413   62747 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0416 01:05:29.089486   62747 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0416 01:05:29.089496   62747 kubeadm.go:309] 
	I0416 01:05:29.089581   62747 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0416 01:05:29.089591   62747 kubeadm.go:309] 
	I0416 01:05:29.089707   62747 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0416 01:05:29.089719   62747 kubeadm.go:309] 
	I0416 01:05:29.089768   62747 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0416 01:05:29.089855   62747 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0416 01:05:29.089932   62747 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0416 01:05:29.089942   62747 kubeadm.go:309] 
	I0416 01:05:29.090020   62747 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0416 01:05:29.090041   62747 kubeadm.go:309] 
	I0416 01:05:29.090111   62747 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0416 01:05:29.090120   62747 kubeadm.go:309] 
	I0416 01:05:29.090193   62747 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0416 01:05:29.090350   62747 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0416 01:05:29.090434   62747 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0416 01:05:29.090445   62747 kubeadm.go:309] 
	I0416 01:05:29.090560   62747 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0416 01:05:29.090661   62747 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0416 01:05:29.090667   62747 kubeadm.go:309] 
	I0416 01:05:29.090773   62747 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 33ihar.pt6l329bwmm6yhnr \
	I0416 01:05:29.090921   62747 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b25013486716312e69a135916990864bd549ec06035b8322dd39250241b50fde \
	I0416 01:05:29.090942   62747 kubeadm.go:309] 	--control-plane 
	I0416 01:05:29.090948   62747 kubeadm.go:309] 
	I0416 01:05:29.091017   62747 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0416 01:05:29.091034   62747 kubeadm.go:309] 
	I0416 01:05:29.091153   62747 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 33ihar.pt6l329bwmm6yhnr \
	I0416 01:05:29.091299   62747 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b25013486716312e69a135916990864bd549ec06035b8322dd39250241b50fde 
	I0416 01:05:29.091313   62747 cni.go:84] Creating CNI manager for ""
	I0416 01:05:29.091323   62747 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 01:05:29.094154   62747 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0416 01:05:29.095747   62747 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0416 01:05:29.153706   62747 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0416 01:05:29.195477   62747 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0416 01:05:29.195540   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:29.195540   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-617092 minikube.k8s.io/updated_at=2024_04_16T01_05_29_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=e98a202b00bb48daab8bbee2f0c7bcf1556f8388 minikube.k8s.io/name=embed-certs-617092 minikube.k8s.io/primary=true
	I0416 01:05:29.551888   62747 ops.go:34] apiserver oom_adj: -16
	I0416 01:05:29.552023   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:30.053117   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:30.552298   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:31.052317   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:31.553057   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:32.052852   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:32.552921   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:34.259492   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:05:34.259704   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:05:33.052747   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:33.552301   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:34.052922   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:34.552338   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:35.052106   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:35.552911   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:36.052814   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:36.552077   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:37.052666   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:37.552057   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:38.053198   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:38.552163   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:39.052589   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:39.552701   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:40.053069   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:40.552436   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:41.053071   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:41.158552   62747 kubeadm.go:1107] duration metric: took 11.963074905s to wait for elevateKubeSystemPrivileges
	W0416 01:05:41.158601   62747 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0416 01:05:41.158611   62747 kubeadm.go:393] duration metric: took 5m14.369080866s to StartCluster
	I0416 01:05:41.158638   62747 settings.go:142] acquiring lock: {Name:mk6e42a297b4f7bfb79727f203ae36d752cbb6a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:05:41.158736   62747 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0416 01:05:41.160903   62747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/kubeconfig: {Name:mkbb3b028de7d57df8335e83f6dfa1b0eacb2fb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:05:41.161229   62747 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.225 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0416 01:05:41.163312   62747 out.go:177] * Verifying Kubernetes components...
	I0416 01:05:40.562916   61267 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.348033752s)
	I0416 01:05:40.562991   61267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 01:05:40.580700   61267 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 01:05:40.592069   61267 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 01:05:40.606450   61267 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 01:05:40.606477   61267 kubeadm.go:156] found existing configuration files:
	
	I0416 01:05:40.606531   61267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0416 01:05:40.617547   61267 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 01:05:40.617622   61267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 01:05:40.631465   61267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0416 01:05:40.644464   61267 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 01:05:40.644553   61267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 01:05:40.655929   61267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0416 01:05:40.664995   61267 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 01:05:40.665059   61267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 01:05:40.674477   61267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0416 01:05:40.683500   61267 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 01:05:40.683570   61267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 01:05:40.693774   61267 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 01:05:40.753612   61267 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0416 01:05:40.753717   61267 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 01:05:40.911483   61267 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 01:05:40.911609   61267 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 01:05:40.911748   61267 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 01:05:41.170137   61267 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 01:05:41.161331   62747 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0416 01:05:41.161434   62747 config.go:182] Loaded profile config "embed-certs-617092": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 01:05:41.165023   62747 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-617092"
	I0416 01:05:41.165044   62747 addons.go:69] Setting metrics-server=true in profile "embed-certs-617092"
	I0416 01:05:41.165081   62747 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-617092"
	I0416 01:05:41.165084   62747 addons.go:234] Setting addon metrics-server=true in "embed-certs-617092"
	W0416 01:05:41.165090   62747 addons.go:243] addon storage-provisioner should already be in state true
	W0416 01:05:41.165091   62747 addons.go:243] addon metrics-server should already be in state true
	I0416 01:05:41.165117   62747 host.go:66] Checking if "embed-certs-617092" exists ...
	I0416 01:05:41.165052   62747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 01:05:41.165025   62747 addons.go:69] Setting default-storageclass=true in profile "embed-certs-617092"
	I0416 01:05:41.165117   62747 host.go:66] Checking if "embed-certs-617092" exists ...
	I0416 01:05:41.165174   62747 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-617092"
	I0416 01:05:41.165464   62747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:41.165480   62747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:41.165549   62747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:41.165569   62747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:41.165549   62747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:41.165651   62747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:41.183063   62747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46083
	I0416 01:05:41.183551   62747 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:41.184135   62747 main.go:141] libmachine: Using API Version  1
	I0416 01:05:41.184158   62747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:41.184578   62747 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:41.185298   62747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:41.185337   62747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:41.185763   62747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43633
	I0416 01:05:41.185823   62747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46197
	I0416 01:05:41.186233   62747 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:41.186400   62747 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:41.186701   62747 main.go:141] libmachine: Using API Version  1
	I0416 01:05:41.186726   62747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:41.186861   62747 main.go:141] libmachine: Using API Version  1
	I0416 01:05:41.186881   62747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:41.187211   62747 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:41.187233   62747 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:41.187415   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetState
	I0416 01:05:41.187763   62747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:41.187781   62747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:41.191018   62747 addons.go:234] Setting addon default-storageclass=true in "embed-certs-617092"
	W0416 01:05:41.191038   62747 addons.go:243] addon default-storageclass should already be in state true
	I0416 01:05:41.191068   62747 host.go:66] Checking if "embed-certs-617092" exists ...
	I0416 01:05:41.191346   62747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:41.191384   62747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:41.202643   62747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45285
	I0416 01:05:41.203122   62747 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:41.203607   62747 main.go:141] libmachine: Using API Version  1
	I0416 01:05:41.203627   62747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:41.203952   62747 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:41.204124   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetState
	I0416 01:05:41.204325   62747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45643
	I0416 01:05:41.204721   62747 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:41.205188   62747 main.go:141] libmachine: Using API Version  1
	I0416 01:05:41.205207   62747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:41.205860   62747 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:41.206056   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetState
	I0416 01:05:41.206084   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 01:05:41.208051   62747 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0416 01:05:41.209179   62747 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0416 01:05:41.209197   62747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0416 01:05:41.207724   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 01:05:41.209214   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:05:41.210728   62747 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 01:05:41.171860   61267 out.go:204]   - Generating certificates and keys ...
	I0416 01:05:41.171969   61267 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 01:05:41.172043   61267 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 01:05:41.172139   61267 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0416 01:05:41.172803   61267 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0416 01:05:41.173065   61267 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0416 01:05:41.173653   61267 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0416 01:05:41.174077   61267 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0416 01:05:41.174586   61267 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0416 01:05:41.175034   61267 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0416 01:05:41.175570   61267 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0416 01:05:41.175888   61267 kubeadm.go:309] [certs] Using the existing "sa" key
	I0416 01:05:41.175968   61267 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 01:05:41.439471   61267 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 01:05:41.524693   61267 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0416 01:05:42.001762   61267 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 01:05:42.139805   61267 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 01:05:42.198091   61267 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 01:05:42.198762   61267 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 01:05:42.202915   61267 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 01:05:42.204549   61267 out.go:204]   - Booting up control plane ...
	I0416 01:05:42.204673   61267 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 01:05:42.204816   61267 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 01:05:42.205761   61267 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 01:05:42.225187   61267 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 01:05:42.225917   61267 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 01:05:42.225972   61267 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 01:05:42.367087   61267 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 01:05:41.210575   62747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34385
	I0416 01:05:41.211905   62747 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 01:05:41.211923   62747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0416 01:05:41.211942   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:05:41.212835   62747 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:41.212972   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:05:41.213577   62747 main.go:141] libmachine: Using API Version  1
	I0416 01:05:41.213597   62747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:41.213610   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:05:41.213628   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:05:41.214039   62747 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:41.214657   62747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:41.214693   62747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:41.215005   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:05:41.215635   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:05:41.215905   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:05:41.215933   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:05:41.216058   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:05:41.216109   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:05:41.216242   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:05:41.216303   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:05:41.216447   62747 sshutil.go:53] new ssh client: &{IP:192.168.61.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/embed-certs-617092/id_rsa Username:docker}
	I0416 01:05:41.216466   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:05:41.216544   62747 sshutil.go:53] new ssh client: &{IP:192.168.61.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/embed-certs-617092/id_rsa Username:docker}
	I0416 01:05:41.236284   62747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40007
	I0416 01:05:41.237670   62747 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:41.238270   62747 main.go:141] libmachine: Using API Version  1
	I0416 01:05:41.238288   62747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:41.241258   62747 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:41.241453   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetState
	I0416 01:05:41.243397   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 01:05:41.243724   62747 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0416 01:05:41.243740   62747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0416 01:05:41.243758   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:05:41.247426   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:05:41.248034   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:05:41.248144   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:05:41.248423   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:05:41.249376   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:05:41.249600   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:05:41.249799   62747 sshutil.go:53] new ssh client: &{IP:192.168.61.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/embed-certs-617092/id_rsa Username:docker}
	I0416 01:05:41.414823   62747 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 01:05:41.436007   62747 node_ready.go:35] waiting up to 6m0s for node "embed-certs-617092" to be "Ready" ...
	I0416 01:05:41.452344   62747 node_ready.go:49] node "embed-certs-617092" has status "Ready":"True"
	I0416 01:05:41.452370   62747 node_ready.go:38] duration metric: took 16.328329ms for node "embed-certs-617092" to be "Ready" ...
	I0416 01:05:41.452382   62747 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:05:41.467673   62747 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:41.477985   62747 pod_ready.go:92] pod "etcd-embed-certs-617092" in "kube-system" namespace has status "Ready":"True"
	I0416 01:05:41.478019   62747 pod_ready.go:81] duration metric: took 10.312538ms for pod "etcd-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:41.478032   62747 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:41.485978   62747 pod_ready.go:92] pod "kube-apiserver-embed-certs-617092" in "kube-system" namespace has status "Ready":"True"
	I0416 01:05:41.486003   62747 pod_ready.go:81] duration metric: took 7.961029ms for pod "kube-apiserver-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:41.486015   62747 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:41.491586   62747 pod_ready.go:92] pod "kube-controller-manager-embed-certs-617092" in "kube-system" namespace has status "Ready":"True"
	I0416 01:05:41.491608   62747 pod_ready.go:81] duration metric: took 5.584682ms for pod "kube-controller-manager-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:41.491619   62747 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-p4rh9" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:41.591874   62747 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0416 01:05:41.630528   62747 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0416 01:05:41.630554   62747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0416 01:05:41.653822   62747 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 01:05:41.718742   62747 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0416 01:05:41.718775   62747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0416 01:05:41.750701   62747 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0416 01:05:41.750725   62747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0416 01:05:41.798873   62747 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0416 01:05:41.961373   62747 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:41.961415   62747 main.go:141] libmachine: (embed-certs-617092) Calling .Close
	I0416 01:05:41.961857   62747 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:41.961879   62747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:41.961890   62747 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:41.961909   62747 main.go:141] libmachine: (embed-certs-617092) Calling .Close
	I0416 01:05:41.962200   62747 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:41.962205   62747 main.go:141] libmachine: (embed-certs-617092) DBG | Closing plugin on server side
	I0416 01:05:41.962216   62747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:41.974163   62747 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:41.974189   62747 main.go:141] libmachine: (embed-certs-617092) Calling .Close
	I0416 01:05:41.974517   62747 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:41.974537   62747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:42.721070   62747 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.067206266s)
	I0416 01:05:42.721119   62747 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:42.721130   62747 main.go:141] libmachine: (embed-certs-617092) Calling .Close
	I0416 01:05:42.721551   62747 main.go:141] libmachine: (embed-certs-617092) DBG | Closing plugin on server side
	I0416 01:05:42.721594   62747 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:42.721613   62747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:42.721636   62747 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:42.721648   62747 main.go:141] libmachine: (embed-certs-617092) Calling .Close
	I0416 01:05:42.721972   62747 main.go:141] libmachine: (embed-certs-617092) DBG | Closing plugin on server side
	I0416 01:05:42.721987   62747 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:42.722006   62747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:43.123544   62747 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.324616723s)
	I0416 01:05:43.123593   62747 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:43.123608   62747 main.go:141] libmachine: (embed-certs-617092) Calling .Close
	I0416 01:05:43.123867   62747 main.go:141] libmachine: (embed-certs-617092) DBG | Closing plugin on server side
	I0416 01:05:43.123906   62747 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:43.123913   62747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:43.123922   62747 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:43.123928   62747 main.go:141] libmachine: (embed-certs-617092) Calling .Close
	I0416 01:05:43.124218   62747 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:43.124234   62747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:43.124234   62747 main.go:141] libmachine: (embed-certs-617092) DBG | Closing plugin on server side
	I0416 01:05:43.124255   62747 addons.go:470] Verifying addon metrics-server=true in "embed-certs-617092"
	I0416 01:05:43.125829   62747 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0416 01:05:43.127138   62747 addons.go:505] duration metric: took 1.965815007s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0416 01:05:43.536374   62747 pod_ready.go:102] pod "kube-proxy-p4rh9" in "kube-system" namespace has status "Ready":"False"
	I0416 01:05:44.000571   62747 pod_ready.go:92] pod "kube-proxy-p4rh9" in "kube-system" namespace has status "Ready":"True"
	I0416 01:05:44.000594   62747 pod_ready.go:81] duration metric: took 2.508967748s for pod "kube-proxy-p4rh9" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:44.000603   62747 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:44.006516   62747 pod_ready.go:92] pod "kube-scheduler-embed-certs-617092" in "kube-system" namespace has status "Ready":"True"
	I0416 01:05:44.006540   62747 pod_ready.go:81] duration metric: took 5.930755ms for pod "kube-scheduler-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:44.006546   62747 pod_ready.go:38] duration metric: took 2.554153393s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:05:44.006560   62747 api_server.go:52] waiting for apiserver process to appear ...
	I0416 01:05:44.006612   62747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:05:44.030705   62747 api_server.go:72] duration metric: took 2.869432993s to wait for apiserver process to appear ...
	I0416 01:05:44.030737   62747 api_server.go:88] waiting for apiserver healthz status ...
	I0416 01:05:44.030759   62747 api_server.go:253] Checking apiserver healthz at https://192.168.61.225:8443/healthz ...
	I0416 01:05:44.035576   62747 api_server.go:279] https://192.168.61.225:8443/healthz returned 200:
	ok
	I0416 01:05:44.037948   62747 api_server.go:141] control plane version: v1.29.3
	I0416 01:05:44.037973   62747 api_server.go:131] duration metric: took 7.228106ms to wait for apiserver health ...
	I0416 01:05:44.037983   62747 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 01:05:44.044543   62747 system_pods.go:59] 9 kube-system pods found
	I0416 01:05:44.044574   62747 system_pods.go:61] "coredns-76f75df574-2q58l" [e9b9d000-738b-4110-8757-17f76197285c] Running
	I0416 01:05:44.044581   62747 system_pods.go:61] "coredns-76f75df574-h8k4k" [1b114848-1137-4215-a966-03db39e4de23] Running
	I0416 01:05:44.044586   62747 system_pods.go:61] "etcd-embed-certs-617092" [f65e9307-4e12-4ac4-baca-7e1cfd7415d5] Running
	I0416 01:05:44.044591   62747 system_pods.go:61] "kube-apiserver-embed-certs-617092" [f55e02ce-45cf-4f6e-b8d7-7f305f22ea52] Running
	I0416 01:05:44.044596   62747 system_pods.go:61] "kube-controller-manager-embed-certs-617092" [d16739c1-36f4-4748-8533-fcc6cea0adee] Running
	I0416 01:05:44.044601   62747 system_pods.go:61] "kube-proxy-p4rh9" [42041028-d085-4ec4-8213-da3af0d5290e] Running
	I0416 01:05:44.044606   62747 system_pods.go:61] "kube-scheduler-embed-certs-617092" [d61e24fe-a5e3-41bf-b212-75764a036a26] Running
	I0416 01:05:44.044614   62747 system_pods.go:61] "metrics-server-57f55c9bc5-j5clp" [99808b2d-344f-43b7-a29c-01f0a2026aa8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 01:05:44.044623   62747 system_pods.go:61] "storage-provisioner" [5a62c0f7-0b15-48f3-9c17-d5966d39fbd5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0416 01:05:44.044635   62747 system_pods.go:74] duration metric: took 6.6454ms to wait for pod list to return data ...
	I0416 01:05:44.044652   62747 default_sa.go:34] waiting for default service account to be created ...
	I0416 01:05:44.241344   62747 default_sa.go:45] found service account: "default"
	I0416 01:05:44.241370   62747 default_sa.go:55] duration metric: took 196.710973ms for default service account to be created ...
	I0416 01:05:44.241379   62747 system_pods.go:116] waiting for k8s-apps to be running ...
	I0416 01:05:44.450798   62747 system_pods.go:86] 9 kube-system pods found
	I0416 01:05:44.450825   62747 system_pods.go:89] "coredns-76f75df574-2q58l" [e9b9d000-738b-4110-8757-17f76197285c] Running
	I0416 01:05:44.450831   62747 system_pods.go:89] "coredns-76f75df574-h8k4k" [1b114848-1137-4215-a966-03db39e4de23] Running
	I0416 01:05:44.450835   62747 system_pods.go:89] "etcd-embed-certs-617092" [f65e9307-4e12-4ac4-baca-7e1cfd7415d5] Running
	I0416 01:05:44.450839   62747 system_pods.go:89] "kube-apiserver-embed-certs-617092" [f55e02ce-45cf-4f6e-b8d7-7f305f22ea52] Running
	I0416 01:05:44.450844   62747 system_pods.go:89] "kube-controller-manager-embed-certs-617092" [d16739c1-36f4-4748-8533-fcc6cea0adee] Running
	I0416 01:05:44.450848   62747 system_pods.go:89] "kube-proxy-p4rh9" [42041028-d085-4ec4-8213-da3af0d5290e] Running
	I0416 01:05:44.450851   62747 system_pods.go:89] "kube-scheduler-embed-certs-617092" [d61e24fe-a5e3-41bf-b212-75764a036a26] Running
	I0416 01:05:44.450858   62747 system_pods.go:89] "metrics-server-57f55c9bc5-j5clp" [99808b2d-344f-43b7-a29c-01f0a2026aa8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 01:05:44.450864   62747 system_pods.go:89] "storage-provisioner" [5a62c0f7-0b15-48f3-9c17-d5966d39fbd5] Running
	I0416 01:05:44.450871   62747 system_pods.go:126] duration metric: took 209.487599ms to wait for k8s-apps to be running ...
	I0416 01:05:44.450889   62747 system_svc.go:44] waiting for kubelet service to be running ....
	I0416 01:05:44.450943   62747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 01:05:44.470820   62747 system_svc.go:56] duration metric: took 19.925743ms WaitForService to wait for kubelet
	I0416 01:05:44.470853   62747 kubeadm.go:576] duration metric: took 3.309585995s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 01:05:44.470876   62747 node_conditions.go:102] verifying NodePressure condition ...
	I0416 01:05:44.642093   62747 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 01:05:44.642123   62747 node_conditions.go:123] node cpu capacity is 2
	I0416 01:05:44.642135   62747 node_conditions.go:105] duration metric: took 171.253415ms to run NodePressure ...
	I0416 01:05:44.642149   62747 start.go:240] waiting for startup goroutines ...
	I0416 01:05:44.642158   62747 start.go:245] waiting for cluster config update ...
	I0416 01:05:44.642171   62747 start.go:254] writing updated cluster config ...
	I0416 01:05:44.642519   62747 ssh_runner.go:195] Run: rm -f paused
	I0416 01:05:44.707141   62747 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0416 01:05:44.709274   62747 out.go:177] * Done! kubectl is now configured to use "embed-certs-617092" cluster and "default" namespace by default
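(Illustrative note, not part of the captured run: the log above reports the metrics-server addon enabled and its pod still Pending on the embed-certs-617092 profile. A minimal manual spot-check of that addon could look like the commands below; the profile/context name and namespace are taken from the log, everything else is an assumed check rather than the test's own verification.)

	# assumed spot-check against the embed-certs-617092 profile from the log above
	minikube -p embed-certs-617092 addons list
	kubectl --context embed-certs-617092 -n kube-system get deploy metrics-server
	# 'kubectl top' only succeeds once metrics-server is actually serving metrics.k8s.io
	kubectl --context embed-certs-617092 top nodes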
	I0416 01:05:48.372574   61267 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.002543 seconds
	I0416 01:05:48.385076   61267 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0416 01:05:48.406058   61267 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0416 01:05:48.938329   61267 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0416 01:05:48.938556   61267 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-653942 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0416 01:05:49.458321   61267 kubeadm.go:309] [bootstrap-token] Using token: 5ddaoe.tvzldvzlkbeta1a9
	I0416 01:05:49.459891   61267 out.go:204]   - Configuring RBAC rules ...
	I0416 01:05:49.460064   61267 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0416 01:05:49.465799   61267 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0416 01:05:49.477346   61267 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0416 01:05:49.482154   61267 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0416 01:05:49.485769   61267 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0416 01:05:49.489199   61267 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0416 01:05:49.504774   61267 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0416 01:05:49.770133   61267 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0416 01:05:49.872777   61267 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0416 01:05:49.874282   61267 kubeadm.go:309] 
	I0416 01:05:49.874384   61267 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0416 01:05:49.874400   61267 kubeadm.go:309] 
	I0416 01:05:49.874560   61267 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0416 01:05:49.874580   61267 kubeadm.go:309] 
	I0416 01:05:49.874602   61267 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0416 01:05:49.874673   61267 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0416 01:05:49.874754   61267 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0416 01:05:49.874766   61267 kubeadm.go:309] 
	I0416 01:05:49.874853   61267 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0416 01:05:49.874878   61267 kubeadm.go:309] 
	I0416 01:05:49.874944   61267 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0416 01:05:49.874956   61267 kubeadm.go:309] 
	I0416 01:05:49.875019   61267 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0416 01:05:49.875141   61267 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0416 01:05:49.875246   61267 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0416 01:05:49.875257   61267 kubeadm.go:309] 
	I0416 01:05:49.875432   61267 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0416 01:05:49.875552   61267 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0416 01:05:49.875562   61267 kubeadm.go:309] 
	I0416 01:05:49.875657   61267 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token 5ddaoe.tvzldvzlkbeta1a9 \
	I0416 01:05:49.875754   61267 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b25013486716312e69a135916990864bd549ec06035b8322dd39250241b50fde \
	I0416 01:05:49.875774   61267 kubeadm.go:309] 	--control-plane 
	I0416 01:05:49.875780   61267 kubeadm.go:309] 
	I0416 01:05:49.875859   61267 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0416 01:05:49.875869   61267 kubeadm.go:309] 
	I0416 01:05:49.875949   61267 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token 5ddaoe.tvzldvzlkbeta1a9 \
	I0416 01:05:49.876085   61267 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b25013486716312e69a135916990864bd549ec06035b8322dd39250241b50fde 
	I0416 01:05:49.876640   61267 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 01:05:49.876666   61267 cni.go:84] Creating CNI manager for ""
	I0416 01:05:49.876676   61267 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 01:05:49.878703   61267 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0416 01:05:49.880070   61267 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0416 01:05:49.897752   61267 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
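(Illustrative note: the 496-byte file written above is minikube's bridge CNI configuration for the crio runtime. A representative conflist of that shape is sketched below; the field values, name, and pod subnet are assumptions for illustration, not the exact file the test copied.)

	# representative sketch of a bridge CNI conflist like the one scp'd above (values assumed)
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF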
	I0416 01:05:49.969146   61267 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0416 01:05:49.969228   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:49.969228   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-653942 minikube.k8s.io/updated_at=2024_04_16T01_05_49_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=e98a202b00bb48daab8bbee2f0c7bcf1556f8388 minikube.k8s.io/name=default-k8s-diff-port-653942 minikube.k8s.io/primary=true
	I0416 01:05:50.233119   61267 ops.go:34] apiserver oom_adj: -16
	I0416 01:05:50.233262   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:50.733748   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:51.234361   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:51.733704   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:52.233367   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:52.733789   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:53.234012   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:53.733458   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:54.233341   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:54.734148   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:55.233710   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:55.734135   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:56.233315   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:56.734162   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:57.233899   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:57.733337   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:58.234101   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:58.734357   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:59.233831   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:59.733286   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:06:00.233847   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:06:00.733872   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:06:01.233935   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:06:01.733629   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:06:02.233967   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:06:02.734163   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:06:03.233294   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:06:03.412834   61267 kubeadm.go:1107] duration metric: took 13.44368469s to wait for elevateKubeSystemPrivileges
	W0416 01:06:03.412896   61267 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0416 01:06:03.412907   61267 kubeadm.go:393] duration metric: took 5m17.8108087s to StartCluster
	I0416 01:06:03.412926   61267 settings.go:142] acquiring lock: {Name:mk6e42a297b4f7bfb79727f203ae36d752cbb6a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:06:03.413003   61267 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0416 01:06:03.414974   61267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/kubeconfig: {Name:mkbb3b028de7d57df8335e83f6dfa1b0eacb2fb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:06:03.415299   61267 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.216 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0416 01:06:03.417148   61267 out.go:177] * Verifying Kubernetes components...
	I0416 01:06:03.415390   61267 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0416 01:06:03.415510   61267 config.go:182] Loaded profile config "default-k8s-diff-port-653942": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 01:06:03.417238   61267 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-653942"
	I0416 01:06:03.419134   61267 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-653942"
	W0416 01:06:03.419147   61267 addons.go:243] addon storage-provisioner should already be in state true
	I0416 01:06:03.417247   61267 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-653942"
	I0416 01:06:03.419188   61267 host.go:66] Checking if "default-k8s-diff-port-653942" exists ...
	I0416 01:06:03.419214   61267 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-653942"
	I0416 01:06:03.417245   61267 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-653942"
	I0416 01:06:03.419095   61267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	W0416 01:06:03.419262   61267 addons.go:243] addon metrics-server should already be in state true
	I0416 01:06:03.419307   61267 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-653942"
	I0416 01:06:03.419327   61267 host.go:66] Checking if "default-k8s-diff-port-653942" exists ...
	I0416 01:06:03.419606   61267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:06:03.419644   61267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:06:03.419662   61267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:06:03.419698   61267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:06:03.419722   61267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:06:03.419756   61267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:06:03.435784   61267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44663
	I0416 01:06:03.435800   61267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37505
	I0416 01:06:03.436294   61267 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:06:03.436296   61267 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:06:03.436811   61267 main.go:141] libmachine: Using API Version  1
	I0416 01:06:03.436838   61267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:06:03.437097   61267 main.go:141] libmachine: Using API Version  1
	I0416 01:06:03.437115   61267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:06:03.437203   61267 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:06:03.437683   61267 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:06:03.437757   61267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:06:03.437790   61267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:06:03.438213   61267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33329
	I0416 01:06:03.438248   61267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:06:03.438273   61267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:06:03.438786   61267 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:06:03.439301   61267 main.go:141] libmachine: Using API Version  1
	I0416 01:06:03.439332   61267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:06:03.439810   61267 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:06:03.440162   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetState
	I0416 01:06:03.443879   61267 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-653942"
	W0416 01:06:03.443906   61267 addons.go:243] addon default-storageclass should already be in state true
	I0416 01:06:03.443941   61267 host.go:66] Checking if "default-k8s-diff-port-653942" exists ...
	I0416 01:06:03.444301   61267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:06:03.444340   61267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:06:03.454673   61267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43261
	I0416 01:06:03.455111   61267 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:06:03.455715   61267 main.go:141] libmachine: Using API Version  1
	I0416 01:06:03.455742   61267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:06:03.456116   61267 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:06:03.456318   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetState
	I0416 01:06:03.457870   61267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39341
	I0416 01:06:03.458086   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 01:06:03.458278   61267 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:06:03.462516   61267 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0416 01:06:03.458862   61267 main.go:141] libmachine: Using API Version  1
	I0416 01:06:03.460354   61267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43753
	I0416 01:06:03.464491   61267 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0416 01:06:03.464509   61267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0416 01:06:03.464529   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:06:03.464551   61267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:06:03.464960   61267 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:06:03.465281   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetState
	I0416 01:06:03.465552   61267 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:06:03.466181   61267 main.go:141] libmachine: Using API Version  1
	I0416 01:06:03.466205   61267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:06:03.466760   61267 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:06:03.467410   61267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:06:03.467435   61267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:06:03.467638   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 01:06:03.469647   61267 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 01:06:03.471009   61267 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 01:06:03.471024   61267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0416 01:06:03.469242   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:06:03.471040   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:06:03.469767   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:06:03.471070   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:06:03.471133   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:06:03.471297   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:06:03.471478   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:06:03.471661   61267 sshutil.go:53] new ssh client: &{IP:192.168.50.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/default-k8s-diff-port-653942/id_rsa Username:docker}
	I0416 01:06:03.473778   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:06:03.474203   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:06:03.474226   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:06:03.474421   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:06:03.474605   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:06:03.474784   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:06:03.474958   61267 sshutil.go:53] new ssh client: &{IP:192.168.50.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/default-k8s-diff-port-653942/id_rsa Username:docker}
	I0416 01:06:03.485829   61267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46571
	I0416 01:06:03.486293   61267 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:06:03.486876   61267 main.go:141] libmachine: Using API Version  1
	I0416 01:06:03.486900   61267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:06:03.487362   61267 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:06:03.487535   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetState
	I0416 01:06:03.489207   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 01:06:03.489529   61267 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0416 01:06:03.489549   61267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0416 01:06:03.489568   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:06:03.492570   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:06:03.492932   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:06:03.492958   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:06:03.493224   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:06:03.493379   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:06:03.493557   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:06:03.493673   61267 sshutil.go:53] new ssh client: &{IP:192.168.50.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/default-k8s-diff-port-653942/id_rsa Username:docker}
	I0416 01:06:03.680085   61267 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 01:06:03.724011   61267 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-653942" to be "Ready" ...
	I0416 01:06:03.739131   61267 node_ready.go:49] node "default-k8s-diff-port-653942" has status "Ready":"True"
	I0416 01:06:03.739152   61267 node_ready.go:38] duration metric: took 15.111832ms for node "default-k8s-diff-port-653942" to be "Ready" ...
	I0416 01:06:03.739161   61267 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:06:03.748081   61267 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-5nnpv" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:03.810063   61267 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0416 01:06:03.810090   61267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0416 01:06:03.812595   61267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0416 01:06:03.848165   61267 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0416 01:06:03.848187   61267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0416 01:06:03.991110   61267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 01:06:03.997100   61267 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0416 01:06:03.997133   61267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0416 01:06:04.093267   61267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0416 01:06:04.349978   61267 main.go:141] libmachine: Making call to close driver server
	I0416 01:06:04.350011   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .Close
	I0416 01:06:04.350336   61267 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:06:04.350396   61267 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:06:04.350415   61267 main.go:141] libmachine: Making call to close driver server
	I0416 01:06:04.350420   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | Closing plugin on server side
	I0416 01:06:04.350425   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .Close
	I0416 01:06:04.350683   61267 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:06:04.350699   61267 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:06:04.416648   61267 main.go:141] libmachine: Making call to close driver server
	I0416 01:06:04.416674   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .Close
	I0416 01:06:04.416982   61267 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:06:04.417001   61267 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:06:05.206973   61267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.113663167s)
	I0416 01:06:05.207025   61267 main.go:141] libmachine: Making call to close driver server
	I0416 01:06:05.207040   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .Close
	I0416 01:06:05.207039   61267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.215892308s)
	I0416 01:06:05.207078   61267 main.go:141] libmachine: Making call to close driver server
	I0416 01:06:05.207090   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .Close
	I0416 01:06:05.207371   61267 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:06:05.207388   61267 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:06:05.207397   61267 main.go:141] libmachine: Making call to close driver server
	I0416 01:06:05.207405   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .Close
	I0416 01:06:05.207445   61267 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:06:05.207462   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | Closing plugin on server side
	I0416 01:06:05.207466   61267 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:06:05.207490   61267 main.go:141] libmachine: Making call to close driver server
	I0416 01:06:05.207508   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .Close
	I0416 01:06:05.207610   61267 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:06:05.207644   61267 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:06:05.207654   61267 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-653942"
	I0416 01:06:05.207654   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | Closing plugin on server side
	I0416 01:06:05.209411   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | Closing plugin on server side
	I0416 01:06:05.209402   61267 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:06:05.209469   61267 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:06:05.212071   61267 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0416 01:06:05.213412   61267 addons.go:505] duration metric: took 1.798038731s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I0416 01:06:05.256497   61267 pod_ready.go:92] pod "coredns-76f75df574-5nnpv" in "kube-system" namespace has status "Ready":"True"
	I0416 01:06:05.256526   61267 pod_ready.go:81] duration metric: took 1.508419977s for pod "coredns-76f75df574-5nnpv" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.256538   61267 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-zpnhs" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.262092   61267 pod_ready.go:92] pod "coredns-76f75df574-zpnhs" in "kube-system" namespace has status "Ready":"True"
	I0416 01:06:05.262112   61267 pod_ready.go:81] duration metric: took 5.566499ms for pod "coredns-76f75df574-zpnhs" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.262121   61267 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.267256   61267 pod_ready.go:92] pod "etcd-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"True"
	I0416 01:06:05.267278   61267 pod_ready.go:81] duration metric: took 5.149782ms for pod "etcd-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.267286   61267 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.272119   61267 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"True"
	I0416 01:06:05.272144   61267 pod_ready.go:81] duration metric: took 4.851008ms for pod "kube-apiserver-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.272155   61267 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.328440   61267 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"True"
	I0416 01:06:05.328470   61267 pod_ready.go:81] duration metric: took 56.30531ms for pod "kube-controller-manager-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.328482   61267 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mg5km" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.729518   61267 pod_ready.go:92] pod "kube-proxy-mg5km" in "kube-system" namespace has status "Ready":"True"
	I0416 01:06:05.729544   61267 pod_ready.go:81] duration metric: took 401.055058ms for pod "kube-proxy-mg5km" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.729553   61267 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:06.127535   61267 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"True"
	I0416 01:06:06.127558   61267 pod_ready.go:81] duration metric: took 397.998988ms for pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:06.127565   61267 pod_ready.go:38] duration metric: took 2.388395448s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:06:06.127577   61267 api_server.go:52] waiting for apiserver process to appear ...
	I0416 01:06:06.127620   61267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:06:06.150179   61267 api_server.go:72] duration metric: took 2.734842767s to wait for apiserver process to appear ...
	I0416 01:06:06.150208   61267 api_server.go:88] waiting for apiserver healthz status ...
	I0416 01:06:06.150226   61267 api_server.go:253] Checking apiserver healthz at https://192.168.50.216:8444/healthz ...
	I0416 01:06:06.154310   61267 api_server.go:279] https://192.168.50.216:8444/healthz returned 200:
	ok
	I0416 01:06:06.155393   61267 api_server.go:141] control plane version: v1.29.3
	I0416 01:06:06.155409   61267 api_server.go:131] duration metric: took 5.194458ms to wait for apiserver health ...
	I0416 01:06:06.155421   61267 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 01:06:06.333873   61267 system_pods.go:59] 9 kube-system pods found
	I0416 01:06:06.333909   61267 system_pods.go:61] "coredns-76f75df574-5nnpv" [3350aca5-639e-44a1-bd84-d1e4b6486143] Running
	I0416 01:06:06.333914   61267 system_pods.go:61] "coredns-76f75df574-zpnhs" [990672b6-bb3a-4f91-8de7-7c2ec224c94a] Running
	I0416 01:06:06.333917   61267 system_pods.go:61] "etcd-default-k8s-diff-port-653942" [e72e89e9-c274-4d4d-b1f9-43bea95cd015] Running
	I0416 01:06:06.333920   61267 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-653942" [c1652126-b4c2-41cf-a574-9784f7800374] Running
	I0416 01:06:06.333923   61267 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-653942" [1f43936c-ba39-44f9-b9b7-2a149f26a880] Running
	I0416 01:06:06.333926   61267 system_pods.go:61] "kube-proxy-mg5km" [74764194-1f31-40b1-90b5-497e248ab7da] Running
	I0416 01:06:06.333929   61267 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-653942" [48058ade-c30d-4dc9-b6c0-32b2ed5fc88a] Running
	I0416 01:06:06.333935   61267 system_pods.go:61] "metrics-server-57f55c9bc5-6jn29" [1eec2ffb-ce59-45cb-b6b4-cd010549510e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 01:06:06.333938   61267 system_pods.go:61] "storage-provisioner" [d131c1fc-9124-4b46-a16f-a8fb5029a57b] Running
	I0416 01:06:06.333947   61267 system_pods.go:74] duration metric: took 178.520515ms to wait for pod list to return data ...
	I0416 01:06:06.333953   61267 default_sa.go:34] waiting for default service account to be created ...
	I0416 01:06:06.528119   61267 default_sa.go:45] found service account: "default"
	I0416 01:06:06.528148   61267 default_sa.go:55] duration metric: took 194.18786ms for default service account to be created ...
	I0416 01:06:06.528158   61267 system_pods.go:116] waiting for k8s-apps to be running ...
	I0416 01:06:06.731573   61267 system_pods.go:86] 9 kube-system pods found
	I0416 01:06:06.731600   61267 system_pods.go:89] "coredns-76f75df574-5nnpv" [3350aca5-639e-44a1-bd84-d1e4b6486143] Running
	I0416 01:06:06.731606   61267 system_pods.go:89] "coredns-76f75df574-zpnhs" [990672b6-bb3a-4f91-8de7-7c2ec224c94a] Running
	I0416 01:06:06.731610   61267 system_pods.go:89] "etcd-default-k8s-diff-port-653942" [e72e89e9-c274-4d4d-b1f9-43bea95cd015] Running
	I0416 01:06:06.731614   61267 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-653942" [c1652126-b4c2-41cf-a574-9784f7800374] Running
	I0416 01:06:06.731619   61267 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-653942" [1f43936c-ba39-44f9-b9b7-2a149f26a880] Running
	I0416 01:06:06.731622   61267 system_pods.go:89] "kube-proxy-mg5km" [74764194-1f31-40b1-90b5-497e248ab7da] Running
	I0416 01:06:06.731626   61267 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-653942" [48058ade-c30d-4dc9-b6c0-32b2ed5fc88a] Running
	I0416 01:06:06.731633   61267 system_pods.go:89] "metrics-server-57f55c9bc5-6jn29" [1eec2ffb-ce59-45cb-b6b4-cd010549510e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 01:06:06.731638   61267 system_pods.go:89] "storage-provisioner" [d131c1fc-9124-4b46-a16f-a8fb5029a57b] Running
	I0416 01:06:06.731649   61267 system_pods.go:126] duration metric: took 203.485273ms to wait for k8s-apps to be running ...
	I0416 01:06:06.731659   61267 system_svc.go:44] waiting for kubelet service to be running ....
	I0416 01:06:06.731700   61267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 01:06:06.749013   61267 system_svc.go:56] duration metric: took 17.343008ms WaitForService to wait for kubelet
	I0416 01:06:06.749048   61267 kubeadm.go:576] duration metric: took 3.333716529s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 01:06:06.749072   61267 node_conditions.go:102] verifying NodePressure condition ...
	I0416 01:06:06.927701   61267 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 01:06:06.927725   61267 node_conditions.go:123] node cpu capacity is 2
	I0416 01:06:06.927735   61267 node_conditions.go:105] duration metric: took 178.65899ms to run NodePressure ...
	I0416 01:06:06.927746   61267 start.go:240] waiting for startup goroutines ...
	I0416 01:06:06.927754   61267 start.go:245] waiting for cluster config update ...
	I0416 01:06:06.927763   61267 start.go:254] writing updated cluster config ...
	I0416 01:06:06.928000   61267 ssh_runner.go:195] Run: rm -f paused
	I0416 01:06:06.978823   61267 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0416 01:06:06.981011   61267 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-653942" cluster and "default" namespace by default
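(Illustrative note: the readiness checks logged above for default-k8s-diff-port-653942 — pgrep for the apiserver process, then the healthz probe against https://192.168.50.216:8444/healthz — can be reproduced by hand. A minimal sketch follows; the IP, port, and profile name come from the log, while the -k flag and anonymous access to /healthz are assumptions that hold on a default apiserver configuration.)

	# reproduce the two checks api_server.go performs in the log above (illustrative)
	minikube -p default-k8s-diff-port-653942 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	curl -k https://192.168.50.216:8444/healthz   # expected body: ok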
	I0416 01:06:14.261576   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:06:14.261834   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:06:14.261849   62139 kubeadm.go:309] 
	I0416 01:06:14.261890   62139 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0416 01:06:14.261973   62139 kubeadm.go:309] 		timed out waiting for the condition
	I0416 01:06:14.262006   62139 kubeadm.go:309] 
	I0416 01:06:14.262051   62139 kubeadm.go:309] 	This error is likely caused by:
	I0416 01:06:14.262082   62139 kubeadm.go:309] 		- The kubelet is not running
	I0416 01:06:14.262174   62139 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0416 01:06:14.262199   62139 kubeadm.go:309] 
	I0416 01:06:14.262357   62139 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0416 01:06:14.262414   62139 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0416 01:06:14.262471   62139 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0416 01:06:14.262481   62139 kubeadm.go:309] 
	I0416 01:06:14.262610   62139 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0416 01:06:14.262707   62139 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0416 01:06:14.262717   62139 kubeadm.go:309] 
	I0416 01:06:14.262867   62139 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0416 01:06:14.263010   62139 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0416 01:06:14.263142   62139 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0416 01:06:14.263211   62139 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0416 01:06:14.263234   62139 kubeadm.go:309] 
	I0416 01:06:14.264084   62139 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 01:06:14.264204   62139 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0416 01:06:14.264312   62139 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0416 01:06:14.264460   62139 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0416 01:06:14.264526   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0416 01:06:15.653692   62139 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.389136497s)
	I0416 01:06:15.653831   62139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 01:06:15.669141   62139 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 01:06:15.679485   62139 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 01:06:15.679511   62139 kubeadm.go:156] found existing configuration files:
	
	I0416 01:06:15.679556   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 01:06:15.689898   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 01:06:15.689974   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 01:06:15.700563   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 01:06:15.710363   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 01:06:15.710445   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 01:06:15.719877   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 01:06:15.728947   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 01:06:15.729002   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 01:06:15.739360   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 01:06:15.749479   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 01:06:15.749557   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 01:06:15.760930   62139 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 01:06:16.000974   62139 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 01:08:12.327133   62139 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0416 01:08:12.327246   62139 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0416 01:08:12.328995   62139 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0416 01:08:12.329092   62139 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 01:08:12.329220   62139 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 01:08:12.329302   62139 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 01:08:12.329440   62139 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 01:08:12.329537   62139 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 01:08:12.331381   62139 out.go:204]   - Generating certificates and keys ...
	I0416 01:08:12.331474   62139 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 01:08:12.331558   62139 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 01:08:12.331658   62139 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0416 01:08:12.331742   62139 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0416 01:08:12.331830   62139 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0416 01:08:12.331910   62139 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0416 01:08:12.331968   62139 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0416 01:08:12.332020   62139 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0416 01:08:12.332085   62139 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0416 01:08:12.332159   62139 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0416 01:08:12.332210   62139 kubeadm.go:309] [certs] Using the existing "sa" key
	I0416 01:08:12.332297   62139 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 01:08:12.332376   62139 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 01:08:12.332466   62139 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 01:08:12.332547   62139 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 01:08:12.332642   62139 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 01:08:12.332790   62139 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 01:08:12.332895   62139 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 01:08:12.332938   62139 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 01:08:12.333002   62139 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 01:08:12.334632   62139 out.go:204]   - Booting up control plane ...
	I0416 01:08:12.334737   62139 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 01:08:12.334837   62139 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 01:08:12.334928   62139 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 01:08:12.335009   62139 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 01:08:12.335162   62139 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 01:08:12.335241   62139 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0416 01:08:12.335333   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:08:12.335541   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:08:12.335613   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:08:12.335771   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:08:12.335848   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:08:12.336035   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:08:12.336109   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:08:12.336365   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:08:12.336438   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:08:12.336704   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:08:12.336716   62139 kubeadm.go:309] 
	I0416 01:08:12.336779   62139 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0416 01:08:12.336827   62139 kubeadm.go:309] 		timed out waiting for the condition
	I0416 01:08:12.336834   62139 kubeadm.go:309] 
	I0416 01:08:12.336883   62139 kubeadm.go:309] 	This error is likely caused by:
	I0416 01:08:12.336922   62139 kubeadm.go:309] 		- The kubelet is not running
	I0416 01:08:12.337025   62139 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0416 01:08:12.337036   62139 kubeadm.go:309] 
	I0416 01:08:12.337145   62139 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0416 01:08:12.337211   62139 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0416 01:08:12.337245   62139 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0416 01:08:12.337253   62139 kubeadm.go:309] 
	I0416 01:08:12.337340   62139 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0416 01:08:12.337428   62139 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0416 01:08:12.337436   62139 kubeadm.go:309] 
	I0416 01:08:12.337529   62139 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0416 01:08:12.337602   62139 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0416 01:08:12.337701   62139 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0416 01:08:12.337870   62139 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0416 01:08:12.337957   62139 kubeadm.go:393] duration metric: took 8m4.174818047s to StartCluster
	I0416 01:08:12.337969   62139 kubeadm.go:309] 
	I0416 01:08:12.338009   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:08:12.338067   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:08:12.391937   62139 cri.go:89] found id: ""
	I0416 01:08:12.391963   62139 logs.go:276] 0 containers: []
	W0416 01:08:12.391986   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:08:12.391994   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:08:12.392072   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:08:12.430575   62139 cri.go:89] found id: ""
	I0416 01:08:12.430602   62139 logs.go:276] 0 containers: []
	W0416 01:08:12.430616   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:08:12.430623   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:08:12.430685   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:08:12.469115   62139 cri.go:89] found id: ""
	I0416 01:08:12.469143   62139 logs.go:276] 0 containers: []
	W0416 01:08:12.469152   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:08:12.469173   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:08:12.469228   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:08:12.508599   62139 cri.go:89] found id: ""
	I0416 01:08:12.508630   62139 logs.go:276] 0 containers: []
	W0416 01:08:12.508640   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:08:12.508648   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:08:12.508698   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:08:12.547785   62139 cri.go:89] found id: ""
	I0416 01:08:12.547817   62139 logs.go:276] 0 containers: []
	W0416 01:08:12.547829   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:08:12.547836   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:08:12.547910   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:08:12.599526   62139 cri.go:89] found id: ""
	I0416 01:08:12.599549   62139 logs.go:276] 0 containers: []
	W0416 01:08:12.599557   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:08:12.599563   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:08:12.599612   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:08:12.639914   62139 cri.go:89] found id: ""
	I0416 01:08:12.639944   62139 logs.go:276] 0 containers: []
	W0416 01:08:12.639954   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:08:12.639962   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:08:12.640041   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:08:12.676025   62139 cri.go:89] found id: ""
	I0416 01:08:12.676057   62139 logs.go:276] 0 containers: []
	W0416 01:08:12.676066   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:08:12.676079   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:08:12.676100   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:08:12.774744   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:08:12.774769   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:08:12.774785   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:08:12.902751   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:08:12.902787   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:08:12.947370   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:08:12.947406   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:08:13.002186   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:08:13.002223   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0416 01:08:13.017193   62139 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0416 01:08:13.017234   62139 out.go:239] * 
	W0416 01:08:13.017283   62139 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0416 01:08:13.017304   62139 out.go:239] * 
	W0416 01:08:13.018151   62139 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0416 01:08:13.021371   62139 out.go:177] 
	W0416 01:08:13.022572   62139 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0416 01:08:13.022640   62139 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0416 01:08:13.022670   62139 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0416 01:08:13.024248   62139 out.go:177] 
	
	
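	Editor's note: the failure captured above is the kubelet never becoming healthy during 'kubeadm init' (the healthz probe on 127.0.0.1:10248 is refused, so the wait-control-plane phase times out after 4m). The commands below are a minimal diagnostic sketch assembled only from the hints this log itself prints (systemctl/journalctl for the kubelet, crictl for control-plane containers, and minikube's cgroup-driver suggestion); they are not part of the captured output. <profile> and CONTAINERID are placeholders.

		# Run on the minikube node (e.g. via 'minikube ssh'):
		systemctl status kubelet                      # is the kubelet service active?
		journalctl -xeu kubelet | tail -n 100         # recent kubelet errors
		# List any control-plane containers CRI-O managed to start:
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		# Inspect the logs of a failing container (CONTAINERID is a placeholder):
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
		# Suggestion printed by minikube above: retry with an explicit cgroup driver
		minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd
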
	==> CRI-O <==
	Apr 16 01:14:16 no-preload-572602 crio[722]: time="2024-04-16 01:14:16.241327092Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713230056241291248,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99978,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=616ad646-0725-4041-8d5f-15338c1eaa6b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:14:16 no-preload-572602 crio[722]: time="2024-04-16 01:14:16.241917854Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=51cb96e9-0baa-4df7-84f4-198bb2e96c9d name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:14:16 no-preload-572602 crio[722]: time="2024-04-16 01:14:16.241981308Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=51cb96e9-0baa-4df7-84f4-198bb2e96c9d name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:14:16 no-preload-572602 crio[722]: time="2024-04-16 01:14:16.242180912Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4084ca3da80ddb16e306dcabb7c20593f8e97f33727b62127f188994ad25adde,PodSandboxId:7c533eb612bcfeb3e328c5ebae02e2433479a2a2952017e65215e6900b611a08,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713229513798457587,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9ac9c93-0e50-4598-a9c4-a12e4ff14063,},Annotations:map[string]string{io.kubernetes.container.hash: 34478f80,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04bf17c2ef31ccb5b6baa0c6ca8f18c10429b2b41a05562d35cb3e7624d425b0,PodSandboxId:a45b6f26d15ebd8db3ab9a3dd6dbc66fb2fd1a68b1fbc3d725f502fec0621958,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713229513484300883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p62sn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36768eb2-2a22-48e1-b271-f262aa64e014,},Annotations:map[string]string{io.kubernetes.container.hash: 83dc39b1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92f47059ece58130b68e11d0faa581f7b336a91743d8365eb9dac84da7aff6d0,PodSandboxId:8b22609e17e2d4ddf269e30e8ed22ed2d44a4e963d9dc02bd3f00230bf122ea8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713229513380146870,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2b5ht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8
d48a4c-6efd-409a-98be-3ec5bf639470,},Annotations:map[string]string{io.kubernetes.container.hash: 9eeac67b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b028f56375a963d3290fb3ca9532c765c119e48afb0243eca99823653744037,PodSandboxId:dd56032ee4f5665a6f2e99c9745b37a3e5121edf119f16172b34294b98f3f297,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_RUNNING,CreatedAt:
1713229512641915406,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6cjlc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c4d9303-8c08-4385-a6b9-63dda0d9a274,},Annotations:map[string]string{io.kubernetes.container.hash: efc90ea9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c03a58ac3d73ae22e09299e52c1b1cea91a8f532c2e83efe6db26b0cf0fcd9b6,PodSandboxId:fd7e62b052cdc105a5b72766b74dbb31fe56ea113ab1de728e762a29619ee05c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713229493072332354,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-572602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18a9fc8e3ab1c697889b081fefcfa178,},Annotations:map[string]string{io.kubernetes.container.hash: 258756dc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b895d3cc11f002a93248562e7f584ce2a601717f3f84b5c05201ece6ad116e4e,PodSandboxId:98fbeb3707a7aa7fcd2de3d49281c8543a900ae8b7715291526911fb3b9d1feb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_RUNNING,CreatedAt:1713229493057979886,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-572602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7184d6eaf0b500beb6b7cea960d1905,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:460245770a3129b38002697725b12fa2bbe8dba33dd691938054fb6a1cb63f63,PodSandboxId:4ef695f642f6c7a59106829ebc80d9c5ba1aa4a23e501216d835de606105ccd1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_RUNNING,CreatedAt:1713229493027192998,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-572602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d890064dc3b1b546cf082926e2564845,},Annotations:map[string]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cece99507aaa37e50274debcdc6deb652e50017ded358129d85704ff474638e,PodSandboxId:eb4d3cf79f6902ec327d2bcc3aa599c22fc15ab12f64bbecf47654ceb96365e7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_RUNNING,CreatedAt:1713229492979142873,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-572602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cbdec3ea486f052b898933d08258cc4,},Annotations:map[string]string{io.kubernetes.container.hash: ebb91f0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=51cb96e9-0baa-4df7-84f4-198bb2e96c9d name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:14:16 no-preload-572602 crio[722]: time="2024-04-16 01:14:16.283831428Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b4a74332-4675-4fd5-85bc-915585bd3d67 name=/runtime.v1.RuntimeService/Version
	Apr 16 01:14:16 no-preload-572602 crio[722]: time="2024-04-16 01:14:16.284029248Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b4a74332-4675-4fd5-85bc-915585bd3d67 name=/runtime.v1.RuntimeService/Version
	Apr 16 01:14:16 no-preload-572602 crio[722]: time="2024-04-16 01:14:16.285714164Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8589c55b-fe05-41aa-aa73-d286e539c460 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:14:16 no-preload-572602 crio[722]: time="2024-04-16 01:14:16.286108481Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713230056286080309,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99978,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8589c55b-fe05-41aa-aa73-d286e539c460 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:14:16 no-preload-572602 crio[722]: time="2024-04-16 01:14:16.286958098Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=849a8945-4de2-485f-ac8b-6cae59a21d58 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:14:16 no-preload-572602 crio[722]: time="2024-04-16 01:14:16.287014599Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=849a8945-4de2-485f-ac8b-6cae59a21d58 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:14:16 no-preload-572602 crio[722]: time="2024-04-16 01:14:16.287191721Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4084ca3da80ddb16e306dcabb7c20593f8e97f33727b62127f188994ad25adde,PodSandboxId:7c533eb612bcfeb3e328c5ebae02e2433479a2a2952017e65215e6900b611a08,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713229513798457587,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9ac9c93-0e50-4598-a9c4-a12e4ff14063,},Annotations:map[string]string{io.kubernetes.container.hash: 34478f80,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04bf17c2ef31ccb5b6baa0c6ca8f18c10429b2b41a05562d35cb3e7624d425b0,PodSandboxId:a45b6f26d15ebd8db3ab9a3dd6dbc66fb2fd1a68b1fbc3d725f502fec0621958,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713229513484300883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p62sn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36768eb2-2a22-48e1-b271-f262aa64e014,},Annotations:map[string]string{io.kubernetes.container.hash: 83dc39b1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92f47059ece58130b68e11d0faa581f7b336a91743d8365eb9dac84da7aff6d0,PodSandboxId:8b22609e17e2d4ddf269e30e8ed22ed2d44a4e963d9dc02bd3f00230bf122ea8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713229513380146870,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2b5ht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8
d48a4c-6efd-409a-98be-3ec5bf639470,},Annotations:map[string]string{io.kubernetes.container.hash: 9eeac67b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b028f56375a963d3290fb3ca9532c765c119e48afb0243eca99823653744037,PodSandboxId:dd56032ee4f5665a6f2e99c9745b37a3e5121edf119f16172b34294b98f3f297,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_RUNNING,CreatedAt:
1713229512641915406,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6cjlc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c4d9303-8c08-4385-a6b9-63dda0d9a274,},Annotations:map[string]string{io.kubernetes.container.hash: efc90ea9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c03a58ac3d73ae22e09299e52c1b1cea91a8f532c2e83efe6db26b0cf0fcd9b6,PodSandboxId:fd7e62b052cdc105a5b72766b74dbb31fe56ea113ab1de728e762a29619ee05c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713229493072332354,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-572602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18a9fc8e3ab1c697889b081fefcfa178,},Annotations:map[string]string{io.kubernetes.container.hash: 258756dc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b895d3cc11f002a93248562e7f584ce2a601717f3f84b5c05201ece6ad116e4e,PodSandboxId:98fbeb3707a7aa7fcd2de3d49281c8543a900ae8b7715291526911fb3b9d1feb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_RUNNING,CreatedAt:1713229493057979886,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-572602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7184d6eaf0b500beb6b7cea960d1905,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:460245770a3129b38002697725b12fa2bbe8dba33dd691938054fb6a1cb63f63,PodSandboxId:4ef695f642f6c7a59106829ebc80d9c5ba1aa4a23e501216d835de606105ccd1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_RUNNING,CreatedAt:1713229493027192998,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-572602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d890064dc3b1b546cf082926e2564845,},Annotations:map[string]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cece99507aaa37e50274debcdc6deb652e50017ded358129d85704ff474638e,PodSandboxId:eb4d3cf79f6902ec327d2bcc3aa599c22fc15ab12f64bbecf47654ceb96365e7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_RUNNING,CreatedAt:1713229492979142873,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-572602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cbdec3ea486f052b898933d08258cc4,},Annotations:map[string]string{io.kubernetes.container.hash: ebb91f0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=849a8945-4de2-485f-ac8b-6cae59a21d58 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:14:16 no-preload-572602 crio[722]: time="2024-04-16 01:14:16.328345848Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d8cd697d-9186-49b4-a737-b8591d0396df name=/runtime.v1.RuntimeService/Version
	Apr 16 01:14:16 no-preload-572602 crio[722]: time="2024-04-16 01:14:16.328424741Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d8cd697d-9186-49b4-a737-b8591d0396df name=/runtime.v1.RuntimeService/Version
	Apr 16 01:14:16 no-preload-572602 crio[722]: time="2024-04-16 01:14:16.331687196Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=be799b27-983f-4a42-b979-8437b4912ae4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:14:16 no-preload-572602 crio[722]: time="2024-04-16 01:14:16.334850656Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713230056334820969,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99978,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=be799b27-983f-4a42-b979-8437b4912ae4 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:14:16 no-preload-572602 crio[722]: time="2024-04-16 01:14:16.336202202Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=44ae0dba-3a81-426e-8e30-e7ecac238346 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:14:16 no-preload-572602 crio[722]: time="2024-04-16 01:14:16.336274960Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=44ae0dba-3a81-426e-8e30-e7ecac238346 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:14:16 no-preload-572602 crio[722]: time="2024-04-16 01:14:16.336438651Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4084ca3da80ddb16e306dcabb7c20593f8e97f33727b62127f188994ad25adde,PodSandboxId:7c533eb612bcfeb3e328c5ebae02e2433479a2a2952017e65215e6900b611a08,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713229513798457587,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9ac9c93-0e50-4598-a9c4-a12e4ff14063,},Annotations:map[string]string{io.kubernetes.container.hash: 34478f80,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04bf17c2ef31ccb5b6baa0c6ca8f18c10429b2b41a05562d35cb3e7624d425b0,PodSandboxId:a45b6f26d15ebd8db3ab9a3dd6dbc66fb2fd1a68b1fbc3d725f502fec0621958,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713229513484300883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p62sn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36768eb2-2a22-48e1-b271-f262aa64e014,},Annotations:map[string]string{io.kubernetes.container.hash: 83dc39b1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92f47059ece58130b68e11d0faa581f7b336a91743d8365eb9dac84da7aff6d0,PodSandboxId:8b22609e17e2d4ddf269e30e8ed22ed2d44a4e963d9dc02bd3f00230bf122ea8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713229513380146870,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2b5ht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8
d48a4c-6efd-409a-98be-3ec5bf639470,},Annotations:map[string]string{io.kubernetes.container.hash: 9eeac67b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b028f56375a963d3290fb3ca9532c765c119e48afb0243eca99823653744037,PodSandboxId:dd56032ee4f5665a6f2e99c9745b37a3e5121edf119f16172b34294b98f3f297,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_RUNNING,CreatedAt:
1713229512641915406,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6cjlc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c4d9303-8c08-4385-a6b9-63dda0d9a274,},Annotations:map[string]string{io.kubernetes.container.hash: efc90ea9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c03a58ac3d73ae22e09299e52c1b1cea91a8f532c2e83efe6db26b0cf0fcd9b6,PodSandboxId:fd7e62b052cdc105a5b72766b74dbb31fe56ea113ab1de728e762a29619ee05c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713229493072332354,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-572602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18a9fc8e3ab1c697889b081fefcfa178,},Annotations:map[string]string{io.kubernetes.container.hash: 258756dc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b895d3cc11f002a93248562e7f584ce2a601717f3f84b5c05201ece6ad116e4e,PodSandboxId:98fbeb3707a7aa7fcd2de3d49281c8543a900ae8b7715291526911fb3b9d1feb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_RUNNING,CreatedAt:1713229493057979886,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-572602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7184d6eaf0b500beb6b7cea960d1905,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:460245770a3129b38002697725b12fa2bbe8dba33dd691938054fb6a1cb63f63,PodSandboxId:4ef695f642f6c7a59106829ebc80d9c5ba1aa4a23e501216d835de606105ccd1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_RUNNING,CreatedAt:1713229493027192998,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-572602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d890064dc3b1b546cf082926e2564845,},Annotations:map[string]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cece99507aaa37e50274debcdc6deb652e50017ded358129d85704ff474638e,PodSandboxId:eb4d3cf79f6902ec327d2bcc3aa599c22fc15ab12f64bbecf47654ceb96365e7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_RUNNING,CreatedAt:1713229492979142873,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-572602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cbdec3ea486f052b898933d08258cc4,},Annotations:map[string]string{io.kubernetes.container.hash: ebb91f0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=44ae0dba-3a81-426e-8e30-e7ecac238346 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:14:16 no-preload-572602 crio[722]: time="2024-04-16 01:14:16.375906588Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ea78acc4-5275-476e-90a1-beaf0940e5a7 name=/runtime.v1.RuntimeService/Version
	Apr 16 01:14:16 no-preload-572602 crio[722]: time="2024-04-16 01:14:16.375996663Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ea78acc4-5275-476e-90a1-beaf0940e5a7 name=/runtime.v1.RuntimeService/Version
	Apr 16 01:14:16 no-preload-572602 crio[722]: time="2024-04-16 01:14:16.378082956Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=274bbf92-f3aa-425a-a712-379286ad8c1a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:14:16 no-preload-572602 crio[722]: time="2024-04-16 01:14:16.378635555Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713230056378607809,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99978,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=274bbf92-f3aa-425a-a712-379286ad8c1a name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:14:16 no-preload-572602 crio[722]: time="2024-04-16 01:14:16.379344085Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=47252246-b401-4dd1-adb9-0bdcbafb2564 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:14:16 no-preload-572602 crio[722]: time="2024-04-16 01:14:16.379418516Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=47252246-b401-4dd1-adb9-0bdcbafb2564 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:14:16 no-preload-572602 crio[722]: time="2024-04-16 01:14:16.379751209Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4084ca3da80ddb16e306dcabb7c20593f8e97f33727b62127f188994ad25adde,PodSandboxId:7c533eb612bcfeb3e328c5ebae02e2433479a2a2952017e65215e6900b611a08,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713229513798457587,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9ac9c93-0e50-4598-a9c4-a12e4ff14063,},Annotations:map[string]string{io.kubernetes.container.hash: 34478f80,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04bf17c2ef31ccb5b6baa0c6ca8f18c10429b2b41a05562d35cb3e7624d425b0,PodSandboxId:a45b6f26d15ebd8db3ab9a3dd6dbc66fb2fd1a68b1fbc3d725f502fec0621958,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713229513484300883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p62sn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36768eb2-2a22-48e1-b271-f262aa64e014,},Annotations:map[string]string{io.kubernetes.container.hash: 83dc39b1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92f47059ece58130b68e11d0faa581f7b336a91743d8365eb9dac84da7aff6d0,PodSandboxId:8b22609e17e2d4ddf269e30e8ed22ed2d44a4e963d9dc02bd3f00230bf122ea8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713229513380146870,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2b5ht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8
d48a4c-6efd-409a-98be-3ec5bf639470,},Annotations:map[string]string{io.kubernetes.container.hash: 9eeac67b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b028f56375a963d3290fb3ca9532c765c119e48afb0243eca99823653744037,PodSandboxId:dd56032ee4f5665a6f2e99c9745b37a3e5121edf119f16172b34294b98f3f297,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_RUNNING,CreatedAt:
1713229512641915406,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6cjlc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c4d9303-8c08-4385-a6b9-63dda0d9a274,},Annotations:map[string]string{io.kubernetes.container.hash: efc90ea9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c03a58ac3d73ae22e09299e52c1b1cea91a8f532c2e83efe6db26b0cf0fcd9b6,PodSandboxId:fd7e62b052cdc105a5b72766b74dbb31fe56ea113ab1de728e762a29619ee05c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713229493072332354,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-572602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18a9fc8e3ab1c697889b081fefcfa178,},Annotations:map[string]string{io.kubernetes.container.hash: 258756dc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b895d3cc11f002a93248562e7f584ce2a601717f3f84b5c05201ece6ad116e4e,PodSandboxId:98fbeb3707a7aa7fcd2de3d49281c8543a900ae8b7715291526911fb3b9d1feb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_RUNNING,CreatedAt:1713229493057979886,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-572602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7184d6eaf0b500beb6b7cea960d1905,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:460245770a3129b38002697725b12fa2bbe8dba33dd691938054fb6a1cb63f63,PodSandboxId:4ef695f642f6c7a59106829ebc80d9c5ba1aa4a23e501216d835de606105ccd1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_RUNNING,CreatedAt:1713229493027192998,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-572602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d890064dc3b1b546cf082926e2564845,},Annotations:map[string]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cece99507aaa37e50274debcdc6deb652e50017ded358129d85704ff474638e,PodSandboxId:eb4d3cf79f6902ec327d2bcc3aa599c22fc15ab12f64bbecf47654ceb96365e7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_RUNNING,CreatedAt:1713229492979142873,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-572602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cbdec3ea486f052b898933d08258cc4,},Annotations:map[string]string{io.kubernetes.container.hash: ebb91f0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=47252246-b401-4dd1-adb9-0bdcbafb2564 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4084ca3da80dd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   7c533eb612bcf       storage-provisioner
	04bf17c2ef31c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   a45b6f26d15eb       coredns-7db6d8ff4d-p62sn
	92f47059ece58       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   8b22609e17e2d       coredns-7db6d8ff4d-2b5ht
	4b028f56375a9       35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e   9 minutes ago       Running             kube-proxy                0                   dd56032ee4f56       kube-proxy-6cjlc
	c03a58ac3d73a       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   fd7e62b052cdc       etcd-no-preload-572602
	b895d3cc11f00       461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6   9 minutes ago       Running             kube-scheduler            2                   98fbeb3707a7a       kube-scheduler-no-preload-572602
	460245770a312       ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b   9 minutes ago       Running             kube-controller-manager   2                   4ef695f642f6c       kube-controller-manager-no-preload-572602
	8cece99507aaa       65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1   9 minutes ago       Running             kube-apiserver            2                   eb4d3cf79f690       kube-apiserver-no-preload-572602
	
	
	==> coredns [04bf17c2ef31ccb5b6baa0c6ca8f18c10429b2b41a05562d35cb3e7624d425b0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [92f47059ece58130b68e11d0faa581f7b336a91743d8365eb9dac84da7aff6d0] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-572602
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-572602
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e98a202b00bb48daab8bbee2f0c7bcf1556f8388
	                    minikube.k8s.io/name=no-preload-572602
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_16T01_04_59_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 01:04:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-572602
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 01:14:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 01:10:24 +0000   Tue, 16 Apr 2024 01:04:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 01:10:24 +0000   Tue, 16 Apr 2024 01:04:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 01:10:24 +0000   Tue, 16 Apr 2024 01:04:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 01:10:24 +0000   Tue, 16 Apr 2024 01:04:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.121
	  Hostname:    no-preload-572602
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 50d747e62b974d8286588f595ee1d471
	  System UUID:                50d747e6-2b97-4d82-8658-8f595ee1d471
	  Boot ID:                    10322727-cc02-48ec-b8d2-a3f54c053fd9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0-rc.2
	  Kube-Proxy Version:         v1.30.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-2b5ht                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m4s
	  kube-system                 coredns-7db6d8ff4d-p62sn                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m4s
	  kube-system                 etcd-no-preload-572602                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m19s
	  kube-system                 kube-apiserver-no-preload-572602             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-controller-manager-no-preload-572602    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-proxy-6cjlc                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m4s
	  kube-system                 kube-scheduler-no-preload-572602             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 metrics-server-569cc877fc-5j5rc              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m3s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m3s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  9m24s (x8 over 9m24s)  kubelet          Node no-preload-572602 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m24s (x8 over 9m24s)  kubelet          Node no-preload-572602 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m24s (x7 over 9m24s)  kubelet          Node no-preload-572602 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m18s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m18s                  kubelet          Node no-preload-572602 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m18s                  kubelet          Node no-preload-572602 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m18s                  kubelet          Node no-preload-572602 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m5s                   node-controller  Node no-preload-572602 event: Registered Node no-preload-572602 in Controller
	
	
	==> dmesg <==
	[  +0.040586] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.527107] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.695817] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.626978] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.483712] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.061192] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057831] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.192698] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +0.135403] systemd-fstab-generator[678]: Ignoring "noauto" option for root device
	[  +0.299347] systemd-fstab-generator[708]: Ignoring "noauto" option for root device
	[ +16.182072] systemd-fstab-generator[1236]: Ignoring "noauto" option for root device
	[  +0.068039] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.474627] systemd-fstab-generator[1361]: Ignoring "noauto" option for root device
	[Apr16 01:00] kauditd_printk_skb: 100 callbacks suppressed
	[  +7.322466] kauditd_printk_skb: 50 callbacks suppressed
	[  +6.834560] kauditd_printk_skb: 24 callbacks suppressed
	[Apr16 01:04] kauditd_printk_skb: 9 callbacks suppressed
	[  +1.489037] systemd-fstab-generator[4022]: Ignoring "noauto" option for root device
	[  +4.724659] kauditd_printk_skb: 53 callbacks suppressed
	[  +1.839101] systemd-fstab-generator[4352]: Ignoring "noauto" option for root device
	[Apr16 01:05] systemd-fstab-generator[4544]: Ignoring "noauto" option for root device
	[  +0.120603] kauditd_printk_skb: 14 callbacks suppressed
	[Apr16 01:06] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [c03a58ac3d73ae22e09299e52c1b1cea91a8f532c2e83efe6db26b0cf0fcd9b6] <==
	{"level":"info","ts":"2024-04-16T01:04:53.473422Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cbdf275f553df7c2 switched to configuration voters=(14690503799911348162)"}
	{"level":"info","ts":"2024-04-16T01:04:53.476163Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","added-peer-id":"cbdf275f553df7c2","added-peer-peer-urls":["https://192.168.39.121:2380"]}
	{"level":"info","ts":"2024-04-16T01:04:53.508372Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.121:2380"}
	{"level":"info","ts":"2024-04-16T01:04:53.508807Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.121:2380"}
	{"level":"info","ts":"2024-04-16T01:04:53.508265Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-16T01:04:53.513077Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"cbdf275f553df7c2","initial-advertise-peer-urls":["https://192.168.39.121:2380"],"listen-peer-urls":["https://192.168.39.121:2380"],"advertise-client-urls":["https://192.168.39.121:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.121:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-16T01:04:53.517296Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-16T01:04:54.338381Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cbdf275f553df7c2 is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-16T01:04:54.338437Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cbdf275f553df7c2 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-16T01:04:54.338507Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cbdf275f553df7c2 received MsgPreVoteResp from cbdf275f553df7c2 at term 1"}
	{"level":"info","ts":"2024-04-16T01:04:54.338529Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cbdf275f553df7c2 became candidate at term 2"}
	{"level":"info","ts":"2024-04-16T01:04:54.338536Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cbdf275f553df7c2 received MsgVoteResp from cbdf275f553df7c2 at term 2"}
	{"level":"info","ts":"2024-04-16T01:04:54.338612Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cbdf275f553df7c2 became leader at term 2"}
	{"level":"info","ts":"2024-04-16T01:04:54.338624Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: cbdf275f553df7c2 elected leader cbdf275f553df7c2 at term 2"}
	{"level":"info","ts":"2024-04-16T01:04:54.340095Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T01:04:54.341642Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"cbdf275f553df7c2","local-member-attributes":"{Name:no-preload-572602 ClientURLs:[https://192.168.39.121:2379]}","request-path":"/0/members/cbdf275f553df7c2/attributes","cluster-id":"6f38b6947d3f1f22","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-16T01:04:54.341751Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-16T01:04:54.341826Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-16T01:04:54.34592Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.121:2379"}
	{"level":"info","ts":"2024-04-16T01:04:54.346251Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T01:04:54.34636Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T01:04:54.346409Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T01:04:54.34787Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-16T01:04:54.354627Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-16T01:04:54.354666Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 01:14:16 up 14 min,  0 users,  load average: 0.03, 0.13, 0.12
	Linux no-preload-572602 5.10.207 #1 SMP Mon Apr 15 15:01:07 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [8cece99507aaa37e50274debcdc6deb652e50017ded358129d85704ff474638e] <==
	I0416 01:08:14.317685       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 01:09:55.763469       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 01:09:55.763686       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0416 01:09:56.763930       1 handler_proxy.go:93] no RequestInfo found in the context
	W0416 01:09:56.763937       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 01:09:56.764159       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0416 01:09:56.764173       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0416 01:09:56.764068       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0416 01:09:56.766211       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 01:10:56.764685       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 01:10:56.764933       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0416 01:10:56.764987       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 01:10:56.767229       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 01:10:56.767292       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0416 01:10:56.767318       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 01:12:56.765213       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 01:12:56.765338       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0416 01:12:56.765346       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 01:12:56.767809       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 01:12:56.767860       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0416 01:12:56.767870       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [460245770a3129b38002697725b12fa2bbe8dba33dd691938054fb6a1cb63f63] <==
	I0416 01:08:41.713305       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 01:09:11.267489       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:09:11.722046       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 01:09:41.273910       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:09:41.730661       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 01:10:11.279713       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:10:11.739425       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 01:10:41.284616       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:10:41.748394       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0416 01:10:57.599536       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="256.73µs"
	E0416 01:11:11.292513       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:11:11.600248       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="285.091µs"
	I0416 01:11:11.756627       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 01:11:41.298655       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:11:41.765471       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 01:12:11.304910       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:12:11.773425       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 01:12:41.309943       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:12:41.781537       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 01:13:11.318096       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:13:11.790762       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 01:13:41.324165       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:13:41.799156       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 01:14:11.330276       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:14:11.806787       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [4b028f56375a963d3290fb3ca9532c765c119e48afb0243eca99823653744037] <==
	I0416 01:05:13.064125       1 server_linux.go:69] "Using iptables proxy"
	I0416 01:05:13.082146       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.121"]
	I0416 01:05:13.166669       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0416 01:05:13.166707       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0416 01:05:13.166723       1 server_linux.go:165] "Using iptables Proxier"
	I0416 01:05:13.183080       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0416 01:05:13.183350       1 server.go:872] "Version info" version="v1.30.0-rc.2"
	I0416 01:05:13.183685       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 01:05:13.185487       1 config.go:192] "Starting service config controller"
	I0416 01:05:13.185632       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0416 01:05:13.185722       1 config.go:101] "Starting endpoint slice config controller"
	I0416 01:05:13.185770       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0416 01:05:13.187754       1 config.go:319] "Starting node config controller"
	I0416 01:05:13.187821       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0416 01:05:13.287696       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0416 01:05:13.287742       1 shared_informer.go:320] Caches are synced for service config
	I0416 01:05:13.294315       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [b895d3cc11f002a93248562e7f584ce2a601717f3f84b5c05201ece6ad116e4e] <==
	W0416 01:04:55.798086       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0416 01:04:55.798114       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0416 01:04:55.798227       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0416 01:04:55.798321       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0416 01:04:56.615530       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0416 01:04:56.615681       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0416 01:04:56.727317       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0416 01:04:56.727413       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0416 01:04:56.747821       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0416 01:04:56.748181       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0416 01:04:56.866479       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0416 01:04:56.866602       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0416 01:04:57.009274       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0416 01:04:57.009330       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0416 01:04:57.020989       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0416 01:04:57.021044       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0416 01:04:57.070678       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0416 01:04:57.070737       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0416 01:04:57.100865       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0416 01:04:57.100925       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0416 01:04:57.156135       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0416 01:04:57.156204       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0416 01:04:57.314064       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0416 01:04:57.314811       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0416 01:04:59.279250       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 16 01:11:58 no-preload-572602 kubelet[4359]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 01:11:58 no-preload-572602 kubelet[4359]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 01:11:58 no-preload-572602 kubelet[4359]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 01:11:58 no-preload-572602 kubelet[4359]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 01:12:02 no-preload-572602 kubelet[4359]: E0416 01:12:02.583663    4359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5j5rc" podUID="3d8f1a41-8e7d-4d1b-9a07-25c8fac3b782"
	Apr 16 01:12:13 no-preload-572602 kubelet[4359]: E0416 01:12:13.582738    4359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5j5rc" podUID="3d8f1a41-8e7d-4d1b-9a07-25c8fac3b782"
	Apr 16 01:12:28 no-preload-572602 kubelet[4359]: E0416 01:12:28.584409    4359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5j5rc" podUID="3d8f1a41-8e7d-4d1b-9a07-25c8fac3b782"
	Apr 16 01:12:39 no-preload-572602 kubelet[4359]: E0416 01:12:39.584744    4359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5j5rc" podUID="3d8f1a41-8e7d-4d1b-9a07-25c8fac3b782"
	Apr 16 01:12:52 no-preload-572602 kubelet[4359]: E0416 01:12:52.583106    4359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5j5rc" podUID="3d8f1a41-8e7d-4d1b-9a07-25c8fac3b782"
	Apr 16 01:12:58 no-preload-572602 kubelet[4359]: E0416 01:12:58.597723    4359 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 16 01:12:58 no-preload-572602 kubelet[4359]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 01:12:58 no-preload-572602 kubelet[4359]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 01:12:58 no-preload-572602 kubelet[4359]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 01:12:58 no-preload-572602 kubelet[4359]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 01:13:07 no-preload-572602 kubelet[4359]: E0416 01:13:07.582444    4359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5j5rc" podUID="3d8f1a41-8e7d-4d1b-9a07-25c8fac3b782"
	Apr 16 01:13:19 no-preload-572602 kubelet[4359]: E0416 01:13:19.582903    4359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5j5rc" podUID="3d8f1a41-8e7d-4d1b-9a07-25c8fac3b782"
	Apr 16 01:13:31 no-preload-572602 kubelet[4359]: E0416 01:13:31.582630    4359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5j5rc" podUID="3d8f1a41-8e7d-4d1b-9a07-25c8fac3b782"
	Apr 16 01:13:46 no-preload-572602 kubelet[4359]: E0416 01:13:46.583433    4359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5j5rc" podUID="3d8f1a41-8e7d-4d1b-9a07-25c8fac3b782"
	Apr 16 01:13:58 no-preload-572602 kubelet[4359]: E0416 01:13:58.584345    4359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5j5rc" podUID="3d8f1a41-8e7d-4d1b-9a07-25c8fac3b782"
	Apr 16 01:13:58 no-preload-572602 kubelet[4359]: E0416 01:13:58.599479    4359 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 16 01:13:58 no-preload-572602 kubelet[4359]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 01:13:58 no-preload-572602 kubelet[4359]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 01:13:58 no-preload-572602 kubelet[4359]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 01:13:58 no-preload-572602 kubelet[4359]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 01:14:09 no-preload-572602 kubelet[4359]: E0416 01:14:09.583173    4359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5j5rc" podUID="3d8f1a41-8e7d-4d1b-9a07-25c8fac3b782"
	
	
	==> storage-provisioner [4084ca3da80ddb16e306dcabb7c20593f8e97f33727b62127f188994ad25adde] <==
	I0416 01:05:14.033839       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0416 01:05:14.064061       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0416 01:05:14.064133       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0416 01:05:14.080400       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0416 01:05:14.080682       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-572602_dd85fd78-4f0b-4302-87f3-53cba46d8b5c!
	I0416 01:05:14.082311       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"30217f1a-1bd4-4989-8fbd-f38230eb9a98", APIVersion:"v1", ResourceVersion:"438", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-572602_dd85fd78-4f0b-4302-87f3-53cba46d8b5c became leader
	I0416 01:05:14.181647       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-572602_dd85fd78-4f0b-4302-87f3-53cba46d8b5c!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-572602 -n no-preload-572602
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-572602 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-5j5rc
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-572602 describe pod metrics-server-569cc877fc-5j5rc
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-572602 describe pod metrics-server-569cc877fc-5j5rc: exit status 1 (64.132284ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-5j5rc" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-572602 describe pod metrics-server-569cc877fc-5j5rc: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.16s)
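Note: the kubelet log above repeatedly fails to create the KUBE-KUBELET-CANARY chain because the guest has no ip6tables `nat' table. A minimal manual check of that condition (a sketch only, assuming the standard minikube guest image and root access in the VM; not part of the test run) would be:

	out/minikube-linux-amd64 -p no-preload-572602 ssh "sudo lsmod | grep ip6table_nat"   # is the IPv6 NAT module loaded?
	out/minikube-linux-amd64 -p no-preload-572602 ssh "sudo modprobe ip6table_nat"       # try to load it; fails if the guest kernel lacks the module
	out/minikube-linux-amd64 -p no-preload-572602 ssh "sudo ip6tables -t nat -L"         # should list the nat table once the module is present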

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.15s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-617092 -n embed-certs-617092
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-04-16 01:14:45.284973405 +0000 UTC m=+5827.801734723
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
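A rough manual equivalent of the wait performed here (a sketch, not the harness code; the namespace and label selector are taken from the messages above, and the harness itself polls the pod list through client-go rather than shelling out) would be:

	kubectl --context embed-certs-617092 -n kubernetes-dashboard wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m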
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-617092 -n embed-certs-617092
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-617092 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-617092 logs -n 25: (2.086426121s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p cert-expiration-359535                              | cert-expiration-359535       | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:52 UTC | 16 Apr 24 00:52 UTC |
	| start   | -p newest-cni-012509 --memory=2200 --alsologtostderr   | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:52 UTC | 16 Apr 24 00:53 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |                |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |                |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2                      |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p newest-cni-012509             | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:53 UTC | 16 Apr 24 00:53 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p newest-cni-012509                                   | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:53 UTC | 16 Apr 24 00:53 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p newest-cni-012509                  | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:53 UTC | 16 Apr 24 00:53 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p newest-cni-012509 --memory=2200 --alsologtostderr   | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:53 UTC | 16 Apr 24 00:54 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |                |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |                |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2                      |                              |         |                |                     |                     |
	| image   | newest-cni-012509 image list                           | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 00:54 UTC |
	|         | --format=json                                          |                              |         |                |                     |                     |
	| pause   | -p newest-cni-012509                                   | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 00:54 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |                |                     |                     |
	| unpause | -p newest-cni-012509                                   | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 00:54 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |                |                     |                     |
	| delete  | -p newest-cni-012509                                   | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 00:54 UTC |
	| delete  | -p newest-cni-012509                                   | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 00:54 UTC |
	| delete  | -p                                                     | disable-driver-mounts-988802 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 00:54 UTC |
	|         | disable-driver-mounts-988802                           |                              |         |                |                     |                     |
	| start   | -p embed-certs-617092                                  | embed-certs-617092           | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 00:56 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-653942       | default-k8s-diff-port-653942 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-572602                  | no-preload-572602            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-653942 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 01:06 UTC |
	|         | default-k8s-diff-port-653942                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-800769        | old-k8s-version-800769       | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| start   | -p no-preload-572602                                   | no-preload-572602            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 01:05 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |                |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2                      |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-617092            | embed-certs-617092           | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:56 UTC | 16 Apr 24 00:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-617092                                  | embed-certs-617092           | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:56 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-800769                              | old-k8s-version-800769       | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:56 UTC | 16 Apr 24 00:56 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-800769             | old-k8s-version-800769       | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:56 UTC | 16 Apr 24 00:56 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-800769                              | old-k8s-version-800769       | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:56 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-617092                 | embed-certs-617092           | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:58 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p embed-certs-617092                                  | embed-certs-617092           | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:58 UTC | 16 Apr 24 01:05 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 00:58:42
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 00:58:42.797832   62747 out.go:291] Setting OutFile to fd 1 ...
	I0416 00:58:42.797983   62747 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:58:42.797994   62747 out.go:304] Setting ErrFile to fd 2...
	I0416 00:58:42.797998   62747 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:58:42.798182   62747 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-7542/.minikube/bin
	I0416 00:58:42.798686   62747 out.go:298] Setting JSON to false
	I0416 00:58:42.799629   62747 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6067,"bootTime":1713223056,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0416 00:58:42.799687   62747 start.go:139] virtualization: kvm guest
	I0416 00:58:42.801878   62747 out.go:177] * [embed-certs-617092] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0416 00:58:42.803202   62747 out.go:177]   - MINIKUBE_LOCATION=18647
	I0416 00:58:42.804389   62747 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 00:58:42.803288   62747 notify.go:220] Checking for updates...
	I0416 00:58:42.805742   62747 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0416 00:58:42.807023   62747 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18647-7542/.minikube
	I0416 00:58:42.808185   62747 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0416 00:58:42.809402   62747 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 00:58:42.811188   62747 config.go:182] Loaded profile config "embed-certs-617092": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 00:58:42.811772   62747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:58:42.811833   62747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:58:42.826377   62747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44973
	I0416 00:58:42.826730   62747 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:58:42.827217   62747 main.go:141] libmachine: Using API Version  1
	I0416 00:58:42.827233   62747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:58:42.827541   62747 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:58:42.827737   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 00:58:42.827964   62747 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 00:58:42.828239   62747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:58:42.828274   62747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:58:42.842499   62747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34791
	I0416 00:58:42.842872   62747 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:58:42.843283   62747 main.go:141] libmachine: Using API Version  1
	I0416 00:58:42.843300   62747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:58:42.843636   62747 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:58:42.843830   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 00:58:42.874583   62747 out.go:177] * Using the kvm2 driver based on existing profile
	I0416 00:58:42.875910   62747 start.go:297] selected driver: kvm2
	I0416 00:58:42.875933   62747 start.go:901] validating driver "kvm2" against &{Name:embed-certs-617092 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-617092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.225 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 00:58:42.876072   62747 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 00:58:42.876741   62747 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 00:58:42.876826   62747 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18647-7542/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0416 00:58:42.890834   62747 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0416 00:58:42.891212   62747 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 00:58:42.891270   62747 cni.go:84] Creating CNI manager for ""
	I0416 00:58:42.891283   62747 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 00:58:42.891314   62747 start.go:340] cluster config:
	{Name:embed-certs-617092 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-617092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.225 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 00:58:42.891412   62747 iso.go:125] acquiring lock: {Name:mk848ef90fbc2a1876645fc8fc16af382c3bcaa9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 00:58:42.893179   62747 out.go:177] * Starting "embed-certs-617092" primary control-plane node in "embed-certs-617092" cluster
	I0416 00:58:42.894232   62747 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 00:58:42.894260   62747 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0416 00:58:42.894267   62747 cache.go:56] Caching tarball of preloaded images
	I0416 00:58:42.894353   62747 preload.go:173] Found /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0416 00:58:42.894365   62747 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0416 00:58:42.894458   62747 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092/config.json ...
	I0416 00:58:42.894628   62747 start.go:360] acquireMachinesLock for embed-certs-617092: {Name:mk92bff49461487f8cebf2747ccf61ccb9c772a2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 00:58:47.545405   61267 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.216:22: connect: no route to host
	I0416 00:58:50.617454   61267 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.216:22: connect: no route to host
	I0416 00:58:56.697459   61267 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.216:22: connect: no route to host
	I0416 00:58:59.769461   61267 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.216:22: connect: no route to host
	I0416 00:59:05.849462   61267 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.216:22: connect: no route to host
	I0416 00:59:08.921459   61267 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.216:22: connect: no route to host
	I0416 00:59:15.001430   61267 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.216:22: connect: no route to host
	I0416 00:59:21.078070   61500 start.go:364] duration metric: took 4m33.431027521s to acquireMachinesLock for "no-preload-572602"
	I0416 00:59:21.078134   61500 start.go:96] Skipping create...Using existing machine configuration
	I0416 00:59:21.078152   61500 fix.go:54] fixHost starting: 
	I0416 00:59:21.078760   61500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:59:21.078809   61500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:59:21.093476   61500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36767
	I0416 00:59:21.093934   61500 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:59:21.094422   61500 main.go:141] libmachine: Using API Version  1
	I0416 00:59:21.094448   61500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:59:21.094749   61500 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:59:21.094902   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 00:59:21.095048   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetState
	I0416 00:59:21.096678   61500 fix.go:112] recreateIfNeeded on no-preload-572602: state=Stopped err=<nil>
	I0416 00:59:21.096697   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	W0416 00:59:21.096846   61500 fix.go:138] unexpected machine state, will restart: <nil>
	I0416 00:59:21.098527   61500 out.go:177] * Restarting existing kvm2 VM for "no-preload-572602" ...
	I0416 00:59:18.073453   61267 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.216:22: connect: no route to host
	I0416 00:59:21.075633   61267 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 00:59:21.075671   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetMachineName
	I0416 00:59:21.075991   61267 buildroot.go:166] provisioning hostname "default-k8s-diff-port-653942"
	I0416 00:59:21.076014   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetMachineName
	I0416 00:59:21.076225   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 00:59:21.077923   61267 machine.go:97] duration metric: took 4m34.542024225s to provisionDockerMachine
	I0416 00:59:21.077967   61267 fix.go:56] duration metric: took 4m34.567596715s for fixHost
	I0416 00:59:21.077978   61267 start.go:83] releasing machines lock for "default-k8s-diff-port-653942", held for 4m34.567645643s
	W0416 00:59:21.078001   61267 start.go:713] error starting host: provision: host is not running
	W0416 00:59:21.078088   61267 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0416 00:59:21.078097   61267 start.go:728] Will try again in 5 seconds ...
	I0416 00:59:21.099788   61500 main.go:141] libmachine: (no-preload-572602) Calling .Start
	I0416 00:59:21.099966   61500 main.go:141] libmachine: (no-preload-572602) Ensuring networks are active...
	I0416 00:59:21.100656   61500 main.go:141] libmachine: (no-preload-572602) Ensuring network default is active
	I0416 00:59:21.100937   61500 main.go:141] libmachine: (no-preload-572602) Ensuring network mk-no-preload-572602 is active
	I0416 00:59:21.101282   61500 main.go:141] libmachine: (no-preload-572602) Getting domain xml...
	I0416 00:59:21.101905   61500 main.go:141] libmachine: (no-preload-572602) Creating domain...
	I0416 00:59:22.294019   61500 main.go:141] libmachine: (no-preload-572602) Waiting to get IP...
	I0416 00:59:22.294922   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:22.295294   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:22.295349   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:22.295262   62936 retry.go:31] will retry after 220.952312ms: waiting for machine to come up
	I0416 00:59:22.517753   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:22.518334   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:22.518358   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:22.518287   62936 retry.go:31] will retry after 377.547009ms: waiting for machine to come up
	I0416 00:59:26.081716   61267 start.go:360] acquireMachinesLock for default-k8s-diff-port-653942: {Name:mk92bff49461487f8cebf2747ccf61ccb9c772a2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 00:59:22.897924   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:22.898442   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:22.898465   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:22.898394   62936 retry.go:31] will retry after 450.415086ms: waiting for machine to come up
	I0416 00:59:23.349893   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:23.350383   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:23.350420   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:23.350333   62936 retry.go:31] will retry after 385.340718ms: waiting for machine to come up
	I0416 00:59:23.736854   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:23.737225   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:23.737262   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:23.737205   62936 retry.go:31] will retry after 696.175991ms: waiting for machine to come up
	I0416 00:59:24.435231   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:24.435587   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:24.435616   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:24.435557   62936 retry.go:31] will retry after 644.402152ms: waiting for machine to come up
	I0416 00:59:25.081355   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:25.081660   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:25.081697   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:25.081626   62936 retry.go:31] will retry after 809.585997ms: waiting for machine to come up
	I0416 00:59:25.892402   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:25.892767   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:25.892797   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:25.892722   62936 retry.go:31] will retry after 1.07477705s: waiting for machine to come up
	I0416 00:59:26.969227   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:26.969617   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:26.969646   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:26.969561   62936 retry.go:31] will retry after 1.243937595s: waiting for machine to come up
	I0416 00:59:28.214995   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:28.215412   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:28.215433   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:28.215364   62936 retry.go:31] will retry after 1.775188434s: waiting for machine to come up
	I0416 00:59:29.993420   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:29.993825   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:29.993853   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:29.993779   62936 retry.go:31] will retry after 2.73873778s: waiting for machine to come up
	I0416 00:59:32.735350   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:32.735758   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:32.735809   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:32.735721   62936 retry.go:31] will retry after 2.208871896s: waiting for machine to come up
	I0416 00:59:34.947005   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:34.947400   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:34.947431   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:34.947358   62936 retry.go:31] will retry after 4.484880009s: waiting for machine to come up
	I0416 00:59:40.669954   62139 start.go:364] duration metric: took 3m18.466569456s to acquireMachinesLock for "old-k8s-version-800769"
	I0416 00:59:40.670015   62139 start.go:96] Skipping create...Using existing machine configuration
	I0416 00:59:40.670038   62139 fix.go:54] fixHost starting: 
	I0416 00:59:40.670411   62139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:59:40.670448   62139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:59:40.686269   62139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39043
	I0416 00:59:40.686633   62139 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:59:40.687125   62139 main.go:141] libmachine: Using API Version  1
	I0416 00:59:40.687162   62139 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:59:40.687481   62139 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:59:40.687672   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	I0416 00:59:40.687838   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetState
	I0416 00:59:40.689108   62139 fix.go:112] recreateIfNeeded on old-k8s-version-800769: state=Stopped err=<nil>
	I0416 00:59:40.689132   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	W0416 00:59:40.689286   62139 fix.go:138] unexpected machine state, will restart: <nil>
	I0416 00:59:40.691869   62139 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-800769" ...
	I0416 00:59:40.693292   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .Start
	I0416 00:59:40.693450   62139 main.go:141] libmachine: (old-k8s-version-800769) Ensuring networks are active...
	I0416 00:59:40.694152   62139 main.go:141] libmachine: (old-k8s-version-800769) Ensuring network default is active
	I0416 00:59:40.694457   62139 main.go:141] libmachine: (old-k8s-version-800769) Ensuring network mk-old-k8s-version-800769 is active
	I0416 00:59:40.694883   62139 main.go:141] libmachine: (old-k8s-version-800769) Getting domain xml...
	I0416 00:59:40.695720   62139 main.go:141] libmachine: (old-k8s-version-800769) Creating domain...
	I0416 00:59:41.913001   62139 main.go:141] libmachine: (old-k8s-version-800769) Waiting to get IP...
	I0416 00:59:41.913874   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:41.914260   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:41.914318   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:41.914237   63071 retry.go:31] will retry after 261.032707ms: waiting for machine to come up
	I0416 00:59:39.436244   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.436664   61500 main.go:141] libmachine: (no-preload-572602) Found IP for machine: 192.168.39.121
	I0416 00:59:39.436686   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has current primary IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.436694   61500 main.go:141] libmachine: (no-preload-572602) Reserving static IP address...
	I0416 00:59:39.437114   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "no-preload-572602", mac: "52:54:00:fb:a5:f3", ip: "192.168.39.121"} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:39.437151   61500 main.go:141] libmachine: (no-preload-572602) Reserved static IP address: 192.168.39.121
	I0416 00:59:39.437183   61500 main.go:141] libmachine: (no-preload-572602) DBG | skip adding static IP to network mk-no-preload-572602 - found existing host DHCP lease matching {name: "no-preload-572602", mac: "52:54:00:fb:a5:f3", ip: "192.168.39.121"}
	I0416 00:59:39.437197   61500 main.go:141] libmachine: (no-preload-572602) Waiting for SSH to be available...
	I0416 00:59:39.437215   61500 main.go:141] libmachine: (no-preload-572602) DBG | Getting to WaitForSSH function...
	I0416 00:59:39.439255   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.439613   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:39.439642   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.439723   61500 main.go:141] libmachine: (no-preload-572602) DBG | Using SSH client type: external
	I0416 00:59:39.439756   61500 main.go:141] libmachine: (no-preload-572602) DBG | Using SSH private key: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/no-preload-572602/id_rsa (-rw-------)
	I0416 00:59:39.439799   61500 main.go:141] libmachine: (no-preload-572602) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.121 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18647-7542/.minikube/machines/no-preload-572602/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0416 00:59:39.439822   61500 main.go:141] libmachine: (no-preload-572602) DBG | About to run SSH command:
	I0416 00:59:39.439835   61500 main.go:141] libmachine: (no-preload-572602) DBG | exit 0
	I0416 00:59:39.565190   61500 main.go:141] libmachine: (no-preload-572602) DBG | SSH cmd err, output: <nil>: 
	I0416 00:59:39.565584   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetConfigRaw
	I0416 00:59:39.566223   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetIP
	I0416 00:59:39.568572   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.568869   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:39.568906   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.569083   61500 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602/config.json ...
	I0416 00:59:39.569300   61500 machine.go:94] provisionDockerMachine start ...
	I0416 00:59:39.569318   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 00:59:39.569526   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:59:39.571536   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.571842   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:39.571868   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.572004   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 00:59:39.572189   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:39.572352   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:39.572505   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 00:59:39.572751   61500 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:39.572974   61500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I0416 00:59:39.572991   61500 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 00:59:39.681544   61500 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 00:59:39.681574   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetMachineName
	I0416 00:59:39.681845   61500 buildroot.go:166] provisioning hostname "no-preload-572602"
	I0416 00:59:39.681874   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetMachineName
	I0416 00:59:39.682088   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:59:39.684694   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.685029   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:39.685063   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.685259   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 00:59:39.685453   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:39.685608   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:39.685737   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 00:59:39.685887   61500 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:39.686066   61500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I0416 00:59:39.686090   61500 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-572602 && echo "no-preload-572602" | sudo tee /etc/hostname
	I0416 00:59:39.804124   61500 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-572602
	
	I0416 00:59:39.804149   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:59:39.807081   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.807447   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:39.807480   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.807651   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 00:59:39.807860   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:39.808048   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:39.808202   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 00:59:39.808393   61500 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:39.808618   61500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I0416 00:59:39.808644   61500 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-572602' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-572602/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-572602' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 00:59:39.921781   61500 main.go:141] libmachine: SSH cmd err, output: <nil>: 
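
The shell snippet above is idempotent: if no line in /etc/hosts already ends with the node name, it either rewrites an existing 127.0.1.1 entry or appends one. A rough Go equivalent of that logic (a sketch only; minikube itself runs the shell version over SSH):

package main

import (
	"os"
	"regexp"
)

// ensureHostsEntry applies the same idempotent edit as the shell snippet:
// do nothing if some line already ends in the node name, otherwise rewrite
// an existing 127.0.1.1 entry or append a new one.
func ensureHostsEntry(path, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).Match(data) {
		return nil
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.Match(data) {
		data = loopback.ReplaceAll(data, []byte("127.0.1.1 "+name))
	} else {
		data = append(data, []byte("127.0.1.1 "+name+"\n")...)
	}
	return os.WriteFile(path, data, 0644)
}

func main() {
	_ = ensureHostsEntry("/etc/hosts", "no-preload-572602")
}
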
	I0416 00:59:39.921824   61500 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18647-7542/.minikube CaCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18647-7542/.minikube}
	I0416 00:59:39.921847   61500 buildroot.go:174] setting up certificates
	I0416 00:59:39.921857   61500 provision.go:84] configureAuth start
	I0416 00:59:39.921872   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetMachineName
	I0416 00:59:39.922150   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetIP
	I0416 00:59:39.924726   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.925052   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:39.925081   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.925199   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:59:39.927315   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.927820   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:39.927869   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.927934   61500 provision.go:143] copyHostCerts
	I0416 00:59:39.928005   61500 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem, removing ...
	I0416 00:59:39.928031   61500 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem
	I0416 00:59:39.928122   61500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem (1082 bytes)
	I0416 00:59:39.928231   61500 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem, removing ...
	I0416 00:59:39.928241   61500 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem
	I0416 00:59:39.928284   61500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem (1123 bytes)
	I0416 00:59:39.928370   61500 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem, removing ...
	I0416 00:59:39.928379   61500 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem
	I0416 00:59:39.928428   61500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem (1675 bytes)
	I0416 00:59:39.928498   61500 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem org=jenkins.no-preload-572602 san=[127.0.0.1 192.168.39.121 localhost minikube no-preload-572602]
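
The server cert logged above is issued from the local CA with SANs covering 127.0.0.1, the VM IP, localhost, minikube and the node name. A hedged sketch of issuing such a certificate with Go's crypto/x509 (the throwaway CA and every parameter other than the SANs and org string are assumptions, not minikube's provision code):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// issueServerCert signs a server certificate carrying the SANs from the
// log line above (two IPs plus localhost, minikube and the node name).
func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-572602"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.121")},
		DNSNames:     []string{"localhost", "minikube", "no-preload-572602"},
	}
	return x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
}

func main() {
	// Throwaway CA standing in for .minikube/certs/ca.pem / ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(2),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(1, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	ca, _ := x509.ParseCertificate(caDER)
	der, err := issueServerCert(ca, caKey)
	fmt.Println(len(der), err)
}
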
	I0416 00:59:40.000129   61500 provision.go:177] copyRemoteCerts
	I0416 00:59:40.000200   61500 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 00:59:40.000236   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:59:40.002726   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.003028   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:40.003057   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.003168   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 00:59:40.003351   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:40.003471   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 00:59:40.003577   61500 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/no-preload-572602/id_rsa Username:docker}
	I0416 00:59:40.087468   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0416 00:59:40.115336   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0416 00:59:40.142695   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0416 00:59:40.169631   61500 provision.go:87] duration metric: took 247.759459ms to configureAuth
	I0416 00:59:40.169657   61500 buildroot.go:189] setting minikube options for container-runtime
	I0416 00:59:40.169824   61500 config.go:182] Loaded profile config "no-preload-572602": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0416 00:59:40.169906   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:59:40.172164   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.172503   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:40.172531   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.172689   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 00:59:40.172875   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:40.173033   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:40.173182   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 00:59:40.173311   61500 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:40.173465   61500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I0416 00:59:40.173480   61500 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0416 00:59:40.437143   61500 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0416 00:59:40.437182   61500 machine.go:97] duration metric: took 867.868152ms to provisionDockerMachine
	I0416 00:59:40.437194   61500 start.go:293] postStartSetup for "no-preload-572602" (driver="kvm2")
	I0416 00:59:40.437211   61500 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 00:59:40.437233   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 00:59:40.437536   61500 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 00:59:40.437564   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:59:40.440246   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.440596   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:40.440637   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.440759   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 00:59:40.440981   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:40.441186   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 00:59:40.441319   61500 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/no-preload-572602/id_rsa Username:docker}
	I0416 00:59:40.524157   61500 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 00:59:40.528556   61500 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 00:59:40.528580   61500 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/addons for local assets ...
	I0416 00:59:40.528647   61500 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/files for local assets ...
	I0416 00:59:40.528756   61500 filesync.go:149] local asset: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem -> 148972.pem in /etc/ssl/certs
	I0416 00:59:40.528877   61500 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 00:59:40.538275   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /etc/ssl/certs/148972.pem (1708 bytes)
	I0416 00:59:40.562693   61500 start.go:296] duration metric: took 125.48438ms for postStartSetup
	I0416 00:59:40.562728   61500 fix.go:56] duration metric: took 19.484586221s for fixHost
	I0416 00:59:40.562746   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:59:40.565410   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.565717   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:40.565756   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.565920   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 00:59:40.566103   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:40.566269   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:40.566438   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 00:59:40.566587   61500 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:40.566738   61500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I0416 00:59:40.566749   61500 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 00:59:40.669778   61500 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713229180.641382554
	
	I0416 00:59:40.669802   61500 fix.go:216] guest clock: 1713229180.641382554
	I0416 00:59:40.669811   61500 fix.go:229] Guest: 2024-04-16 00:59:40.641382554 +0000 UTC Remote: 2024-04-16 00:59:40.56273146 +0000 UTC m=+293.069651959 (delta=78.651094ms)
	I0416 00:59:40.669839   61500 fix.go:200] guest clock delta is within tolerance: 78.651094ms
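
The three fix.go lines above compare the guest's `date +%s.%N` reading against the host clock and accept the skew because it is below the tolerance. Checking the arithmetic with the values from the log (illustrative only):

package main

import (
	"fmt"
	"time"
)

func main() {
	guest := time.Unix(1713229180, 641382554)                      // VM's `date +%s.%N`
	host := time.Date(2024, 4, 16, 0, 59, 40, 562731460, time.UTC) // host-side reading
	fmt.Println(guest.Sub(host))                                   // 78.651094ms
}
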
	I0416 00:59:40.669857   61500 start.go:83] releasing machines lock for "no-preload-572602", held for 19.591740017s
	I0416 00:59:40.669883   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 00:59:40.670163   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetIP
	I0416 00:59:40.672800   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.673187   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:40.673234   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.673386   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 00:59:40.673841   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 00:59:40.673993   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 00:59:40.674067   61500 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 00:59:40.674115   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:59:40.674155   61500 ssh_runner.go:195] Run: cat /version.json
	I0416 00:59:40.674174   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:59:40.676617   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.676776   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.677006   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:40.677030   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.677126   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 00:59:40.677277   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:40.677299   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.677336   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:40.677499   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 00:59:40.677511   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 00:59:40.677635   61500 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/no-preload-572602/id_rsa Username:docker}
	I0416 00:59:40.677768   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:40.678072   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 00:59:40.678224   61500 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/no-preload-572602/id_rsa Username:docker}
	I0416 00:59:40.787049   61500 ssh_runner.go:195] Run: systemctl --version
	I0416 00:59:40.793568   61500 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0416 00:59:40.941445   61500 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 00:59:40.949062   61500 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 00:59:40.949177   61500 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 00:59:40.966425   61500 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 00:59:40.966454   61500 start.go:494] detecting cgroup driver to use...
	I0416 00:59:40.966525   61500 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 00:59:40.985126   61500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 00:59:40.999931   61500 docker.go:217] disabling cri-docker service (if available) ...
	I0416 00:59:41.000004   61500 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 00:59:41.015597   61500 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 00:59:41.030610   61500 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 00:59:41.151240   61500 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 00:59:41.312384   61500 docker.go:233] disabling docker service ...
	I0416 00:59:41.312464   61500 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 00:59:41.329263   61500 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 00:59:41.345192   61500 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 00:59:41.463330   61500 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 00:59:41.595259   61500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0416 00:59:41.610495   61500 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 00:59:41.632527   61500 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0416 00:59:41.632580   61500 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:59:41.644625   61500 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 00:59:41.644723   61500 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:59:41.656056   61500 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:59:41.667069   61500 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:59:41.682783   61500 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 00:59:41.694760   61500 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:59:41.712505   61500 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:59:41.737338   61500 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:59:41.747518   61500 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 00:59:41.756586   61500 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0416 00:59:41.756656   61500 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0416 00:59:41.769230   61500 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
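
The failed sysctl above is expected: the net.bridge.bridge-nf-call-iptables key only exists once the br_netfilter module is loaded, which is why the next steps modprobe it and then enable IPv4 forwarding. A compressed sketch of that fallback (an assumption about how the steps chain, not minikube source):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The sysctl key only appears once br_netfilter is loaded, so a failed
	// read is treated as "load the module", not as a fatal error.
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		_ = exec.Command("sudo", "modprobe", "br_netfilter").Run()
	}
	out, err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").CombinedOutput()
	fmt.Printf("%s%v\n", out, err)
}
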
	I0416 00:59:41.778424   61500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 00:59:41.894135   61500 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0416 00:59:42.039732   61500 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0416 00:59:42.039812   61500 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0416 00:59:42.044505   61500 start.go:562] Will wait 60s for crictl version
	I0416 00:59:42.044551   61500 ssh_runner.go:195] Run: which crictl
	I0416 00:59:42.049632   61500 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 00:59:42.106886   61500 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0416 00:59:42.106981   61500 ssh_runner.go:195] Run: crio --version
	I0416 00:59:42.137092   61500 ssh_runner.go:195] Run: crio --version
	I0416 00:59:42.170036   61500 out.go:177] * Preparing Kubernetes v1.30.0-rc.2 on CRI-O 1.29.1 ...
	I0416 00:59:42.171395   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetIP
	I0416 00:59:42.174790   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:42.175217   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:42.175250   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:42.175506   61500 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0416 00:59:42.180987   61500 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 00:59:42.198472   61500 kubeadm.go:877] updating cluster {Name:no-preload-572602 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:no-preload-572602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 00:59:42.198595   61500 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime crio
	I0416 00:59:42.198639   61500 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 00:59:42.236057   61500 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0-rc.2". assuming images are not preloaded.
	I0416 00:59:42.236084   61500 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0-rc.2 registry.k8s.io/kube-controller-manager:v1.30.0-rc.2 registry.k8s.io/kube-scheduler:v1.30.0-rc.2 registry.k8s.io/kube-proxy:v1.30.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0416 00:59:42.236146   61500 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 00:59:42.236166   61500 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0416 00:59:42.236180   61500 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0-rc.2
	I0416 00:59:42.236182   61500 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0416 00:59:42.236212   61500 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0-rc.2
	I0416 00:59:42.236238   61500 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0416 00:59:42.236287   61500 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.2
	I0416 00:59:42.236164   61500 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0-rc.2
	I0416 00:59:42.237740   61500 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0416 00:59:42.237756   61500 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0416 00:59:42.237763   61500 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0-rc.2
	I0416 00:59:42.237779   61500 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0-rc.2
	I0416 00:59:42.237740   61500 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0416 00:59:42.237848   61500 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.2
	I0416 00:59:42.237847   61500 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 00:59:42.238087   61500 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0-rc.2
	I0416 00:59:42.410682   61500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0-rc.2
	I0416 00:59:42.445824   61500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0416 00:59:42.446874   61500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0416 00:59:42.448854   61500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0-rc.2
	I0416 00:59:42.449450   61500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0416 00:59:42.452121   61500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0-rc.2
	I0416 00:59:42.458966   61500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0-rc.2
	I0416 00:59:42.480556   61500 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0-rc.2" does not exist at hash "461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6" in container runtime
	I0416 00:59:42.480608   61500 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0-rc.2
	I0416 00:59:42.480670   61500 ssh_runner.go:195] Run: which crictl
	I0416 00:59:42.176660   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:42.177053   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:42.177084   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:42.177031   63071 retry.go:31] will retry after 268.951362ms: waiting for machine to come up
	I0416 00:59:42.447724   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:42.448132   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:42.448159   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:42.448097   63071 retry.go:31] will retry after 293.793417ms: waiting for machine to come up
	I0416 00:59:42.743375   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:42.743845   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:42.743874   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:42.743801   63071 retry.go:31] will retry after 494.163372ms: waiting for machine to come up
	I0416 00:59:43.239314   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:43.239761   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:43.239790   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:43.239708   63071 retry.go:31] will retry after 698.851999ms: waiting for machine to come up
	I0416 00:59:43.939998   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:43.940577   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:43.940607   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:43.940535   63071 retry.go:31] will retry after 764.693004ms: waiting for machine to come up
	I0416 00:59:44.706335   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:44.706673   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:44.706724   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:44.706626   63071 retry.go:31] will retry after 874.082115ms: waiting for machine to come up
	I0416 00:59:45.581896   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:45.582331   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:45.582361   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:45.582280   63071 retry.go:31] will retry after 966.259345ms: waiting for machine to come up
	I0416 00:59:46.550671   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:46.551111   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:46.551140   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:46.551062   63071 retry.go:31] will retry after 1.191034468s: waiting for machine to come up
	I0416 00:59:42.583284   61500 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0416 00:59:42.583332   61500 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0416 00:59:42.583377   61500 ssh_runner.go:195] Run: which crictl
	I0416 00:59:42.724785   61500 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0-rc.2" does not exist at hash "ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b" in container runtime
	I0416 00:59:42.724827   61500 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.2
	I0416 00:59:42.724878   61500 ssh_runner.go:195] Run: which crictl
	I0416 00:59:42.724899   61500 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0416 00:59:42.724938   61500 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0416 00:59:42.724938   61500 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0-rc.2" does not exist at hash "35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e" in container runtime
	I0416 00:59:42.724964   61500 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0-rc.2
	I0416 00:59:42.724979   61500 ssh_runner.go:195] Run: which crictl
	I0416 00:59:42.724993   61500 ssh_runner.go:195] Run: which crictl
	I0416 00:59:42.725019   61500 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0-rc.2" does not exist at hash "65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1" in container runtime
	I0416 00:59:42.725051   61500 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0-rc.2
	I0416 00:59:42.725063   61500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0-rc.2
	I0416 00:59:42.725088   61500 ssh_runner.go:195] Run: which crictl
	I0416 00:59:42.725102   61500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0416 00:59:42.739346   61500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0416 00:59:42.739764   61500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0-rc.2
	I0416 00:59:42.787888   61500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0-rc.2
	I0416 00:59:42.787977   61500 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.2
	I0416 00:59:42.788024   61500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0-rc.2
	I0416 00:59:42.788084   61500 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.2
	I0416 00:59:42.815167   61500 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0416 00:59:42.815274   61500 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0416 00:59:42.845627   61500 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0416 00:59:42.845741   61500 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0416 00:59:42.848065   61500 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.2
	I0416 00:59:42.848134   61500 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.2
	I0416 00:59:42.880543   61500 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.2
	I0416 00:59:42.880557   61500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.2 (exists)
	I0416 00:59:42.880575   61500 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.2
	I0416 00:59:42.880628   61500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.2
	I0416 00:59:42.880648   61500 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.2
	I0416 00:59:42.907207   61500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0416 00:59:42.907245   61500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0416 00:59:42.907269   61500 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.2
	I0416 00:59:42.907295   61500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.2 (exists)
	I0416 00:59:42.907334   61500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.2 (exists)
	I0416 00:59:42.907350   61500 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0-rc.2
	I0416 00:59:43.138705   61500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 00:59:44.951278   61500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.2: (2.07061835s)
	I0416 00:59:44.951295   61500 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0-rc.2: (2.04392036s)
	I0416 00:59:44.951348   61500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0-rc.2 (exists)
	I0416 00:59:44.951309   61500 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.2 from cache
	I0416 00:59:44.951364   61500 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.812619758s)
	I0416 00:59:44.951410   61500 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0416 00:59:44.951448   61500 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 00:59:44.951374   61500 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0416 00:59:44.951506   61500 ssh_runner.go:195] Run: which crictl
	I0416 00:59:44.951508   61500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0416 00:59:47.744187   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:47.744683   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:47.744712   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:47.744637   63071 retry.go:31] will retry after 2.263605663s: waiting for machine to come up
	I0416 00:59:50.011136   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:50.011605   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:50.011632   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:50.011566   63071 retry.go:31] will retry after 2.648982849s: waiting for machine to come up
	I0416 00:59:48.656623   61500 ssh_runner.go:235] Completed: which crictl: (3.705085257s)
	I0416 00:59:48.656705   61500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 00:59:48.656715   61500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (3.705109475s)
	I0416 00:59:48.656743   61500 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0416 00:59:48.656769   61500 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0416 00:59:48.656798   61500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0416 00:59:50.560030   61500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.903209359s)
	I0416 00:59:50.560071   61500 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0416 00:59:50.560085   61500 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.90335887s)
	I0416 00:59:50.560096   61500 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.2
	I0416 00:59:50.560148   61500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.2
	I0416 00:59:50.560151   61500 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0416 00:59:50.560309   61500 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0416 00:59:52.662443   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:52.662852   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:52.662883   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:52.662815   63071 retry.go:31] will retry after 2.183508059s: waiting for machine to come up
	I0416 00:59:54.849225   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:54.849701   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:54.849734   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:54.849649   63071 retry.go:31] will retry after 3.201585234s: waiting for machine to come up
	I0416 00:59:52.739620   61500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.2: (2.179436189s)
	I0416 00:59:52.739658   61500 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.2 from cache
	I0416 00:59:52.739688   61500 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.2
	I0416 00:59:52.739697   61500 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.179365348s)
	I0416 00:59:52.739724   61500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0416 00:59:52.739747   61500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.2
	I0416 00:59:55.098350   61500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.2: (2.358579586s)
	I0416 00:59:55.098381   61500 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.2 from cache
	I0416 00:59:55.098408   61500 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0-rc.2
	I0416 00:59:55.098454   61500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.2
	I0416 00:59:57.166586   61500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.2: (2.068105529s)
	I0416 00:59:57.166615   61500 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.2 from cache
	I0416 00:59:57.166644   61500 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0416 00:59:57.166697   61500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
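
The image lines above follow one pattern per image: `podman image inspect --format {{.Id}}` to see whether the expected image is already in CRI-O's store, `crictl rmi` to drop a stale tag, then `podman load -i` on the tarball staged from the local cache. A condensed per-image sketch (image name and path are copied from the log; the loop structure itself is an assumption, not minikube source):

package main

import (
	"fmt"
	"os/exec"
)

// loadCached loads one cached image tarball into CRI-O's store unless an
// image for the ref is already present (mirrors the inspect/rmi/load lines).
func loadCached(ref, tarball string) error {
	if err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", ref).Run(); err == nil {
		return nil // present; the log additionally compares the ID to the expected hash
	}
	_ = exec.Command("sudo", "/usr/bin/crictl", "rmi", ref).Run() // drop a stale tag, ignore "not found"
	return exec.Command("sudo", "podman", "load", "-i", tarball).Run()
}

func main() {
	err := loadCached("registry.k8s.io/kube-proxy:v1.30.0-rc.2",
		"/var/lib/minikube/images/kube-proxy_v1.30.0-rc.2")
	fmt.Println(err)
}
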
	I0416 00:59:59.394339   62747 start.go:364] duration metric: took 1m16.499681915s to acquireMachinesLock for "embed-certs-617092"
	I0416 00:59:59.394389   62747 start.go:96] Skipping create...Using existing machine configuration
	I0416 00:59:59.394412   62747 fix.go:54] fixHost starting: 
	I0416 00:59:59.394834   62747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:59:59.394896   62747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:59:59.414712   62747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38637
	I0416 00:59:59.415464   62747 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:59:59.416123   62747 main.go:141] libmachine: Using API Version  1
	I0416 00:59:59.416150   62747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:59:59.416436   62747 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:59:59.416623   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 00:59:59.416786   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetState
	I0416 00:59:59.418413   62747 fix.go:112] recreateIfNeeded on embed-certs-617092: state=Stopped err=<nil>
	I0416 00:59:59.418449   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	W0416 00:59:59.418609   62747 fix.go:138] unexpected machine state, will restart: <nil>
	I0416 00:59:59.420560   62747 out.go:177] * Restarting existing kvm2 VM for "embed-certs-617092" ...
	I0416 00:59:58.052613   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.053048   62139 main.go:141] libmachine: (old-k8s-version-800769) Found IP for machine: 192.168.83.98
	I0416 00:59:58.053073   62139 main.go:141] libmachine: (old-k8s-version-800769) Reserving static IP address...
	I0416 00:59:58.053089   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has current primary IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.053517   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "old-k8s-version-800769", mac: "52:54:00:a1:ad:da", ip: "192.168.83.98"} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.053549   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | skip adding static IP to network mk-old-k8s-version-800769 - found existing host DHCP lease matching {name: "old-k8s-version-800769", mac: "52:54:00:a1:ad:da", ip: "192.168.83.98"}
	I0416 00:59:58.053569   62139 main.go:141] libmachine: (old-k8s-version-800769) Reserved static IP address: 192.168.83.98
	I0416 00:59:58.053587   62139 main.go:141] libmachine: (old-k8s-version-800769) Waiting for SSH to be available...
	I0416 00:59:58.053602   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | Getting to WaitForSSH function...
	I0416 00:59:58.055598   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.055907   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.055941   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.056038   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | Using SSH client type: external
	I0416 00:59:58.056088   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | Using SSH private key: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769/id_rsa (-rw-------)
	I0416 00:59:58.056132   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.98 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0416 00:59:58.056149   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | About to run SSH command:
	I0416 00:59:58.056162   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | exit 0
	I0416 00:59:58.185675   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | SSH cmd err, output: <nil>: 
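WaitForSSH simply retries `exit 0` through an external ssh client with the options dumped above (key-only auth, no known-hosts checking) until it exits cleanly. The standalone form of that probe:

    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -o PasswordAuthentication=no -o ConnectTimeout=10 -o IdentitiesOnly=yes \
        -i /home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769/id_rsa \
        docker@192.168.83.98 'exit 0' && echo "SSH is reachable"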
	I0416 00:59:58.186055   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetConfigRaw
	I0416 00:59:58.186802   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetIP
	I0416 00:59:58.189772   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.190219   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.190257   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.190448   62139 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/config.json ...
	I0416 00:59:58.190666   62139 machine.go:94] provisionDockerMachine start ...
	I0416 00:59:58.190685   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	I0416 00:59:58.190902   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:58.193570   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.193954   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.193982   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.194139   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:58.194337   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.194492   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.194636   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:58.194786   62139 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:58.195041   62139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.83.98 22 <nil> <nil>}
	I0416 00:59:58.195056   62139 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 00:59:58.321824   62139 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 00:59:58.321857   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetMachineName
	I0416 00:59:58.322146   62139 buildroot.go:166] provisioning hostname "old-k8s-version-800769"
	I0416 00:59:58.322175   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetMachineName
	I0416 00:59:58.322381   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:58.324941   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.325288   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.325316   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.325423   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:58.325613   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.325776   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.325936   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:58.326109   62139 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:58.326322   62139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.83.98 22 <nil> <nil>}
	I0416 00:59:58.326339   62139 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-800769 && echo "old-k8s-version-800769" | sudo tee /etc/hostname
	I0416 00:59:58.455194   62139 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-800769
	
	I0416 00:59:58.455236   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:58.458021   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.458423   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.458458   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.458662   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:58.458848   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.459013   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.459162   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:58.459353   62139 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:58.459507   62139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.83.98 22 <nil> <nil>}
	I0416 00:59:58.459524   62139 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-800769' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-800769/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-800769' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 00:59:58.587318   62139 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 00:59:58.587351   62139 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18647-7542/.minikube CaCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18647-7542/.minikube}
	I0416 00:59:58.587391   62139 buildroot.go:174] setting up certificates
	I0416 00:59:58.587400   62139 provision.go:84] configureAuth start
	I0416 00:59:58.587413   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetMachineName
	I0416 00:59:58.587686   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetIP
	I0416 00:59:58.590415   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.590739   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.590778   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.590880   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:58.593282   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.593728   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.593759   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.593931   62139 provision.go:143] copyHostCerts
	I0416 00:59:58.593988   62139 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem, removing ...
	I0416 00:59:58.594007   62139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem
	I0416 00:59:58.594079   62139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem (1082 bytes)
	I0416 00:59:58.594213   62139 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem, removing ...
	I0416 00:59:58.594222   62139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem
	I0416 00:59:58.594263   62139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem (1123 bytes)
	I0416 00:59:58.594372   62139 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem, removing ...
	I0416 00:59:58.594383   62139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem
	I0416 00:59:58.594408   62139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem (1675 bytes)
	I0416 00:59:58.594470   62139 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-800769 san=[127.0.0.1 192.168.83.98 localhost minikube old-k8s-version-800769]
	I0416 00:59:58.692127   62139 provision.go:177] copyRemoteCerts
	I0416 00:59:58.692197   62139 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 00:59:58.692232   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:58.694858   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.695231   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.695278   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.695507   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:58.695693   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.695852   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:58.695994   62139 sshutil.go:53] new ssh client: &{IP:192.168.83.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769/id_rsa Username:docker}
	I0416 00:59:58.783458   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0416 00:59:58.811124   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0416 00:59:58.836495   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
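configureAuth regenerates the server certificate with the SAN list shown above (127.0.0.1, 192.168.83.98, localhost, minikube, old-k8s-version-800769) and copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest. A quick illustrative check of the SANs on the copied certificate, run inside the VM (not part of the test itself):

    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'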
	I0416 00:59:58.862044   62139 provision.go:87] duration metric: took 274.632117ms to configureAuth
	I0416 00:59:58.862068   62139 buildroot.go:189] setting minikube options for container-runtime
	I0416 00:59:58.862278   62139 config.go:182] Loaded profile config "old-k8s-version-800769": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0416 00:59:58.862361   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:58.865352   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.865795   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.865829   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.866043   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:58.866228   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.866435   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.866625   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:58.866805   62139 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:58.867008   62139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.83.98 22 <nil> <nil>}
	I0416 00:59:58.867026   62139 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0416 00:59:59.143874   62139 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0416 00:59:59.143900   62139 machine.go:97] duration metric: took 953.218972ms to provisionDockerMachine
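The `%!s(MISSING)` in the command above is a logging artifact: the command string contains a literal %s that the logger tries to interpret as a format verb. With that restored, the step writes an environment drop-in for CRI-O and restarts the service:

    sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio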
	I0416 00:59:59.143914   62139 start.go:293] postStartSetup for "old-k8s-version-800769" (driver="kvm2")
	I0416 00:59:59.143927   62139 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 00:59:59.143972   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	I0416 00:59:59.144277   62139 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 00:59:59.144302   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:59.147021   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.147355   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:59.147385   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.147649   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:59.147871   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:59.148036   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:59.148174   62139 sshutil.go:53] new ssh client: &{IP:192.168.83.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769/id_rsa Username:docker}
	I0416 00:59:59.236981   62139 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 00:59:59.241388   62139 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 00:59:59.241411   62139 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/addons for local assets ...
	I0416 00:59:59.241469   62139 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/files for local assets ...
	I0416 00:59:59.241534   62139 filesync.go:149] local asset: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem -> 148972.pem in /etc/ssl/certs
	I0416 00:59:59.241619   62139 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 00:59:59.251688   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /etc/ssl/certs/148972.pem (1708 bytes)
	I0416 00:59:59.275189   62139 start.go:296] duration metric: took 131.262042ms for postStartSetup
	I0416 00:59:59.275227   62139 fix.go:56] duration metric: took 18.605201288s for fixHost
	I0416 00:59:59.275250   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:59.277804   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.278153   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:59.278186   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.278341   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:59.278581   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:59.278741   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:59.278908   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:59.279068   62139 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:59.279233   62139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.83.98 22 <nil> <nil>}
	I0416 00:59:59.279243   62139 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 00:59:59.394108   62139 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713229199.360202150
	
	I0416 00:59:59.394141   62139 fix.go:216] guest clock: 1713229199.360202150
	I0416 00:59:59.394152   62139 fix.go:229] Guest: 2024-04-16 00:59:59.36020215 +0000 UTC Remote: 2024-04-16 00:59:59.27523174 +0000 UTC m=+217.222314955 (delta=84.97041ms)
	I0416 00:59:59.394211   62139 fix.go:200] guest clock delta is within tolerance: 84.97041ms
	I0416 00:59:59.394218   62139 start.go:83] releasing machines lock for "old-k8s-version-800769", held for 18.724230851s
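The guest clock check a few lines up samples the VM's time with `date +%s.%N` (again mangled to %!s(MISSING)/%!N(MISSING) by the logger) and compares it to the host; the 84.97 ms delta is inside the tolerance, so no time sync is forced. The comparison amounts to (SSH key path abbreviated as a placeholder):

    guest=$(ssh -i <id_rsa> docker@192.168.83.98 'date +%s.%N')
    host=$(date +%s.%N)
    echo "clock delta: $(echo "$host - $guest" | bc) s"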
	I0416 00:59:59.394252   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	I0416 00:59:59.394554   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetIP
	I0416 00:59:59.397241   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.397670   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:59.397703   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.397897   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	I0416 00:59:59.398460   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	I0416 00:59:59.398650   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	I0416 00:59:59.398740   62139 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 00:59:59.398782   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:59.399049   62139 ssh_runner.go:195] Run: cat /version.json
	I0416 00:59:59.399072   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:59.401397   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.401656   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.401802   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:59.401825   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.401964   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:59.402017   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.402089   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:59.402173   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:59.402248   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:59.402320   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:59.402376   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:59.402430   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:59.402577   62139 sshutil.go:53] new ssh client: &{IP:192.168.83.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769/id_rsa Username:docker}
	I0416 00:59:59.402638   62139 sshutil.go:53] new ssh client: &{IP:192.168.83.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769/id_rsa Username:docker}
	I0416 00:59:59.481834   62139 ssh_runner.go:195] Run: systemctl --version
	I0416 00:59:59.516372   62139 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0416 00:59:59.666722   62139 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 00:59:59.674165   62139 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 00:59:59.674226   62139 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 00:59:59.695545   62139 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 00:59:59.695573   62139 start.go:494] detecting cgroup driver to use...
	I0416 00:59:59.695646   62139 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 00:59:59.715091   62139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 00:59:59.732004   62139 docker.go:217] disabling cri-docker service (if available) ...
	I0416 00:59:59.732060   62139 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 00:59:59.753217   62139 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 00:59:59.768513   62139 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 00:59:59.898693   62139 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 01:00:00.066535   62139 docker.go:233] disabling docker service ...
	I0416 01:00:00.066607   62139 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 01:00:00.084512   62139 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 01:00:00.097714   62139 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 01:00:00.232901   62139 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 01:00:00.378379   62139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0416 01:00:00.395191   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 01:00:00.416631   62139 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0416 01:00:00.416695   62139 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:00.428712   62139 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 01:00:00.428774   62139 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:00.442687   62139 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:00.454631   62139 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:00.466151   62139 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 01:00:00.478459   62139 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 01:00:00.489957   62139 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0416 01:00:00.490035   62139 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0416 01:00:00.506087   62139 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 01:00:00.518100   62139 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 01:00:00.676317   62139 ssh_runner.go:195] Run: sudo systemctl restart crio
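The steps from the crictl.yaml write down to the restart point crictl at the CRI-O socket, patch /etc/crio/crio.conf.d/02-crio.conf for the pause image and the cgroupfs driver, pin conmon to the pod cgroup, load br_netfilter (the sysctl probe failed only because the module was not loaded yet), enable IPv4 forwarding, and restart CRI-O. With the %!s(MISSING) artifacts restored, the sequence is:

    sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
" | sudo tee /etc/crictl.yaml
    conf=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$conf"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
    sudo sed -i '/conmon_cgroup = .*/d' "$conf"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"
    sudo modprobe br_netfilter
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
    sudo systemctl daemon-reload && sudo systemctl restart crio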
	I0416 01:00:00.869766   62139 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0416 01:00:00.869855   62139 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0416 01:00:00.875363   62139 start.go:562] Will wait 60s for crictl version
	I0416 01:00:00.875424   62139 ssh_runner.go:195] Run: which crictl
	I0416 01:00:00.880947   62139 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 01:00:00.924780   62139 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0416 01:00:00.924852   62139 ssh_runner.go:195] Run: crio --version
	I0416 01:00:00.958390   62139 ssh_runner.go:195] Run: crio --version
	I0416 01:00:00.993114   62139 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0416 01:00:00.994513   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetIP
	I0416 01:00:00.997571   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 01:00:00.998032   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 01:00:00.998065   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 01:00:00.998273   62139 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0416 01:00:01.002750   62139 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 01:00:01.015709   62139 kubeadm.go:877] updating cluster {Name:old-k8s-version-800769 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-800769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.98 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 01:00:01.015810   62139 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0416 01:00:01.015853   62139 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 01:00:01.063257   62139 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0416 01:00:01.063331   62139 ssh_runner.go:195] Run: which lz4
	I0416 01:00:01.067973   62139 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0416 01:00:01.072369   62139 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0416 01:00:01.072400   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
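Because the guest has no /preloaded.tar.lz4, the ~473 MB preload tarball for v1.20.0/cri-o is copied over before the images can be extracted. The runner handles the privileged write to / itself; done by hand it would need a staging path the SSH user can write, so the /tmp hop below is an assumption and the key path is a placeholder:

    scp -i <id_rsa> \
        /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 \
        docker@192.168.83.98:/tmp/preloaded.tar.lz4
    ssh -i <id_rsa> docker@192.168.83.98 'sudo mv /tmp/preloaded.tar.lz4 /preloaded.tar.lz4'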
	I0416 00:59:57.817013   61500 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0416 00:59:57.817060   61500 cache_images.go:123] Successfully loaded all cached images
	I0416 00:59:57.817073   61500 cache_images.go:92] duration metric: took 15.580967615s to LoadCachedImages
	I0416 00:59:57.817087   61500 kubeadm.go:928] updating node { 192.168.39.121 8443 v1.30.0-rc.2 crio true true} ...
	I0416 00:59:57.817241   61500 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-572602 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.121
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-rc.2 ClusterName:no-preload-572602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
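The unit fragment above becomes the kubelet systemd drop-in: a few lines below it is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 322-byte scp) alongside a 357-byte kubelet.service, then activated with a daemon-reload and start. To see the merged result on the node:

    sudo systemctl daemon-reload
    systemctl cat kubelet          # base unit plus the 10-kubeadm.conf drop-in with the ExecStart override
    sudo systemctl start kubelet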
	I0416 00:59:57.817324   61500 ssh_runner.go:195] Run: crio config
	I0416 00:59:57.866116   61500 cni.go:84] Creating CNI manager for ""
	I0416 00:59:57.866140   61500 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 00:59:57.866154   61500 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 00:59:57.866189   61500 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.121 APIServerPort:8443 KubernetesVersion:v1.30.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-572602 NodeName:no-preload-572602 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.121"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.121 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 00:59:57.866325   61500 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.121
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-572602"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.121
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.121"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
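This rendered kubeadm config is staged as /var/tmp/minikube/kubeadm.yaml.new (the 2166-byte scp below) and later diffed against the active /var/tmp/minikube/kubeadm.yaml to decide whether the control plane needs reconfiguring; the log only shows the diff, so the copy-on-change step in this sketch is an assumption:

    if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
      echo "kubeadm config unchanged"
    else
      sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml   # assumed replacement step
    fi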
	
	I0416 00:59:57.866390   61500 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-rc.2
	I0416 00:59:57.876619   61500 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 00:59:57.876689   61500 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0416 00:59:57.886472   61500 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0416 00:59:57.903172   61500 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0416 00:59:57.919531   61500 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0416 00:59:57.936394   61500 ssh_runner.go:195] Run: grep 192.168.39.121	control-plane.minikube.internal$ /etc/hosts
	I0416 00:59:57.940161   61500 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.121	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 00:59:57.951997   61500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 00:59:58.089553   61500 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 00:59:58.117870   61500 certs.go:68] Setting up /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602 for IP: 192.168.39.121
	I0416 00:59:58.117926   61500 certs.go:194] generating shared ca certs ...
	I0416 00:59:58.117949   61500 certs.go:226] acquiring lock for ca certs: {Name:mkcfa1570e683d94647c63485e1bbb8cf0788316 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 00:59:58.118136   61500 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key
	I0416 00:59:58.118199   61500 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key
	I0416 00:59:58.118216   61500 certs.go:256] generating profile certs ...
	I0416 00:59:58.118351   61500 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602/client.key
	I0416 00:59:58.118446   61500 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602/apiserver.key.a3b1330f
	I0416 00:59:58.118505   61500 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602/proxy-client.key
	I0416 00:59:58.118664   61500 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem (1338 bytes)
	W0416 00:59:58.118708   61500 certs.go:480] ignoring /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897_empty.pem, impossibly tiny 0 bytes
	I0416 00:59:58.118721   61500 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem (1679 bytes)
	I0416 00:59:58.118756   61500 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem (1082 bytes)
	I0416 00:59:58.118786   61500 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem (1123 bytes)
	I0416 00:59:58.118814   61500 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem (1675 bytes)
	I0416 00:59:58.118874   61500 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem (1708 bytes)
	I0416 00:59:58.119738   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 00:59:58.150797   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 00:59:58.181693   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 00:59:58.231332   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0416 00:59:58.276528   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0416 00:59:58.301000   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0416 00:59:58.326090   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 00:59:58.350254   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0416 00:59:58.377597   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem --> /usr/share/ca-certificates/14897.pem (1338 bytes)
	I0416 00:59:58.401548   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /usr/share/ca-certificates/148972.pem (1708 bytes)
	I0416 00:59:58.425237   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 00:59:58.449748   61500 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 00:59:58.468346   61500 ssh_runner.go:195] Run: openssl version
	I0416 00:59:58.474164   61500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14897.pem && ln -fs /usr/share/ca-certificates/14897.pem /etc/ssl/certs/14897.pem"
	I0416 00:59:58.485674   61500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14897.pem
	I0416 00:59:58.490136   61500 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 23:49 /usr/share/ca-certificates/14897.pem
	I0416 00:59:58.490203   61500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14897.pem
	I0416 00:59:58.495781   61500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14897.pem /etc/ssl/certs/51391683.0"
	I0416 00:59:58.507047   61500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148972.pem && ln -fs /usr/share/ca-certificates/148972.pem /etc/ssl/certs/148972.pem"
	I0416 00:59:58.518007   61500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148972.pem
	I0416 00:59:58.522317   61500 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 23:49 /usr/share/ca-certificates/148972.pem
	I0416 00:59:58.522364   61500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148972.pem
	I0416 00:59:58.527809   61500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148972.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 00:59:58.538579   61500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 00:59:58.549188   61500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 00:59:58.553688   61500 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0416 00:59:58.553732   61500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 00:59:58.559175   61500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
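Each CA certificate is exposed to OpenSSL-based clients by symlinking it into /etc/ssl/certs under its subject-hash name; the hash printed by `openssl x509 -hash` (b5213941 for minikubeCA here, 3ec20f2e and 51391683 for the others) becomes the link name:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/$h.0"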
	I0416 00:59:58.570142   61500 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 00:59:58.574657   61500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0416 00:59:58.580560   61500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0416 00:59:58.586319   61500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0416 00:59:58.593938   61500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0416 00:59:58.599808   61500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0416 00:59:58.605583   61500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
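The run of `-checkend 86400` probes asks openssl whether each control-plane certificate expires within the next 24 hours (86400 s); a non-zero exit presumably triggers regeneration before kubeadm runs. One check in standalone form:

    if ! sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
      echo "etcd server certificate expires within 24h"
    fi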
	I0416 00:59:58.611301   61500 kubeadm.go:391] StartCluster: {Name:no-preload-572602 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.0-rc.2 ClusterName:no-preload-572602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 00:59:58.611385   61500 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0416 00:59:58.611439   61500 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 00:59:58.655244   61500 cri.go:89] found id: ""
	I0416 00:59:58.655315   61500 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0416 00:59:58.667067   61500 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0416 00:59:58.667082   61500 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0416 00:59:58.667088   61500 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0416 00:59:58.667128   61500 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0416 00:59:58.678615   61500 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0416 00:59:58.680097   61500 kubeconfig.go:125] found "no-preload-572602" server: "https://192.168.39.121:8443"
	I0416 00:59:58.683135   61500 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0416 00:59:58.695291   61500 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.121
	I0416 00:59:58.695323   61500 kubeadm.go:1154] stopping kube-system containers ...
	I0416 00:59:58.695337   61500 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0416 00:59:58.695380   61500 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 00:59:58.731743   61500 cri.go:89] found id: ""
	I0416 00:59:58.731832   61500 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0416 00:59:58.748125   61500 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 00:59:58.757845   61500 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 00:59:58.757865   61500 kubeadm.go:156] found existing configuration files:
	
	I0416 00:59:58.757918   61500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 00:59:58.766993   61500 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 00:59:58.767036   61500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 00:59:58.776831   61500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 00:59:58.786420   61500 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 00:59:58.786467   61500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 00:59:58.796067   61500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 00:59:58.805385   61500 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 00:59:58.805511   61500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 00:59:58.815313   61500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 00:59:58.826551   61500 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 00:59:58.826603   61500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 00:59:58.836652   61500 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 00:59:58.848671   61500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 00:59:58.967511   61500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:00.416009   61500 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.44846758s)
	I0416 01:00:00.416041   61500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:00.657784   61500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:00.741694   61500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:00.876550   61500 api_server.go:52] waiting for apiserver process to appear ...
	I0416 01:00:00.876630   61500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:01.377586   61500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:01.877647   61500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:01.950167   61500 api_server.go:72] duration metric: took 1.073614574s to wait for apiserver process to appear ...
	I0416 01:00:01.950201   61500 api_server.go:88] waiting for apiserver healthz status ...
	I0416 01:00:01.950224   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 01:00:01.950854   61500 api_server.go:269] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
	I0416 01:00:02.450437   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 00:59:59.421878   62747 main.go:141] libmachine: (embed-certs-617092) Calling .Start
	I0416 00:59:59.422036   62747 main.go:141] libmachine: (embed-certs-617092) Ensuring networks are active...
	I0416 00:59:59.422646   62747 main.go:141] libmachine: (embed-certs-617092) Ensuring network default is active
	I0416 00:59:59.422931   62747 main.go:141] libmachine: (embed-certs-617092) Ensuring network mk-embed-certs-617092 is active
	I0416 00:59:59.423360   62747 main.go:141] libmachine: (embed-certs-617092) Getting domain xml...
	I0416 00:59:59.424005   62747 main.go:141] libmachine: (embed-certs-617092) Creating domain...
	I0416 01:00:00.682582   62747 main.go:141] libmachine: (embed-certs-617092) Waiting to get IP...
	I0416 01:00:00.683684   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:00.684222   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:00.684277   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:00.684198   63257 retry.go:31] will retry after 196.582767ms: waiting for machine to come up
	I0416 01:00:00.882954   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:00.883544   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:00.883577   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:00.883482   63257 retry.go:31] will retry after 309.274692ms: waiting for machine to come up
	I0416 01:00:01.193848   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:01.194286   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:01.194325   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:01.194234   63257 retry.go:31] will retry after 379.332728ms: waiting for machine to come up
	I0416 01:00:01.574938   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:01.575371   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:01.575400   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:01.575318   63257 retry.go:31] will retry after 445.10423ms: waiting for machine to come up
	I0416 01:00:02.022081   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:02.022612   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:02.022636   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:02.022570   63257 retry.go:31] will retry after 692.025501ms: waiting for machine to come up
	I0416 01:00:02.716548   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:02.717032   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:02.717061   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:02.716992   63257 retry.go:31] will retry after 735.44304ms: waiting for machine to come up
	I0416 01:00:02.891638   62139 crio.go:462] duration metric: took 1.823700483s to copy over tarball
	I0416 01:00:02.891723   62139 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0416 01:00:06.137253   62139 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.245498092s)
	I0416 01:00:06.137283   62139 crio.go:469] duration metric: took 3.245614896s to extract the tarball
	I0416 01:00:06.137292   62139 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0416 01:00:06.181260   62139 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 01:00:06.224646   62139 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0416 01:00:06.224682   62139 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0416 01:00:06.224762   62139 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 01:00:06.224815   62139 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0416 01:00:06.224851   62139 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0416 01:00:06.224821   62139 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0416 01:00:06.224768   62139 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0416 01:00:06.224797   62139 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0416 01:00:06.225121   62139 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0416 01:00:06.224797   62139 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0416 01:00:06.226485   62139 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0416 01:00:06.226505   62139 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0416 01:00:06.226516   62139 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0416 01:00:06.226580   62139 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0416 01:00:06.226729   62139 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0416 01:00:06.227296   62139 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 01:00:06.227311   62139 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0416 01:00:06.227315   62139 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0416 01:00:06.397101   62139 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0416 01:00:06.431142   62139 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0416 01:00:06.433152   62139 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0416 01:00:06.433876   62139 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0416 01:00:06.434844   62139 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0416 01:00:06.441478   62139 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0416 01:00:06.441524   62139 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0416 01:00:06.441558   62139 ssh_runner.go:195] Run: which crictl
	I0416 01:00:06.450391   62139 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0416 01:00:06.506375   62139 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0416 01:00:06.540080   62139 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0416 01:00:06.540250   62139 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0416 01:00:06.540121   62139 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0416 01:00:06.540299   62139 ssh_runner.go:195] Run: which crictl
	I0416 01:00:06.540305   62139 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0416 01:00:06.540343   62139 ssh_runner.go:195] Run: which crictl
	I0416 01:00:06.613287   62139 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0416 01:00:06.613305   62139 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0416 01:00:06.613334   62139 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0416 01:00:06.613339   62139 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0416 01:00:06.613381   62139 ssh_runner.go:195] Run: which crictl
	I0416 01:00:06.613381   62139 ssh_runner.go:195] Run: which crictl
	I0416 01:00:06.613490   62139 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0416 01:00:06.613522   62139 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0416 01:00:06.613569   62139 ssh_runner.go:195] Run: which crictl
	I0416 01:00:06.613384   62139 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0416 01:00:06.613620   62139 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0416 01:00:06.613657   62139 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0416 01:00:06.613716   62139 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0416 01:00:06.613722   62139 ssh_runner.go:195] Run: which crictl
	I0416 01:00:06.613665   62139 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0416 01:00:06.619153   62139 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0416 01:00:06.638065   62139 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0416 01:00:06.734018   62139 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0416 01:00:06.734134   62139 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0416 01:00:06.749273   62139 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0416 01:00:06.750536   62139 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0416 01:00:06.750576   62139 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0416 01:00:06.750655   62139 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0416 01:00:06.750594   62139 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0416 01:00:06.790321   62139 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0416 01:00:06.803564   62139 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0416 01:00:07.060494   62139 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 01:00:05.541219   61500 api_server.go:279] https://192.168.39.121:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0416 01:00:05.541261   61500 api_server.go:103] status: https://192.168.39.121:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0416 01:00:05.541279   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 01:00:05.585252   61500 api_server.go:279] https://192.168.39.121:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0416 01:00:05.585284   61500 api_server.go:103] status: https://192.168.39.121:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0416 01:00:05.950871   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 01:00:05.970682   61500 api_server.go:279] https://192.168.39.121:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0416 01:00:05.970725   61500 api_server.go:103] status: https://192.168.39.121:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0416 01:00:06.450780   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 01:00:06.457855   61500 api_server.go:279] https://192.168.39.121:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0416 01:00:06.457888   61500 api_server.go:103] status: https://192.168.39.121:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0416 01:00:06.950519   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 01:00:06.955476   61500 api_server.go:279] https://192.168.39.121:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0416 01:00:06.955505   61500 api_server.go:103] status: https://192.168.39.121:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0416 01:00:07.451155   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 01:00:07.463138   61500 api_server.go:279] https://192.168.39.121:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0416 01:00:07.463172   61500 api_server.go:103] status: https://192.168.39.121:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0416 01:00:03.453566   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:03.454098   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:03.454131   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:03.454033   63257 retry.go:31] will retry after 838.732671ms: waiting for machine to come up
	I0416 01:00:04.294692   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:04.295209   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:04.295237   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:04.295158   63257 retry.go:31] will retry after 1.302969512s: waiting for machine to come up
	I0416 01:00:05.599886   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:05.600406   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:05.600435   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:05.600378   63257 retry.go:31] will retry after 1.199501225s: waiting for machine to come up
	I0416 01:00:06.801741   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:06.802134   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:06.802153   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:06.802107   63257 retry.go:31] will retry after 1.631018672s: waiting for machine to come up
	I0416 01:00:07.951263   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 01:00:07.961911   61500 api_server.go:279] https://192.168.39.121:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0416 01:00:07.961946   61500 api_server.go:103] status: https://192.168.39.121:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0416 01:00:08.450413   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 01:00:08.458651   61500 api_server.go:279] https://192.168.39.121:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0416 01:00:08.458683   61500 api_server.go:103] status: https://192.168.39.121:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0416 01:00:08.950297   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 01:00:08.955847   61500 api_server.go:279] https://192.168.39.121:8443/healthz returned 200:
	ok
	I0416 01:00:08.964393   61500 api_server.go:141] control plane version: v1.30.0-rc.2
	I0416 01:00:08.964422   61500 api_server.go:131] duration metric: took 7.01421218s to wait for apiserver health ...
	I0416 01:00:08.964432   61500 cni.go:84] Creating CNI manager for ""
	I0416 01:00:08.964445   61500 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 01:00:08.966249   61500 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0416 01:00:07.207951   62139 cache_images.go:92] duration metric: took 983.249797ms to LoadCachedImages
	W0416 01:00:07.286619   62139 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0416 01:00:07.286654   62139 kubeadm.go:928] updating node { 192.168.83.98 8443 v1.20.0 crio true true} ...
	I0416 01:00:07.286815   62139 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-800769 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.98
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-800769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 01:00:07.286916   62139 ssh_runner.go:195] Run: crio config
	I0416 01:00:07.338016   62139 cni.go:84] Creating CNI manager for ""
	I0416 01:00:07.338038   62139 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 01:00:07.338049   62139 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 01:00:07.338072   62139 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.98 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-800769 NodeName:old-k8s-version-800769 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.98"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.98 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0416 01:00:07.338207   62139 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.98
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-800769"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.98
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.98"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0416 01:00:07.338273   62139 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0416 01:00:07.349347   62139 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 01:00:07.349432   62139 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0416 01:00:07.361389   62139 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0416 01:00:07.379714   62139 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 01:00:07.397953   62139 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0416 01:00:07.416901   62139 ssh_runner.go:195] Run: grep 192.168.83.98	control-plane.minikube.internal$ /etc/hosts
	I0416 01:00:07.420904   62139 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.98	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 01:00:07.436685   62139 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 01:00:07.567945   62139 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 01:00:07.587829   62139 certs.go:68] Setting up /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769 for IP: 192.168.83.98
	I0416 01:00:07.587858   62139 certs.go:194] generating shared ca certs ...
	I0416 01:00:07.587880   62139 certs.go:226] acquiring lock for ca certs: {Name:mkcfa1570e683d94647c63485e1bbb8cf0788316 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:00:07.588087   62139 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key
	I0416 01:00:07.588155   62139 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key
	I0416 01:00:07.588171   62139 certs.go:256] generating profile certs ...
	I0416 01:00:07.606683   62139 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/client.key
	I0416 01:00:07.606823   62139 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/apiserver.key.efc35655
	I0416 01:00:07.606872   62139 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/proxy-client.key
	I0416 01:00:07.607040   62139 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem (1338 bytes)
	W0416 01:00:07.607087   62139 certs.go:480] ignoring /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897_empty.pem, impossibly tiny 0 bytes
	I0416 01:00:07.607114   62139 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem (1679 bytes)
	I0416 01:00:07.607172   62139 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem (1082 bytes)
	I0416 01:00:07.607204   62139 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem (1123 bytes)
	I0416 01:00:07.607234   62139 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem (1675 bytes)
	I0416 01:00:07.607283   62139 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem (1708 bytes)
	I0416 01:00:07.608127   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 01:00:07.658868   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 01:00:07.703378   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 01:00:07.743203   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0416 01:00:07.787335   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0416 01:00:07.823630   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0416 01:00:07.854198   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 01:00:07.881813   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0416 01:00:07.909698   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 01:00:07.935341   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem --> /usr/share/ca-certificates/14897.pem (1338 bytes)
	I0416 01:00:07.963102   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /usr/share/ca-certificates/148972.pem (1708 bytes)
	I0416 01:00:07.989657   62139 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 01:00:08.009203   62139 ssh_runner.go:195] Run: openssl version
	I0416 01:00:08.015677   62139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 01:00:08.027077   62139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:00:08.032096   62139 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:00:08.032179   62139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:00:08.038672   62139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 01:00:08.054256   62139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14897.pem && ln -fs /usr/share/ca-certificates/14897.pem /etc/ssl/certs/14897.pem"
	I0416 01:00:08.065287   62139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14897.pem
	I0416 01:00:08.069846   62139 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 23:49 /usr/share/ca-certificates/14897.pem
	I0416 01:00:08.069907   62139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14897.pem
	I0416 01:00:08.075899   62139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14897.pem /etc/ssl/certs/51391683.0"
	I0416 01:00:08.087272   62139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148972.pem && ln -fs /usr/share/ca-certificates/148972.pem /etc/ssl/certs/148972.pem"
	I0416 01:00:08.098494   62139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148972.pem
	I0416 01:00:08.103168   62139 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 23:49 /usr/share/ca-certificates/148972.pem
	I0416 01:00:08.103246   62139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148972.pem
	I0416 01:00:08.109202   62139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148972.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 01:00:08.120143   62139 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 01:00:08.125027   62139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0416 01:00:08.131716   62139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0416 01:00:08.138024   62139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0416 01:00:08.144291   62139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0416 01:00:08.150741   62139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0416 01:00:08.156931   62139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
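
For reference, the "-checkend 86400" probes above ask openssl whether each control-plane certificate will still be valid 24 hours from now (exit status 0 means yes). A minimal Go sketch of the same check, with a hypothetical helper name rather than minikube's actual certs.go code, could look like this:

package main

import (
	"fmt"
	"os/exec"
)

// certValidFor24h is a hypothetical helper: it returns true when the certificate
// at path is still valid for at least the next 86400 seconds, mirroring
// `openssl x509 -noout -in <path> -checkend 86400` as seen in the log above.
func certValidFor24h(path string) (bool, error) {
	err := exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", "86400").Run()
	if err == nil {
		return true, nil
	}
	if _, ok := err.(*exec.ExitError); ok {
		// openssl ran but reported that the cert expires within the window.
		return false, nil
	}
	return false, err // openssl not found, unreadable file, etc.
}

func main() {
	ok, err := certValidFor24h("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	fmt.Println(ok, err)
}
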
	I0416 01:00:08.163147   62139 kubeadm.go:391] StartCluster: {Name:old-k8s-version-800769 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-800769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.98 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 01:00:08.163254   62139 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0416 01:00:08.163298   62139 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 01:00:08.201923   62139 cri.go:89] found id: ""
	I0416 01:00:08.202000   62139 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0416 01:00:08.212441   62139 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0416 01:00:08.212462   62139 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0416 01:00:08.212467   62139 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0416 01:00:08.212514   62139 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0416 01:00:08.222702   62139 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0416 01:00:08.223670   62139 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-800769" does not appear in /home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0416 01:00:08.224332   62139 kubeconfig.go:62] /home/jenkins/minikube-integration/18647-7542/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-800769" cluster setting kubeconfig missing "old-k8s-version-800769" context setting]
	I0416 01:00:08.225340   62139 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/kubeconfig: {Name:mkbb3b028de7d57df8335e83f6dfa1b0eacb2fb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:00:08.343775   62139 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0416 01:00:08.355942   62139 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.83.98
	I0416 01:00:08.355986   62139 kubeadm.go:1154] stopping kube-system containers ...
	I0416 01:00:08.356007   62139 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0416 01:00:08.356081   62139 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 01:00:08.398894   62139 cri.go:89] found id: ""
	I0416 01:00:08.398976   62139 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0416 01:00:08.416343   62139 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 01:00:08.426901   62139 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 01:00:08.426926   62139 kubeadm.go:156] found existing configuration files:
	
	I0416 01:00:08.426981   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 01:00:08.437870   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 01:00:08.437942   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 01:00:08.452256   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 01:00:08.466375   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 01:00:08.466447   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 01:00:08.477246   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 01:00:08.487547   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 01:00:08.487615   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 01:00:08.504171   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 01:00:08.515265   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 01:00:08.515332   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 01:00:08.525186   62139 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 01:00:08.535381   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:08.657456   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:09.504421   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:09.781478   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:09.950913   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:10.044772   62139 api_server.go:52] waiting for apiserver process to appear ...
	I0416 01:00:10.044871   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:10.545002   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:11.045664   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:11.545083   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:12.045593   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
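
The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` lines above and below are a simple poll: rerun the lookup roughly every 500ms until the apiserver process shows up or a deadline passes. A rough sketch of that pattern follows; the helper name and timeout are assumptions for illustration, not minikube's actual api_server.go code:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForAPIServerPID polls pgrep every 500ms until a kube-apiserver process
// appears or ctx expires.
func waitForAPIServerPID(ctx context.Context) (string, error) {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		out, err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if pid := strings.TrimSpace(string(out)); err == nil && pid != "" {
			return pid, nil // newest matching PID, as -n requests
		}
		select {
		case <-ctx.Done():
			return "", ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	fmt.Println(waitForAPIServerPID(ctx))
}
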
	I0416 01:00:08.967643   61500 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0416 01:00:08.986743   61500 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0416 01:00:09.011229   61500 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 01:00:09.022810   61500 system_pods.go:59] 8 kube-system pods found
	I0416 01:00:09.022858   61500 system_pods.go:61] "coredns-7db6d8ff4d-xxlkb" [b1ec79ef-e16c-4feb-94ec-5dc85645867f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 01:00:09.022869   61500 system_pods.go:61] "etcd-no-preload-572602" [f29f3efe-bee4-4d8c-9d49-68008ad50a9d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0416 01:00:09.022881   61500 system_pods.go:61] "kube-apiserver-no-preload-572602" [dd740f94-bfd5-4043-9522-5b8a932690cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0416 01:00:09.022893   61500 system_pods.go:61] "kube-controller-manager-no-preload-572602" [2778e1a7-a7e3-4ad6-a265-552e78b6b195] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0416 01:00:09.022901   61500 system_pods.go:61] "kube-proxy-v9fmp" [70ab6236-c758-48eb-85a7-8f7721730a20] Running
	I0416 01:00:09.022908   61500 system_pods.go:61] "kube-scheduler-no-preload-572602" [bb8650bb-657e-49f1-9cee-4437879be44d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0416 01:00:09.022919   61500 system_pods.go:61] "metrics-server-569cc877fc-llsfr" [ad421803-6236-44df-a15d-c890a3a10dff] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 01:00:09.022925   61500 system_pods.go:61] "storage-provisioner" [ec2dd6e2-33db-4888-8945-9879821c92fc] Running
	I0416 01:00:09.022934   61500 system_pods.go:74] duration metric: took 11.661356ms to wait for pod list to return data ...
	I0416 01:00:09.022950   61500 node_conditions.go:102] verifying NodePressure condition ...
	I0416 01:00:09.027411   61500 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 01:00:09.027445   61500 node_conditions.go:123] node cpu capacity is 2
	I0416 01:00:09.027459   61500 node_conditions.go:105] duration metric: took 4.503043ms to run NodePressure ...
	I0416 01:00:09.027480   61500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:09.307796   61500 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0416 01:00:09.313534   61500 kubeadm.go:733] kubelet initialised
	I0416 01:00:09.313567   61500 kubeadm.go:734] duration metric: took 5.734401ms waiting for restarted kubelet to initialise ...
	I0416 01:00:09.313580   61500 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:00:09.320900   61500 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-xxlkb" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:09.327569   61500 pod_ready.go:97] node "no-preload-572602" hosting pod "coredns-7db6d8ff4d-xxlkb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-572602" has status "Ready":"False"
	I0416 01:00:09.327606   61500 pod_ready.go:81] duration metric: took 6.67541ms for pod "coredns-7db6d8ff4d-xxlkb" in "kube-system" namespace to be "Ready" ...
	E0416 01:00:09.327621   61500 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-572602" hosting pod "coredns-7db6d8ff4d-xxlkb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-572602" has status "Ready":"False"
	I0416 01:00:09.327633   61500 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:09.333714   61500 pod_ready.go:97] node "no-preload-572602" hosting pod "etcd-no-preload-572602" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-572602" has status "Ready":"False"
	I0416 01:00:09.333746   61500 pod_ready.go:81] duration metric: took 6.094825ms for pod "etcd-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	E0416 01:00:09.333759   61500 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-572602" hosting pod "etcd-no-preload-572602" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-572602" has status "Ready":"False"
	I0416 01:00:09.333768   61500 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:09.338980   61500 pod_ready.go:97] node "no-preload-572602" hosting pod "kube-apiserver-no-preload-572602" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-572602" has status "Ready":"False"
	I0416 01:00:09.339006   61500 pod_ready.go:81] duration metric: took 5.230122ms for pod "kube-apiserver-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	E0416 01:00:09.339017   61500 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-572602" hosting pod "kube-apiserver-no-preload-572602" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-572602" has status "Ready":"False"
	I0416 01:00:09.339033   61500 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:09.415418   61500 pod_ready.go:97] node "no-preload-572602" hosting pod "kube-controller-manager-no-preload-572602" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-572602" has status "Ready":"False"
	I0416 01:00:09.415450   61500 pod_ready.go:81] duration metric: took 76.40508ms for pod "kube-controller-manager-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	E0416 01:00:09.415462   61500 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-572602" hosting pod "kube-controller-manager-no-preload-572602" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-572602" has status "Ready":"False"
	I0416 01:00:09.415470   61500 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-v9fmp" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:09.815907   61500 pod_ready.go:92] pod "kube-proxy-v9fmp" in "kube-system" namespace has status "Ready":"True"
	I0416 01:00:09.815945   61500 pod_ready.go:81] duration metric: took 400.462786ms for pod "kube-proxy-v9fmp" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:09.815959   61500 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:11.824269   61500 pod_ready.go:102] pod "kube-scheduler-no-preload-572602" in "kube-system" namespace has status "Ready":"False"
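
The pod_ready.go entries above poll each system-critical pod until its Ready condition reports True, skipping pods whose node is not yet Ready. As an illustration only (this uses client-go directly, not minikube's own helper), a comparable readiness wait might be written like:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady checks the standard Ready condition on a pod.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path is illustrative only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "kube-scheduler-no-preload-572602", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting:", ctx.Err())
			return
		case <-time.After(2 * time.Second):
		}
	}
}
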
	I0416 01:00:08.434523   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:08.435039   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:08.435067   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:08.434988   63257 retry.go:31] will retry after 2.819136125s: waiting for machine to come up
	I0416 01:00:11.256238   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:11.256704   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:11.256722   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:11.256664   63257 retry.go:31] will retry after 3.074881299s: waiting for machine to come up
	I0416 01:00:12.545696   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:13.045935   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:13.545810   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:14.045682   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:14.545524   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:15.045110   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:15.545792   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:16.045843   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:16.545684   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:17.045401   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:14.322436   61500 pod_ready.go:102] pod "kube-scheduler-no-preload-572602" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:16.821648   61500 pod_ready.go:102] pod "kube-scheduler-no-preload-572602" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:14.335004   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:14.335391   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:14.335437   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:14.335343   63257 retry.go:31] will retry after 4.248377683s: waiting for machine to come up
	I0416 01:00:20.014452   61267 start.go:364] duration metric: took 53.932663013s to acquireMachinesLock for "default-k8s-diff-port-653942"
	I0416 01:00:20.014507   61267 start.go:96] Skipping create...Using existing machine configuration
	I0416 01:00:20.014515   61267 fix.go:54] fixHost starting: 
	I0416 01:00:20.014929   61267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:00:20.014964   61267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:00:20.033099   61267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42949
	I0416 01:00:20.033554   61267 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:00:20.034077   61267 main.go:141] libmachine: Using API Version  1
	I0416 01:00:20.034104   61267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:00:20.034458   61267 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:00:20.034665   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 01:00:20.034812   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetState
	I0416 01:00:20.036559   61267 fix.go:112] recreateIfNeeded on default-k8s-diff-port-653942: state=Stopped err=<nil>
	I0416 01:00:20.036588   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	W0416 01:00:20.036751   61267 fix.go:138] unexpected machine state, will restart: <nil>
	I0416 01:00:20.038774   61267 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-653942" ...
	I0416 01:00:18.588875   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.589320   62747 main.go:141] libmachine: (embed-certs-617092) Found IP for machine: 192.168.61.225
	I0416 01:00:18.589347   62747 main.go:141] libmachine: (embed-certs-617092) Reserving static IP address...
	I0416 01:00:18.589362   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has current primary IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.589699   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "embed-certs-617092", mac: "52:54:00:86:1b:62", ip: "192.168.61.225"} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:18.589728   62747 main.go:141] libmachine: (embed-certs-617092) Reserved static IP address: 192.168.61.225
	I0416 01:00:18.589752   62747 main.go:141] libmachine: (embed-certs-617092) DBG | skip adding static IP to network mk-embed-certs-617092 - found existing host DHCP lease matching {name: "embed-certs-617092", mac: "52:54:00:86:1b:62", ip: "192.168.61.225"}
	I0416 01:00:18.589771   62747 main.go:141] libmachine: (embed-certs-617092) DBG | Getting to WaitForSSH function...
	I0416 01:00:18.589808   62747 main.go:141] libmachine: (embed-certs-617092) Waiting for SSH to be available...
	I0416 01:00:18.591590   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.591858   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:18.591885   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.591995   62747 main.go:141] libmachine: (embed-certs-617092) DBG | Using SSH client type: external
	I0416 01:00:18.592027   62747 main.go:141] libmachine: (embed-certs-617092) DBG | Using SSH private key: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/embed-certs-617092/id_rsa (-rw-------)
	I0416 01:00:18.592058   62747 main.go:141] libmachine: (embed-certs-617092) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.225 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18647-7542/.minikube/machines/embed-certs-617092/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0416 01:00:18.592072   62747 main.go:141] libmachine: (embed-certs-617092) DBG | About to run SSH command:
	I0416 01:00:18.592084   62747 main.go:141] libmachine: (embed-certs-617092) DBG | exit 0
	I0416 01:00:18.717336   62747 main.go:141] libmachine: (embed-certs-617092) DBG | SSH cmd err, output: <nil>: 
	I0416 01:00:18.717759   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetConfigRaw
	I0416 01:00:18.718347   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetIP
	I0416 01:00:18.720640   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.721040   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:18.721086   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.721300   62747 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092/config.json ...
	I0416 01:00:18.721481   62747 machine.go:94] provisionDockerMachine start ...
	I0416 01:00:18.721501   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 01:00:18.721700   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:00:18.723610   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.723924   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:18.723946   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.724126   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:00:18.724345   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:18.724512   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:18.724616   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:00:18.724737   62747 main.go:141] libmachine: Using SSH client type: native
	I0416 01:00:18.725049   62747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.225 22 <nil> <nil>}
	I0416 01:00:18.725199   62747 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 01:00:18.834014   62747 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 01:00:18.834041   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetMachineName
	I0416 01:00:18.834257   62747 buildroot.go:166] provisioning hostname "embed-certs-617092"
	I0416 01:00:18.834280   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetMachineName
	I0416 01:00:18.834495   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:00:18.836959   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.837282   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:18.837333   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.837417   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:00:18.837588   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:18.837755   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:18.837962   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:00:18.838152   62747 main.go:141] libmachine: Using SSH client type: native
	I0416 01:00:18.838324   62747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.225 22 <nil> <nil>}
	I0416 01:00:18.838342   62747 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-617092 && echo "embed-certs-617092" | sudo tee /etc/hostname
	I0416 01:00:18.959828   62747 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-617092
	
	I0416 01:00:18.959865   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:00:18.962661   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.962997   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:18.963029   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.963174   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:00:18.963351   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:18.963488   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:18.963609   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:00:18.963747   62747 main.go:141] libmachine: Using SSH client type: native
	I0416 01:00:18.963949   62747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.225 22 <nil> <nil>}
	I0416 01:00:18.963967   62747 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-617092' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-617092/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-617092' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 01:00:19.079309   62747 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 01:00:19.079341   62747 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18647-7542/.minikube CaCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18647-7542/.minikube}
	I0416 01:00:19.079400   62747 buildroot.go:174] setting up certificates
	I0416 01:00:19.079409   62747 provision.go:84] configureAuth start
	I0416 01:00:19.079423   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetMachineName
	I0416 01:00:19.079723   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetIP
	I0416 01:00:19.082430   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.082809   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:19.082838   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.082994   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:00:19.085476   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.085802   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:19.085825   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.085952   62747 provision.go:143] copyHostCerts
	I0416 01:00:19.086006   62747 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem, removing ...
	I0416 01:00:19.086022   62747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem
	I0416 01:00:19.086077   62747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem (1123 bytes)
	I0416 01:00:19.086165   62747 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem, removing ...
	I0416 01:00:19.086174   62747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem
	I0416 01:00:19.086193   62747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem (1675 bytes)
	I0416 01:00:19.086244   62747 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem, removing ...
	I0416 01:00:19.086251   62747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem
	I0416 01:00:19.086270   62747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem (1082 bytes)
	I0416 01:00:19.086336   62747 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem org=jenkins.embed-certs-617092 san=[127.0.0.1 192.168.61.225 embed-certs-617092 localhost minikube]
	I0416 01:00:19.330622   62747 provision.go:177] copyRemoteCerts
	I0416 01:00:19.330687   62747 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 01:00:19.330712   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:00:19.333264   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.333618   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:19.333645   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.333798   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:00:19.333979   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:19.334122   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:00:19.334235   62747 sshutil.go:53] new ssh client: &{IP:192.168.61.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/embed-certs-617092/id_rsa Username:docker}
	I0416 01:00:19.415820   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0416 01:00:19.442985   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0416 01:00:19.468427   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0416 01:00:19.496640   62747 provision.go:87] duration metric: took 417.215523ms to configureAuth
	I0416 01:00:19.496676   62747 buildroot.go:189] setting minikube options for container-runtime
	I0416 01:00:19.496857   62747 config.go:182] Loaded profile config "embed-certs-617092": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 01:00:19.496929   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:00:19.499561   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.499933   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:19.499981   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.500132   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:00:19.500352   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:19.500529   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:19.500671   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:00:19.500823   62747 main.go:141] libmachine: Using SSH client type: native
	I0416 01:00:19.501026   62747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.225 22 <nil> <nil>}
	I0416 01:00:19.501046   62747 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0416 01:00:19.775400   62747 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0416 01:00:19.775434   62747 machine.go:97] duration metric: took 1.053938445s to provisionDockerMachine
	I0416 01:00:19.775448   62747 start.go:293] postStartSetup for "embed-certs-617092" (driver="kvm2")
	I0416 01:00:19.775462   62747 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 01:00:19.775484   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 01:00:19.775853   62747 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 01:00:19.775886   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:00:19.778961   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.779327   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:19.779356   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.779510   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:00:19.779723   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:19.779883   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:00:19.780008   62747 sshutil.go:53] new ssh client: &{IP:192.168.61.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/embed-certs-617092/id_rsa Username:docker}
	I0416 01:00:19.865236   62747 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 01:00:19.869769   62747 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 01:00:19.869800   62747 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/addons for local assets ...
	I0416 01:00:19.869865   62747 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/files for local assets ...
	I0416 01:00:19.870010   62747 filesync.go:149] local asset: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem -> 148972.pem in /etc/ssl/certs
	I0416 01:00:19.870111   62747 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 01:00:19.880477   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /etc/ssl/certs/148972.pem (1708 bytes)
	I0416 01:00:19.905555   62747 start.go:296] duration metric: took 130.091868ms for postStartSetup
	I0416 01:00:19.905603   62747 fix.go:56] duration metric: took 20.511199999s for fixHost
	I0416 01:00:19.905629   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:00:19.908252   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.908593   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:19.908631   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.908770   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:00:19.908972   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:19.909129   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:19.909284   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:00:19.909448   62747 main.go:141] libmachine: Using SSH client type: native
	I0416 01:00:19.909607   62747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.225 22 <nil> <nil>}
	I0416 01:00:19.909622   62747 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 01:00:20.014222   62747 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713229219.981820926
	
	I0416 01:00:20.014251   62747 fix.go:216] guest clock: 1713229219.981820926
	I0416 01:00:20.014262   62747 fix.go:229] Guest: 2024-04-16 01:00:19.981820926 +0000 UTC Remote: 2024-04-16 01:00:19.90560817 +0000 UTC m=+97.152894999 (delta=76.212756ms)
	I0416 01:00:20.014331   62747 fix.go:200] guest clock delta is within tolerance: 76.212756ms
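
The guest-clock check above runs `date +%s.%N` inside the VM, parses the epoch value, and compares it with the host clock; the roughly 76ms delta is accepted as within tolerance. A small Go sketch reproducing that comparison with the two timestamps from this log (the 2s tolerance below is an assumption for illustration):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns the output of `date +%s.%N` (e.g. "1713229219.981820926")
// into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
	if err != nil {
		return time.Time{}, err
	}
	sec := int64(secs)
	return time.Unix(sec, int64((secs-float64(sec))*1e9)), nil
}

func main() {
	// Both values are copied from the log lines above.
	guest, err := parseGuestClock("1713229219.981820926")
	if err != nil {
		panic(err)
	}
	host, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", "2024-04-16 01:00:19.90560817 +0000 UTC")
	if err != nil {
		panic(err)
	}
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	// Assumed tolerance; the report only shows the ~76ms delta being accepted.
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta < 2*time.Second)
}
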
	I0416 01:00:20.014339   62747 start.go:83] releasing machines lock for "embed-certs-617092", held for 20.619971021s
	I0416 01:00:20.014377   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 01:00:20.014676   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetIP
	I0416 01:00:20.017771   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:20.018204   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:20.018236   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:20.018446   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 01:00:20.018991   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 01:00:20.019172   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 01:00:20.019260   62747 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 01:00:20.019299   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:00:20.019439   62747 ssh_runner.go:195] Run: cat /version.json
	I0416 01:00:20.019466   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:00:20.022283   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:20.022554   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:20.022664   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:20.022688   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:20.022897   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:00:20.023088   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:20.023150   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:20.023177   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:20.023281   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:00:20.023431   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:00:20.023431   62747 sshutil.go:53] new ssh client: &{IP:192.168.61.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/embed-certs-617092/id_rsa Username:docker}
	I0416 01:00:20.023791   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:20.023942   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:00:20.024084   62747 sshutil.go:53] new ssh client: &{IP:192.168.61.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/embed-certs-617092/id_rsa Username:docker}
	I0416 01:00:20.138251   62747 ssh_runner.go:195] Run: systemctl --version
	I0416 01:00:20.145100   62747 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0416 01:00:20.299049   62747 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 01:00:20.307080   62747 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 01:00:20.307177   62747 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 01:00:20.326056   62747 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 01:00:20.326085   62747 start.go:494] detecting cgroup driver to use...
	I0416 01:00:20.326166   62747 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 01:00:20.343297   62747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 01:00:20.358136   62747 docker.go:217] disabling cri-docker service (if available) ...
	I0416 01:00:20.358201   62747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 01:00:20.372936   62747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 01:00:20.387473   62747 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 01:00:20.515721   62747 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 01:00:20.680319   62747 docker.go:233] disabling docker service ...
	I0416 01:00:20.680413   62747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 01:00:20.700816   62747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 01:00:20.724097   62747 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 01:00:20.885812   62747 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 01:00:21.037890   62747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0416 01:00:21.055670   62747 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 01:00:21.078466   62747 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0416 01:00:21.078533   62747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:21.090135   62747 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 01:00:21.090200   62747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:21.106122   62747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:21.123844   62747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:21.134923   62747 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 01:00:21.153565   62747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:21.164751   62747 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:21.184880   62747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:21.197711   62747 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 01:00:21.208615   62747 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0416 01:00:21.208669   62747 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0416 01:00:21.223906   62747 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 01:00:21.234873   62747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 01:00:21.405921   62747 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0416 01:00:21.564833   62747 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0416 01:00:21.564918   62747 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0416 01:00:21.570592   62747 start.go:562] Will wait 60s for crictl version
	I0416 01:00:21.570660   62747 ssh_runner.go:195] Run: which crictl
	I0416 01:00:21.575339   62747 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 01:00:21.617252   62747 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0416 01:00:21.617348   62747 ssh_runner.go:195] Run: crio --version
	I0416 01:00:21.648662   62747 ssh_runner.go:195] Run: crio --version
	I0416 01:00:21.683775   62747 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
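	The start.go lines above wait up to 60s for /var/run/crio/crio.sock to reappear after restarting CRI-O before probing crictl. A minimal Go sketch of that wait-for-socket step (illustrative only; minikube performs the equivalent check over SSH with "stat /var/run/crio/crio.sock", and the path, poll interval, and timeout here simply mirror the log):

	// waitForSocket polls until the given path exists or the deadline passes.
	// Sketch only: assumes direct filesystem access rather than an SSH runner.
	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil // socket is present
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Println(err)
		}
	}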
	I0416 01:00:17.544937   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:18.045282   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:18.545707   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:19.045821   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:19.545868   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:20.045069   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:20.545134   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:21.045607   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:21.545366   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:22.044998   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:20.040137   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .Start
	I0416 01:00:20.040355   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Ensuring networks are active...
	I0416 01:00:20.041103   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Ensuring network default is active
	I0416 01:00:20.041469   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Ensuring network mk-default-k8s-diff-port-653942 is active
	I0416 01:00:20.041869   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Getting domain xml...
	I0416 01:00:20.042474   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Creating domain...
	I0416 01:00:21.359375   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting to get IP...
	I0416 01:00:21.360333   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:21.360736   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:21.360807   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:21.360726   63461 retry.go:31] will retry after 290.970715ms: waiting for machine to come up
	I0416 01:00:21.653420   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:21.653883   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:21.653916   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:21.653841   63461 retry.go:31] will retry after 361.304618ms: waiting for machine to come up
	I0416 01:00:22.016540   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:22.017038   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:22.017071   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:22.016976   63461 retry.go:31] will retry after 411.249327ms: waiting for machine to come up
	I0416 01:00:18.322778   61500 pod_ready.go:92] pod "kube-scheduler-no-preload-572602" in "kube-system" namespace has status "Ready":"True"
	I0416 01:00:18.322799   61500 pod_ready.go:81] duration metric: took 8.506833323s for pod "kube-scheduler-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:18.322808   61500 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:20.328344   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:22.331157   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:21.685033   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetIP
	I0416 01:00:21.688407   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:21.688774   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:21.688809   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:21.689010   62747 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0416 01:00:21.693612   62747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 01:00:21.707524   62747 kubeadm.go:877] updating cluster {Name:embed-certs-617092 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.3 ClusterName:embed-certs-617092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.225 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 01:00:21.707657   62747 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 01:00:21.707699   62747 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 01:00:21.748697   62747 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0416 01:00:21.748785   62747 ssh_runner.go:195] Run: which lz4
	I0416 01:00:21.753521   62747 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0416 01:00:21.758125   62747 ssh_runner.go:362] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0416 01:00:21.758158   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0416 01:00:22.545403   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:23.045303   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:23.544984   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:24.045882   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:24.545194   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:25.045010   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:25.545278   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:26.045702   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:26.545233   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:27.045814   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:22.429595   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:22.430124   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:22.430159   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:22.430087   63461 retry.go:31] will retry after 495.681984ms: waiting for machine to come up
	I0416 01:00:22.927476   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:22.927932   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:22.927959   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:22.927875   63461 retry.go:31] will retry after 506.264557ms: waiting for machine to come up
	I0416 01:00:23.435290   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:23.435742   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:23.435773   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:23.435689   63461 retry.go:31] will retry after 826.359716ms: waiting for machine to come up
	I0416 01:00:24.263672   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:24.264151   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:24.264183   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:24.264107   63461 retry.go:31] will retry after 873.35176ms: waiting for machine to come up
	I0416 01:00:25.138864   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:25.139318   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:25.139340   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:25.139308   63461 retry.go:31] will retry after 1.129546887s: waiting for machine to come up
	I0416 01:00:26.270364   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:26.270968   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:26.271000   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:26.270902   63461 retry.go:31] will retry after 1.441466368s: waiting for machine to come up
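	The libmachine retry.go lines above poll for the default-k8s-diff-port-653942 VM's DHCP lease with a delay that grows and is jittered on each attempt. A minimal sketch of that retry pattern (the getIP probe, base delay, and attempt count are illustrative placeholders, not minikube's actual values):

	// retryWithBackoff calls fn until it succeeds or attempts are exhausted,
	// sleeping a jittered, growing interval between tries (cf. the retry.go lines above).
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			// grow the delay each round and add jitter so concurrent waiters spread out
			delay := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %s: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		getIP := func() error { return errors.New("waiting for machine to come up") } // hypothetical probe
		_ = retryWithBackoff(5, 300*time.Millisecond, getIP)
	}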
	I0416 01:00:24.830562   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:26.832057   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:23.353811   62747 crio.go:462] duration metric: took 1.600325005s to copy over tarball
	I0416 01:00:23.353885   62747 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0416 01:00:25.815443   62747 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.46152973s)
	I0416 01:00:25.815479   62747 crio.go:469] duration metric: took 2.461639439s to extract the tarball
	I0416 01:00:25.815489   62747 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0416 01:00:25.862653   62747 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 01:00:25.914416   62747 crio.go:514] all images are preloaded for cri-o runtime.
	I0416 01:00:25.914444   62747 cache_images.go:84] Images are preloaded, skipping loading
	I0416 01:00:25.914454   62747 kubeadm.go:928] updating node { 192.168.61.225 8443 v1.29.3 crio true true} ...
	I0416 01:00:25.914586   62747 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-617092 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.225
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:embed-certs-617092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 01:00:25.914680   62747 ssh_runner.go:195] Run: crio config
	I0416 01:00:25.970736   62747 cni.go:84] Creating CNI manager for ""
	I0416 01:00:25.970760   62747 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 01:00:25.970773   62747 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 01:00:25.970796   62747 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.225 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-617092 NodeName:embed-certs-617092 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.225"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.225 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 01:00:25.970949   62747 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.225
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-617092"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.225
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.225"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0416 01:00:25.971022   62747 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0416 01:00:25.985111   62747 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 01:00:25.985198   62747 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0416 01:00:25.996306   62747 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0416 01:00:26.013401   62747 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 01:00:26.030094   62747 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0416 01:00:26.048252   62747 ssh_runner.go:195] Run: grep 192.168.61.225	control-plane.minikube.internal$ /etc/hosts
	I0416 01:00:26.052717   62747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.225	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 01:00:26.069538   62747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 01:00:26.205867   62747 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 01:00:26.224210   62747 certs.go:68] Setting up /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092 for IP: 192.168.61.225
	I0416 01:00:26.224237   62747 certs.go:194] generating shared ca certs ...
	I0416 01:00:26.224259   62747 certs.go:226] acquiring lock for ca certs: {Name:mkcfa1570e683d94647c63485e1bbb8cf0788316 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:00:26.224459   62747 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key
	I0416 01:00:26.224520   62747 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key
	I0416 01:00:26.224532   62747 certs.go:256] generating profile certs ...
	I0416 01:00:26.224646   62747 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092/client.key
	I0416 01:00:26.224723   62747 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092/apiserver.key.383097d4
	I0416 01:00:26.224773   62747 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092/proxy-client.key
	I0416 01:00:26.224932   62747 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem (1338 bytes)
	W0416 01:00:26.224973   62747 certs.go:480] ignoring /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897_empty.pem, impossibly tiny 0 bytes
	I0416 01:00:26.224982   62747 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem (1679 bytes)
	I0416 01:00:26.225014   62747 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem (1082 bytes)
	I0416 01:00:26.225050   62747 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem (1123 bytes)
	I0416 01:00:26.225085   62747 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem (1675 bytes)
	I0416 01:00:26.225126   62747 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem (1708 bytes)
	I0416 01:00:26.225872   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 01:00:26.282272   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 01:00:26.329827   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 01:00:26.366744   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0416 01:00:26.405845   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0416 01:00:26.440535   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0416 01:00:26.465371   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 01:00:26.491633   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0416 01:00:26.518682   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 01:00:26.543992   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem --> /usr/share/ca-certificates/14897.pem (1338 bytes)
	I0416 01:00:26.573728   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /usr/share/ca-certificates/148972.pem (1708 bytes)
	I0416 01:00:26.602308   62747 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 01:00:26.622491   62747 ssh_runner.go:195] Run: openssl version
	I0416 01:00:26.628805   62747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 01:00:26.643163   62747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:00:26.648292   62747 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:00:26.648351   62747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:00:26.654890   62747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 01:00:26.668501   62747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14897.pem && ln -fs /usr/share/ca-certificates/14897.pem /etc/ssl/certs/14897.pem"
	I0416 01:00:26.682038   62747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14897.pem
	I0416 01:00:26.687327   62747 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 23:49 /usr/share/ca-certificates/14897.pem
	I0416 01:00:26.687388   62747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14897.pem
	I0416 01:00:26.693557   62747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14897.pem /etc/ssl/certs/51391683.0"
	I0416 01:00:26.706161   62747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148972.pem && ln -fs /usr/share/ca-certificates/148972.pem /etc/ssl/certs/148972.pem"
	I0416 01:00:26.718432   62747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148972.pem
	I0416 01:00:26.722989   62747 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 23:49 /usr/share/ca-certificates/148972.pem
	I0416 01:00:26.723050   62747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148972.pem
	I0416 01:00:26.729311   62747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148972.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 01:00:26.744138   62747 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 01:00:26.749490   62747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0416 01:00:26.756478   62747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0416 01:00:26.763326   62747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0416 01:00:26.770194   62747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0416 01:00:26.776641   62747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0416 01:00:26.783022   62747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0416 01:00:26.789543   62747 kubeadm.go:391] StartCluster: {Name:embed-certs-617092 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29
.3 ClusterName:embed-certs-617092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.225 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 01:00:26.789654   62747 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0416 01:00:26.789717   62747 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 01:00:26.831148   62747 cri.go:89] found id: ""
	I0416 01:00:26.831219   62747 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0416 01:00:26.844372   62747 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0416 01:00:26.844398   62747 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0416 01:00:26.844403   62747 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0416 01:00:26.844454   62747 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0416 01:00:26.858173   62747 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0416 01:00:26.859210   62747 kubeconfig.go:125] found "embed-certs-617092" server: "https://192.168.61.225:8443"
	I0416 01:00:26.861233   62747 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0416 01:00:26.874068   62747 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.225
	I0416 01:00:26.874105   62747 kubeadm.go:1154] stopping kube-system containers ...
	I0416 01:00:26.874119   62747 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0416 01:00:26.874177   62747 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 01:00:26.926456   62747 cri.go:89] found id: ""
	I0416 01:00:26.926537   62747 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0416 01:00:26.945874   62747 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 01:00:26.960207   62747 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 01:00:26.960229   62747 kubeadm.go:156] found existing configuration files:
	
	I0416 01:00:26.960282   62747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 01:00:26.971895   62747 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 01:00:26.971958   62747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 01:00:26.982956   62747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 01:00:26.993935   62747 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 01:00:26.994000   62747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 01:00:27.005216   62747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 01:00:27.015624   62747 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 01:00:27.015680   62747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 01:00:27.026513   62747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 01:00:27.037062   62747 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 01:00:27.037118   62747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 01:00:27.048173   62747 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 01:00:27.061987   62747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:27.190243   62747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:27.545025   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:28.045752   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:28.545833   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:29.045264   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:29.545316   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:30.045594   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:30.545046   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:31.045139   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:31.545251   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:32.045710   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:27.714372   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:27.714822   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:27.714854   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:27.714767   63461 retry.go:31] will retry after 1.810511131s: waiting for machine to come up
	I0416 01:00:29.527497   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:29.528041   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:29.528072   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:29.527983   63461 retry.go:31] will retry after 2.163921338s: waiting for machine to come up
	I0416 01:00:31.694203   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:31.694741   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:31.694769   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:31.694714   63461 retry.go:31] will retry after 2.245150923s: waiting for machine to come up
	I0416 01:00:29.332159   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:31.332218   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:28.252295   62747 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.062013928s)
	I0416 01:00:28.252331   62747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:28.468110   62747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:28.553370   62747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:28.676185   62747 api_server.go:52] waiting for apiserver process to appear ...
	I0416 01:00:28.676273   62747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:29.176826   62747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:29.676498   62747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:29.702138   62747 api_server.go:72] duration metric: took 1.025950998s to wait for apiserver process to appear ...
	I0416 01:00:29.702170   62747 api_server.go:88] waiting for apiserver healthz status ...
	I0416 01:00:29.702192   62747 api_server.go:253] Checking apiserver healthz at https://192.168.61.225:8443/healthz ...
	I0416 01:00:29.702822   62747 api_server.go:269] stopped: https://192.168.61.225:8443/healthz: Get "https://192.168.61.225:8443/healthz": dial tcp 192.168.61.225:8443: connect: connection refused
	I0416 01:00:30.203298   62747 api_server.go:253] Checking apiserver healthz at https://192.168.61.225:8443/healthz ...
	I0416 01:00:32.951714   62747 api_server.go:279] https://192.168.61.225:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0416 01:00:32.951754   62747 api_server.go:103] status: https://192.168.61.225:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0416 01:00:32.951779   62747 api_server.go:253] Checking apiserver healthz at https://192.168.61.225:8443/healthz ...
	I0416 01:00:33.003631   62747 api_server.go:279] https://192.168.61.225:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0416 01:00:33.003672   62747 api_server.go:103] status: https://192.168.61.225:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0416 01:00:33.202825   62747 api_server.go:253] Checking apiserver healthz at https://192.168.61.225:8443/healthz ...
	I0416 01:00:33.208168   62747 api_server.go:279] https://192.168.61.225:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0416 01:00:33.208201   62747 api_server.go:103] status: https://192.168.61.225:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0416 01:00:33.702532   62747 api_server.go:253] Checking apiserver healthz at https://192.168.61.225:8443/healthz ...
	I0416 01:00:33.712501   62747 api_server.go:279] https://192.168.61.225:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0416 01:00:33.712542   62747 api_server.go:103] status: https://192.168.61.225:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0416 01:00:34.203157   62747 api_server.go:253] Checking apiserver healthz at https://192.168.61.225:8443/healthz ...
	I0416 01:00:34.210567   62747 api_server.go:279] https://192.168.61.225:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0416 01:00:34.210597   62747 api_server.go:103] status: https://192.168.61.225:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0416 01:00:34.702568   62747 api_server.go:253] Checking apiserver healthz at https://192.168.61.225:8443/healthz ...
	I0416 01:00:34.711690   62747 api_server.go:279] https://192.168.61.225:8443/healthz returned 200:
	ok
	I0416 01:00:34.723252   62747 api_server.go:141] control plane version: v1.29.3
	I0416 01:00:34.723279   62747 api_server.go:131] duration metric: took 5.021102658s to wait for apiserver health ...
	I0416 01:00:34.723287   62747 cni.go:84] Creating CNI manager for ""
	I0416 01:00:34.723293   62747 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 01:00:34.724989   62747 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
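	The api_server.go sequence above polls https://192.168.61.225:8443/healthz roughly every half second, treating 403 and 500 responses as "not ready yet" until a plain 200 "ok" arrives, after which the control-plane version is read. A minimal sketch of such a polling loop (the insecure TLS transport is a simplification for illustration; the real client authenticates with the cluster's CA and client certificates):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
	// 403/500 bodies (as in the log above) are simply reported and retried.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // sketch only
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.61.225:8443/healthz", 60*time.Second); err != nil {
			fmt.Println(err)
		}
	}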
	I0416 01:00:32.545963   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:33.045020   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:33.545657   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:34.045706   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:34.544972   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:35.045252   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:35.545087   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:36.045080   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:36.545787   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:37.045046   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:33.942412   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:33.942923   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:33.942952   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:33.942870   63461 retry.go:31] will retry after 3.750613392s: waiting for machine to come up
	I0416 01:00:33.829307   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:35.830613   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:34.726400   62747 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0416 01:00:34.746294   62747 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0416 01:00:34.767028   62747 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 01:00:34.778610   62747 system_pods.go:59] 8 kube-system pods found
	I0416 01:00:34.778653   62747 system_pods.go:61] "coredns-76f75df574-dxzhk" [a71b29ec-8602-47d6-825c-a1a54a1758d0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 01:00:34.778664   62747 system_pods.go:61] "etcd-embed-certs-617092" [8966501b-6a06-4e0b-acb6-77df5f53cd3d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0416 01:00:34.778674   62747 system_pods.go:61] "kube-apiserver-embed-certs-617092" [7ad29687-3964-4a5b-8939-bcf3dc71d578] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0416 01:00:34.778685   62747 system_pods.go:61] "kube-controller-manager-embed-certs-617092" [78b21361-f302-43f3-8356-ea15fad4edb3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0416 01:00:34.778695   62747 system_pods.go:61] "kube-proxy-xtdf4" [4e8fe1da-9a02-428e-94f1-595f2e9170e0] Running
	I0416 01:00:34.778703   62747 system_pods.go:61] "kube-scheduler-embed-certs-617092" [c03d87b4-26d3-4bff-8f53-8844260f1ed8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0416 01:00:34.778720   62747 system_pods.go:61] "metrics-server-57f55c9bc5-knnvn" [4607d12d-25db-4637-be17-e2665970c0a4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 01:00:34.778729   62747 system_pods.go:61] "storage-provisioner" [41362b6c-fde7-45fa-b6cf-1d7acef3d4ce] Running
	I0416 01:00:34.778741   62747 system_pods.go:74] duration metric: took 11.690083ms to wait for pod list to return data ...
	I0416 01:00:34.778755   62747 node_conditions.go:102] verifying NodePressure condition ...
	I0416 01:00:34.782283   62747 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 01:00:34.782319   62747 node_conditions.go:123] node cpu capacity is 2
	I0416 01:00:34.782329   62747 node_conditions.go:105] duration metric: took 3.566074ms to run NodePressure ...
	I0416 01:00:34.782344   62747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:35.056194   62747 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0416 01:00:35.068546   62747 kubeadm.go:733] kubelet initialised
	I0416 01:00:35.068571   62747 kubeadm.go:734] duration metric: took 12.345347ms waiting for restarted kubelet to initialise ...
	I0416 01:00:35.068581   62747 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:00:35.075013   62747 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-dxzhk" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:37.081976   62747 pod_ready.go:102] pod "coredns-76f75df574-dxzhk" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:37.697323   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:37.697830   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has current primary IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:37.697857   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Found IP for machine: 192.168.50.216
	I0416 01:00:37.697873   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Reserving static IP address...
	I0416 01:00:37.698323   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Reserved static IP address: 192.168.50.216
	I0416 01:00:37.698345   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for SSH to be available...
	I0416 01:00:37.698372   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-653942", mac: "52:54:00:4b:a2:47", ip: "192.168.50.216"} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:37.698418   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | skip adding static IP to network mk-default-k8s-diff-port-653942 - found existing host DHCP lease matching {name: "default-k8s-diff-port-653942", mac: "52:54:00:4b:a2:47", ip: "192.168.50.216"}
	I0416 01:00:37.698450   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | Getting to WaitForSSH function...
	I0416 01:00:37.700942   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:37.701312   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:37.701346   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:37.701520   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | Using SSH client type: external
	I0416 01:00:37.701567   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | Using SSH private key: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/default-k8s-diff-port-653942/id_rsa (-rw-------)
	I0416 01:00:37.701621   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.216 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18647-7542/.minikube/machines/default-k8s-diff-port-653942/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0416 01:00:37.701676   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | About to run SSH command:
	I0416 01:00:37.701712   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | exit 0
	I0416 01:00:37.829860   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | SSH cmd err, output: <nil>: 
	I0416 01:00:37.830254   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetConfigRaw
	I0416 01:00:37.830931   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetIP
	I0416 01:00:37.833361   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:37.833755   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:37.833788   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:37.834026   61267 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/config.json ...
	I0416 01:00:37.834198   61267 machine.go:94] provisionDockerMachine start ...
	I0416 01:00:37.834214   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 01:00:37.834426   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:00:37.836809   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:37.837221   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:37.837251   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:37.837377   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:00:37.837588   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:37.837737   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:37.837869   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:00:37.838023   61267 main.go:141] libmachine: Using SSH client type: native
	I0416 01:00:37.838208   61267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.216 22 <nil> <nil>}
	I0416 01:00:37.838219   61267 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 01:00:37.950999   61267 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 01:00:37.951031   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetMachineName
	I0416 01:00:37.951271   61267 buildroot.go:166] provisioning hostname "default-k8s-diff-port-653942"
	I0416 01:00:37.951303   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetMachineName
	I0416 01:00:37.951483   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:00:37.954395   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:37.954730   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:37.954755   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:37.954949   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:00:37.955165   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:37.955344   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:37.955549   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:00:37.955756   61267 main.go:141] libmachine: Using SSH client type: native
	I0416 01:00:37.955980   61267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.216 22 <nil> <nil>}
	I0416 01:00:37.956001   61267 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-653942 && echo "default-k8s-diff-port-653942" | sudo tee /etc/hostname
	I0416 01:00:38.085650   61267 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-653942
	
	I0416 01:00:38.085682   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:00:38.088689   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.089031   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:38.089060   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.089297   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:00:38.089474   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:38.089623   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:38.089780   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:00:38.089948   61267 main.go:141] libmachine: Using SSH client type: native
	I0416 01:00:38.090127   61267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.216 22 <nil> <nil>}
	I0416 01:00:38.090146   61267 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-653942' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-653942/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-653942' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 01:00:38.214653   61267 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 01:00:38.214734   61267 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18647-7542/.minikube CaCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18647-7542/.minikube}
	I0416 01:00:38.214760   61267 buildroot.go:174] setting up certificates
	I0416 01:00:38.214773   61267 provision.go:84] configureAuth start
	I0416 01:00:38.214785   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetMachineName
	I0416 01:00:38.215043   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetIP
	I0416 01:00:38.217744   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.218145   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:38.218174   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.218336   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:00:38.220861   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.221187   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:38.221216   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.221343   61267 provision.go:143] copyHostCerts
	I0416 01:00:38.221405   61267 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem, removing ...
	I0416 01:00:38.221426   61267 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem
	I0416 01:00:38.221492   61267 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem (1082 bytes)
	I0416 01:00:38.221638   61267 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem, removing ...
	I0416 01:00:38.221649   61267 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem
	I0416 01:00:38.221685   61267 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem (1123 bytes)
	I0416 01:00:38.221777   61267 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem, removing ...
	I0416 01:00:38.221787   61267 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem
	I0416 01:00:38.221815   61267 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem (1675 bytes)
	I0416 01:00:38.221887   61267 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-653942 san=[127.0.0.1 192.168.50.216 default-k8s-diff-port-653942 localhost minikube]
	I0416 01:00:38.266327   61267 provision.go:177] copyRemoteCerts
	I0416 01:00:38.266390   61267 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 01:00:38.266422   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:00:38.269080   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.269546   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:38.269583   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.269901   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:00:38.270115   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:38.270259   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:00:38.270444   61267 sshutil.go:53] new ssh client: &{IP:192.168.50.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/default-k8s-diff-port-653942/id_rsa Username:docker}
	I0416 01:00:38.352861   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0416 01:00:38.380995   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0416 01:00:38.405746   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0416 01:00:38.431467   61267 provision.go:87] duration metric: took 216.680985ms to configureAuth
	I0416 01:00:38.431502   61267 buildroot.go:189] setting minikube options for container-runtime
	I0416 01:00:38.431674   61267 config.go:182] Loaded profile config "default-k8s-diff-port-653942": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 01:00:38.431740   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:00:38.434444   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.434867   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:38.434909   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.435032   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:00:38.435245   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:38.435380   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:38.435568   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:00:38.435744   61267 main.go:141] libmachine: Using SSH client type: native
	I0416 01:00:38.435948   61267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.216 22 <nil> <nil>}
	I0416 01:00:38.435974   61267 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0416 01:00:38.729392   61267 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0416 01:00:38.729421   61267 machine.go:97] duration metric: took 895.211347ms to provisionDockerMachine
	I0416 01:00:38.729432   61267 start.go:293] postStartSetup for "default-k8s-diff-port-653942" (driver="kvm2")
	I0416 01:00:38.729442   61267 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 01:00:38.729463   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 01:00:38.729802   61267 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 01:00:38.729826   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:00:38.732755   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.733135   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:38.733181   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.733326   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:00:38.733490   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:38.733649   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:00:38.733784   61267 sshutil.go:53] new ssh client: &{IP:192.168.50.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/default-k8s-diff-port-653942/id_rsa Username:docker}
	I0416 01:00:38.819006   61267 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 01:00:38.823781   61267 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 01:00:38.823804   61267 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/addons for local assets ...
	I0416 01:00:38.823870   61267 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/files for local assets ...
	I0416 01:00:38.823967   61267 filesync.go:149] local asset: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem -> 148972.pem in /etc/ssl/certs
	I0416 01:00:38.824077   61267 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 01:00:38.833958   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /etc/ssl/certs/148972.pem (1708 bytes)
	I0416 01:00:38.859934   61267 start.go:296] duration metric: took 130.488205ms for postStartSetup
	I0416 01:00:38.859973   61267 fix.go:56] duration metric: took 18.845458863s for fixHost
	I0416 01:00:38.859992   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:00:38.862557   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.862889   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:38.862927   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.863016   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:00:38.863236   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:38.863426   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:38.863609   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:00:38.863786   61267 main.go:141] libmachine: Using SSH client type: native
	I0416 01:00:38.863951   61267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.216 22 <nil> <nil>}
	I0416 01:00:38.863961   61267 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 01:00:38.970405   61267 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713229238.936521840
	
	I0416 01:00:38.970431   61267 fix.go:216] guest clock: 1713229238.936521840
	I0416 01:00:38.970440   61267 fix.go:229] Guest: 2024-04-16 01:00:38.93652184 +0000 UTC Remote: 2024-04-16 01:00:38.859976379 +0000 UTC m=+356.490123424 (delta=76.545461ms)
	I0416 01:00:38.970489   61267 fix.go:200] guest clock delta is within tolerance: 76.545461ms
	I0416 01:00:38.970496   61267 start.go:83] releasing machines lock for "default-k8s-diff-port-653942", held for 18.956013216s
	I0416 01:00:38.970522   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 01:00:38.970806   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetIP
	I0416 01:00:38.973132   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.973440   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:38.973455   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.973646   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 01:00:38.974142   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 01:00:38.974332   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 01:00:38.974388   61267 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 01:00:38.974432   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:00:38.974532   61267 ssh_runner.go:195] Run: cat /version.json
	I0416 01:00:38.974556   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:00:38.977284   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.977459   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.977624   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:38.977653   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.977746   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:38.977774   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.977800   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:00:38.978002   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:00:38.978017   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:38.978163   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:38.978169   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:00:38.978296   61267 sshutil.go:53] new ssh client: &{IP:192.168.50.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/default-k8s-diff-port-653942/id_rsa Username:docker}
	I0416 01:00:38.978314   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:00:38.978440   61267 sshutil.go:53] new ssh client: &{IP:192.168.50.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/default-k8s-diff-port-653942/id_rsa Username:docker}
	I0416 01:00:39.090827   61267 ssh_runner.go:195] Run: systemctl --version
	I0416 01:00:39.097716   61267 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0416 01:00:39.249324   61267 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 01:00:39.256333   61267 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 01:00:39.256402   61267 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 01:00:39.272367   61267 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 01:00:39.272395   61267 start.go:494] detecting cgroup driver to use...
	I0416 01:00:39.272446   61267 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 01:00:39.291713   61267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 01:00:39.305645   61267 docker.go:217] disabling cri-docker service (if available) ...
	I0416 01:00:39.305708   61267 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 01:00:39.320731   61267 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 01:00:39.336917   61267 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 01:00:39.450840   61267 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 01:00:39.596905   61267 docker.go:233] disabling docker service ...
	I0416 01:00:39.596972   61267 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 01:00:39.612926   61267 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 01:00:39.627583   61267 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 01:00:39.778135   61267 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 01:00:39.900216   61267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0416 01:00:39.914697   61267 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 01:00:39.935875   61267 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0416 01:00:39.935930   61267 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:39.946510   61267 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 01:00:39.946569   61267 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:39.956794   61267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:39.966968   61267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:39.977207   61267 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 01:00:39.988817   61267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:40.001088   61267 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:40.018950   61267 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:40.030395   61267 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 01:00:40.039956   61267 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0416 01:00:40.040013   61267 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0416 01:00:40.053877   61267 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 01:00:40.065292   61267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 01:00:40.221527   61267 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0416 01:00:40.382800   61267 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0416 01:00:40.382880   61267 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0416 01:00:40.387842   61267 start.go:562] Will wait 60s for crictl version
	I0416 01:00:40.387897   61267 ssh_runner.go:195] Run: which crictl
	I0416 01:00:40.393774   61267 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 01:00:40.435784   61267 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0416 01:00:40.435864   61267 ssh_runner.go:195] Run: crio --version
	I0416 01:00:40.468702   61267 ssh_runner.go:195] Run: crio --version
	I0416 01:00:40.501355   61267 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0416 01:00:37.545192   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:38.045346   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:38.545599   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:39.045109   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:39.545360   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:40.045058   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:40.545745   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:41.045943   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:41.545900   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:42.045807   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:40.502716   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetIP
	I0416 01:00:40.505958   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:40.506353   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:40.506384   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:40.506597   61267 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0416 01:00:40.511238   61267 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 01:00:40.525378   61267 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-653942 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-653942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.216 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 01:00:40.525519   61267 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 01:00:40.525586   61267 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 01:00:40.570378   61267 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0416 01:00:40.570451   61267 ssh_runner.go:195] Run: which lz4
	I0416 01:00:40.575413   61267 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0416 01:00:40.580583   61267 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0416 01:00:40.580640   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0416 01:00:42.194745   61267 crio.go:462] duration metric: took 1.619375861s to copy over tarball
	I0416 01:00:42.194821   61267 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0416 01:00:37.830710   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:39.831822   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:42.330821   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:39.086761   62747 pod_ready.go:102] pod "coredns-76f75df574-dxzhk" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:40.082847   62747 pod_ready.go:92] pod "coredns-76f75df574-dxzhk" in "kube-system" namespace has status "Ready":"True"
	I0416 01:00:40.082868   62747 pod_ready.go:81] duration metric: took 5.007825454s for pod "coredns-76f75df574-dxzhk" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:40.082877   62747 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:42.092402   62747 pod_ready.go:92] pod "etcd-embed-certs-617092" in "kube-system" namespace has status "Ready":"True"
	I0416 01:00:42.092425   62747 pod_ready.go:81] duration metric: took 2.009541778s for pod "etcd-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:42.092438   62747 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:42.545278   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:43.045894   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:43.545886   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:44.044964   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:44.544997   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:45.045340   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:45.545257   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:46.045108   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:46.544994   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:47.045987   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:44.671272   61267 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.476407392s)
	I0416 01:00:44.671304   61267 crio.go:469] duration metric: took 2.476532286s to extract the tarball
	I0416 01:00:44.671315   61267 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0416 01:00:44.709451   61267 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 01:00:44.754382   61267 crio.go:514] all images are preloaded for cri-o runtime.
	I0416 01:00:44.754412   61267 cache_images.go:84] Images are preloaded, skipping loading
	I0416 01:00:44.754424   61267 kubeadm.go:928] updating node { 192.168.50.216 8444 v1.29.3 crio true true} ...
	I0416 01:00:44.754543   61267 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-653942 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.216
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-653942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 01:00:44.754613   61267 ssh_runner.go:195] Run: crio config
	I0416 01:00:44.806896   61267 cni.go:84] Creating CNI manager for ""
	I0416 01:00:44.806918   61267 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 01:00:44.806926   61267 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 01:00:44.806957   61267 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.216 APIServerPort:8444 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-653942 NodeName:default-k8s-diff-port-653942 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.216"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.216 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 01:00:44.807089   61267 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.216
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-653942"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.216
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.216"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0416 01:00:44.807144   61267 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0416 01:00:44.821347   61267 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 01:00:44.821425   61267 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0416 01:00:44.835415   61267 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0416 01:00:44.855797   61267 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 01:00:44.873694   61267 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0416 01:00:44.892535   61267 ssh_runner.go:195] Run: grep 192.168.50.216	control-plane.minikube.internal$ /etc/hosts
	I0416 01:00:44.896538   61267 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.216	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
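
The two commands above implement an idempotent update of /etc/hosts: any existing line ending in the control-plane.minikube.internal hostname is filtered out, the fresh mapping is appended, and the staged file is copied back into place. Below is a minimal Go sketch of the same filter-and-append pattern, for illustration only; it is not minikube's implementation (minikube runs the shell pipeline shown in the log over ssh_runner), and only the IP and hostname are taken from the log.

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // upsertHostsEntry rewrites hostsPath so that exactly one line maps ip to host,
    // mirroring the grep -v / echo / cp pipeline in the log above (which stages the
    // result in /tmp before copying it back with sudo).
    func upsertHostsEntry(hostsPath, ip, host string) error {
    	data, err := os.ReadFile(hostsPath)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		// Drop any stale entry that already ends with "<tab><host>".
    		if strings.HasSuffix(line, "\t"+host) {
    			continue
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
    	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	// IP and hostname copied from the log line above; writing /etc/hosts needs root.
    	if err := upsertHostsEntry("/etc/hosts", "192.168.50.216", "control-plane.minikube.internal"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
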
	I0416 01:00:44.909516   61267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 01:00:45.024588   61267 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 01:00:45.055414   61267 certs.go:68] Setting up /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942 for IP: 192.168.50.216
	I0416 01:00:45.055440   61267 certs.go:194] generating shared ca certs ...
	I0416 01:00:45.055460   61267 certs.go:226] acquiring lock for ca certs: {Name:mkcfa1570e683d94647c63485e1bbb8cf0788316 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:00:45.055622   61267 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key
	I0416 01:00:45.055680   61267 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key
	I0416 01:00:45.055695   61267 certs.go:256] generating profile certs ...
	I0416 01:00:45.055815   61267 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/client.key
	I0416 01:00:45.055905   61267 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/apiserver.key.6620f6bf
	I0416 01:00:45.055975   61267 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/proxy-client.key
	I0416 01:00:45.056139   61267 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem (1338 bytes)
	W0416 01:00:45.056185   61267 certs.go:480] ignoring /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897_empty.pem, impossibly tiny 0 bytes
	I0416 01:00:45.056195   61267 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem (1679 bytes)
	I0416 01:00:45.056234   61267 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem (1082 bytes)
	I0416 01:00:45.056268   61267 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem (1123 bytes)
	I0416 01:00:45.056295   61267 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem (1675 bytes)
	I0416 01:00:45.056355   61267 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem (1708 bytes)
	I0416 01:00:45.057033   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 01:00:45.091704   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 01:00:45.154257   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 01:00:45.181077   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0416 01:00:45.222401   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0416 01:00:45.248568   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0416 01:00:45.277927   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 01:00:45.310417   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0416 01:00:45.341109   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 01:00:45.367056   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem --> /usr/share/ca-certificates/14897.pem (1338 bytes)
	I0416 01:00:45.395117   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /usr/share/ca-certificates/148972.pem (1708 bytes)
	I0416 01:00:45.421921   61267 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 01:00:45.440978   61267 ssh_runner.go:195] Run: openssl version
	I0416 01:00:45.447132   61267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148972.pem && ln -fs /usr/share/ca-certificates/148972.pem /etc/ssl/certs/148972.pem"
	I0416 01:00:45.460008   61267 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148972.pem
	I0416 01:00:45.464820   61267 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 23:49 /usr/share/ca-certificates/148972.pem
	I0416 01:00:45.464884   61267 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148972.pem
	I0416 01:00:45.471232   61267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148972.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 01:00:45.482567   61267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 01:00:45.493541   61267 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:00:45.498792   61267 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:00:45.498849   61267 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:00:45.505511   61267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 01:00:45.517533   61267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14897.pem && ln -fs /usr/share/ca-certificates/14897.pem /etc/ssl/certs/14897.pem"
	I0416 01:00:45.529908   61267 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14897.pem
	I0416 01:00:45.535120   61267 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 23:49 /usr/share/ca-certificates/14897.pem
	I0416 01:00:45.535181   61267 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14897.pem
	I0416 01:00:45.541232   61267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14897.pem /etc/ssl/certs/51391683.0"
	I0416 01:00:45.552946   61267 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 01:00:45.559947   61267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0416 01:00:45.567567   61267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0416 01:00:45.575204   61267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0416 01:00:45.582057   61267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0416 01:00:45.588418   61267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0416 01:00:45.595517   61267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
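
Each `openssl x509 -checkend 86400` call above asks whether the given certificate remains valid for at least another 24 hours, signalled through the exit code. The Go sketch below performs the equivalent check with crypto/x509; it is illustrative only (minikube shells out to openssl as shown), and the certificate path is just one of those listed in the log.

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"errors"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d,
    // the same question `openssl x509 -checkend <seconds>` answers via its exit code.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, errors.New("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	fmt.Println("expires within 24h:", soon)
    }
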
	I0416 01:00:45.602108   61267 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-653942 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-653942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.216 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 01:00:45.602213   61267 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0416 01:00:45.602256   61267 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 01:00:45.639538   61267 cri.go:89] found id: ""
	I0416 01:00:45.639621   61267 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0416 01:00:45.651216   61267 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0416 01:00:45.651245   61267 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0416 01:00:45.651252   61267 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0416 01:00:45.651307   61267 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0416 01:00:45.662522   61267 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0416 01:00:45.663697   61267 kubeconfig.go:125] found "default-k8s-diff-port-653942" server: "https://192.168.50.216:8444"
	I0416 01:00:45.666034   61267 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0416 01:00:45.675864   61267 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.216
	I0416 01:00:45.675900   61267 kubeadm.go:1154] stopping kube-system containers ...
	I0416 01:00:45.675927   61267 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0416 01:00:45.675992   61267 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 01:00:45.718679   61267 cri.go:89] found id: ""
	I0416 01:00:45.718744   61267 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0416 01:00:45.737326   61267 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 01:00:45.748122   61267 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 01:00:45.748146   61267 kubeadm.go:156] found existing configuration files:
	
	I0416 01:00:45.748200   61267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0416 01:00:45.758556   61267 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 01:00:45.758618   61267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 01:00:45.769601   61267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0416 01:00:45.779361   61267 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 01:00:45.779424   61267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 01:00:45.789283   61267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0416 01:00:45.798712   61267 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 01:00:45.798805   61267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 01:00:45.808489   61267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0416 01:00:45.817400   61267 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 01:00:45.817469   61267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 01:00:45.827902   61267 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 01:00:45.838031   61267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:45.962948   61267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:46.862340   61267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:47.092144   61267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:47.170078   61267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
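
The restart path above does not run a full `kubeadm init`; it replays the individual phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the freshly copied kubeadm.yaml. The Go sketch below drives the same phase sequence with local exec for illustration; the commands are copied from the log, but minikube itself issues them through its ssh_runner rather than this loop.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Phases mirrored from the log; each one reuses the generated kubeadm.yaml.
    	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
    	for _, p := range phases {
    		cmd := fmt.Sprintf(
    			`sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
    		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    		if err != nil {
    			fmt.Printf("phase %q failed: %v\n%s", p, err, out)
    			return
    		}
    	}
    }
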
	I0416 01:00:47.284634   61267 api_server.go:52] waiting for apiserver process to appear ...
	I0416 01:00:47.284719   61267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:44.830534   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:47.474148   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:44.100441   62747 pod_ready.go:102] pod "kube-apiserver-embed-certs-617092" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:47.472666   62747 pod_ready.go:102] pod "kube-apiserver-embed-certs-617092" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:47.599694   62747 pod_ready.go:92] pod "kube-apiserver-embed-certs-617092" in "kube-system" namespace has status "Ready":"True"
	I0416 01:00:47.599722   62747 pod_ready.go:81] duration metric: took 5.507276982s for pod "kube-apiserver-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:47.599734   62747 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:47.604479   62747 pod_ready.go:92] pod "kube-controller-manager-embed-certs-617092" in "kube-system" namespace has status "Ready":"True"
	I0416 01:00:47.604496   62747 pod_ready.go:81] duration metric: took 4.755735ms for pod "kube-controller-manager-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:47.604504   62747 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-xtdf4" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:47.608936   62747 pod_ready.go:92] pod "kube-proxy-xtdf4" in "kube-system" namespace has status "Ready":"True"
	I0416 01:00:47.608951   62747 pod_ready.go:81] duration metric: took 4.441482ms for pod "kube-proxy-xtdf4" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:47.608959   62747 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:47.613108   62747 pod_ready.go:92] pod "kube-scheduler-embed-certs-617092" in "kube-system" namespace has status "Ready":"True"
	I0416 01:00:47.613123   62747 pod_ready.go:81] duration metric: took 4.157722ms for pod "kube-scheduler-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:47.613130   62747 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace to be "Ready" ...
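
The pod_ready lines above poll each control-plane pod until its Ready condition reports True, recording a duration metric per pod. A condensed client-go sketch of that kind of wait is shown below for illustration; the kubeconfig path and pod name are assumptions taken from the log, and minikube's own helper in pod_ready.go handles more cases (node readiness, label selectors, timeouts per pod).

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls until the named pod reports Ready=True or the timeout elapses.
    func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("pod %s/%s not Ready after %v", ns, name, timeout)
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // path taken from the log
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	if err := waitPodReady(cs, "kube-system", "kube-scheduler-embed-certs-617092", 4*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
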
	I0416 01:00:47.545567   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:48.045898   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:48.545631   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:49.045678   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:49.545274   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:50.045281   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:50.545926   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:51.045076   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:51.545303   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:52.045271   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:47.785698   61267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:48.284828   61267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:48.315894   61267 api_server.go:72] duration metric: took 1.031258915s to wait for apiserver process to appear ...
	I0416 01:00:48.315925   61267 api_server.go:88] waiting for apiserver healthz status ...
	I0416 01:00:48.315950   61267 api_server.go:253] Checking apiserver healthz at https://192.168.50.216:8444/healthz ...
	I0416 01:00:51.781922   61267 api_server.go:279] https://192.168.50.216:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0416 01:00:51.781957   61267 api_server.go:103] status: https://192.168.50.216:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0416 01:00:51.781976   61267 api_server.go:253] Checking apiserver healthz at https://192.168.50.216:8444/healthz ...
	I0416 01:00:51.830460   61267 api_server.go:279] https://192.168.50.216:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0416 01:00:51.830491   61267 api_server.go:103] status: https://192.168.50.216:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0416 01:00:51.830505   61267 api_server.go:253] Checking apiserver healthz at https://192.168.50.216:8444/healthz ...
	I0416 01:00:51.858205   61267 api_server.go:279] https://192.168.50.216:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0416 01:00:51.858240   61267 api_server.go:103] status: https://192.168.50.216:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0416 01:00:52.316376   61267 api_server.go:253] Checking apiserver healthz at https://192.168.50.216:8444/healthz ...
	I0416 01:00:52.332667   61267 api_server.go:279] https://192.168.50.216:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0416 01:00:52.332700   61267 api_server.go:103] status: https://192.168.50.216:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0416 01:00:49.829236   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:52.329805   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:49.620626   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:51.620730   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:52.816565   61267 api_server.go:253] Checking apiserver healthz at https://192.168.50.216:8444/healthz ...
	I0416 01:00:52.827158   61267 api_server.go:279] https://192.168.50.216:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0416 01:00:52.827191   61267 api_server.go:103] status: https://192.168.50.216:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0416 01:00:53.316864   61267 api_server.go:253] Checking apiserver healthz at https://192.168.50.216:8444/healthz ...
	I0416 01:00:53.321112   61267 api_server.go:279] https://192.168.50.216:8444/healthz returned 200:
	ok
	I0416 01:00:53.329289   61267 api_server.go:141] control plane version: v1.29.3
	I0416 01:00:53.329320   61267 api_server.go:131] duration metric: took 5.013387579s to wait for apiserver health ...
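
The healthz sequence above is a plain HTTPS poll of the apiserver: the endpoint first answers 403 for the anonymous probe, then 500 while post-start hooks are still completing, and finally 200 once the control plane is up. The Go sketch below is a bare-bones version of such a poller, under the assumption that certificate verification is skipped for the probe; minikube's actual client setup lives in api_server.go and differs in detail.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitHealthz polls url until it returns 200 OK or the timeout expires.
    func waitHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// The apiserver serves a self-signed chain here, so this probe skips verification.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver at %s never became healthy", url)
    }

    func main() {
    	if err := waitHealthz("https://192.168.50.216:8444/healthz", 5*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
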
	I0416 01:00:53.329331   61267 cni.go:84] Creating CNI manager for ""
	I0416 01:00:53.329340   61267 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 01:00:53.331125   61267 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0416 01:00:52.545407   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:53.044961   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:53.545290   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:54.044994   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:54.545292   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:55.045285   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:55.545909   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:56.045029   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:56.545343   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:57.044988   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:53.332626   61267 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0416 01:00:53.366364   61267 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
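
The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration that the "Configuring bridge CNI" message refers to. For illustration, the sketch below writes a representative bridge conflist from Go; the field values are typical bridge/host-local/portmap defaults on the pod subnet from the log, not necessarily the exact bytes minikube ships.

    package main

    import "os"

    // A typical bridge CNI configuration: one bridge plugin with host-local IPAM
    // on the cluster pod subnet, plus the portmap plugin for hostPort support.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true}
        }
      ]
    }
    `

    func main() {
    	// Written to the path shown in the log; requires root on a real node.
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0644); err != nil {
    		panic(err)
    	}
    }
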
	I0416 01:00:53.401881   61267 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 01:00:53.413478   61267 system_pods.go:59] 8 kube-system pods found
	I0416 01:00:53.413512   61267 system_pods.go:61] "coredns-76f75df574-cvlpq" [c200d470-26dd-40ea-a79b-29d9104122bb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 01:00:53.413527   61267 system_pods.go:61] "etcd-default-k8s-diff-port-653942" [24e85fc2-fb57-4ef6-9817-846207109e61] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0416 01:00:53.413537   61267 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-653942" [bd473e94-72a6-4391-b787-49e16e8a213f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0416 01:00:53.413547   61267 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-653942" [31ed7183-a12b-422c-9e67-bba91147347a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0416 01:00:53.413555   61267 system_pods.go:61] "kube-proxy-6q9k7" [ba6d9cf9-37a5-4e01-9489-ce7395fd2a38] Running
	I0416 01:00:53.413563   61267 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-653942" [4b481275-4ded-4251-963f-910954f10d15] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0416 01:00:53.413579   61267 system_pods.go:61] "metrics-server-57f55c9bc5-9cnv2" [24905ded-5bf8-4b34-8069-2e65c5ad8f8d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 01:00:53.413592   61267 system_pods.go:61] "storage-provisioner" [16ba28d0-2031-4c21-9c22-1b9289517449] Running
	I0416 01:00:53.413601   61267 system_pods.go:74] duration metric: took 11.695334ms to wait for pod list to return data ...
	I0416 01:00:53.413613   61267 node_conditions.go:102] verifying NodePressure condition ...
	I0416 01:00:53.417579   61267 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 01:00:53.417609   61267 node_conditions.go:123] node cpu capacity is 2
	I0416 01:00:53.417623   61267 node_conditions.go:105] duration metric: took 4.002735ms to run NodePressure ...
	I0416 01:00:53.417642   61267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:53.688389   61267 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0416 01:00:53.692755   61267 kubeadm.go:733] kubelet initialised
	I0416 01:00:53.692777   61267 kubeadm.go:734] duration metric: took 4.359298ms waiting for restarted kubelet to initialise ...
	I0416 01:00:53.692784   61267 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:00:53.698521   61267 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-cvlpq" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:53.704496   61267 pod_ready.go:97] node "default-k8s-diff-port-653942" hosting pod "coredns-76f75df574-cvlpq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653942" has status "Ready":"False"
	I0416 01:00:53.704532   61267 pod_ready.go:81] duration metric: took 5.98382ms for pod "coredns-76f75df574-cvlpq" in "kube-system" namespace to be "Ready" ...
	E0416 01:00:53.704543   61267 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-653942" hosting pod "coredns-76f75df574-cvlpq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653942" has status "Ready":"False"
	I0416 01:00:53.704550   61267 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:53.713110   61267 pod_ready.go:97] node "default-k8s-diff-port-653942" hosting pod "etcd-default-k8s-diff-port-653942" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653942" has status "Ready":"False"
	I0416 01:00:53.713144   61267 pod_ready.go:81] duration metric: took 8.58568ms for pod "etcd-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	E0416 01:00:53.713188   61267 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-653942" hosting pod "etcd-default-k8s-diff-port-653942" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653942" has status "Ready":"False"
	I0416 01:00:53.713201   61267 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:53.718190   61267 pod_ready.go:97] node "default-k8s-diff-port-653942" hosting pod "kube-apiserver-default-k8s-diff-port-653942" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653942" has status "Ready":"False"
	I0416 01:00:53.718210   61267 pod_ready.go:81] duration metric: took 4.997527ms for pod "kube-apiserver-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	E0416 01:00:53.718219   61267 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-653942" hosting pod "kube-apiserver-default-k8s-diff-port-653942" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653942" has status "Ready":"False"
	I0416 01:00:53.718224   61267 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:53.805697   61267 pod_ready.go:97] node "default-k8s-diff-port-653942" hosting pod "kube-controller-manager-default-k8s-diff-port-653942" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653942" has status "Ready":"False"
	I0416 01:00:53.805727   61267 pod_ready.go:81] duration metric: took 87.493805ms for pod "kube-controller-manager-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	E0416 01:00:53.805738   61267 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-653942" hosting pod "kube-controller-manager-default-k8s-diff-port-653942" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653942" has status "Ready":"False"
	I0416 01:00:53.805743   61267 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6q9k7" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:54.205884   61267 pod_ready.go:92] pod "kube-proxy-6q9k7" in "kube-system" namespace has status "Ready":"True"
	I0416 01:00:54.205911   61267 pod_ready.go:81] duration metric: took 400.161115ms for pod "kube-proxy-6q9k7" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:54.205921   61267 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:56.213276   61267 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:54.829391   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:57.330218   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:54.119995   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:56.121220   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:57.545333   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:58.045305   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:58.545871   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:59.045432   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:59.545000   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:00.045001   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:00.545855   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:01.045812   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:01.545477   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:02.045635   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:58.215064   61267 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:00.215192   61267 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:59.330599   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:01.831017   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:58.620594   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:01.120516   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:02.545690   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:03.045754   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:03.544965   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:04.045062   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:04.545196   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:05.045986   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:05.545246   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:06.045853   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:06.545863   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:07.045209   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:02.712971   61267 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:04.713437   61267 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:07.212886   61267 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:04.328673   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:06.329726   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:03.124343   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:05.619912   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:07.622044   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:07.544952   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:08.045290   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:08.545296   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:09.045795   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:09.545932   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:10.045124   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:10.045209   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:10.087200   62139 cri.go:89] found id: ""
	I0416 01:01:10.087229   62139 logs.go:276] 0 containers: []
	W0416 01:01:10.087237   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:10.087243   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:10.087300   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:10.126194   62139 cri.go:89] found id: ""
	I0416 01:01:10.126218   62139 logs.go:276] 0 containers: []
	W0416 01:01:10.126225   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:10.126230   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:10.126275   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:10.165238   62139 cri.go:89] found id: ""
	I0416 01:01:10.165271   62139 logs.go:276] 0 containers: []
	W0416 01:01:10.165282   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:10.165290   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:10.165357   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:10.202896   62139 cri.go:89] found id: ""
	I0416 01:01:10.202934   62139 logs.go:276] 0 containers: []
	W0416 01:01:10.202945   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:10.202952   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:10.203015   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:10.243576   62139 cri.go:89] found id: ""
	I0416 01:01:10.243605   62139 logs.go:276] 0 containers: []
	W0416 01:01:10.243613   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:10.243619   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:10.243667   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:10.278637   62139 cri.go:89] found id: ""
	I0416 01:01:10.278661   62139 logs.go:276] 0 containers: []
	W0416 01:01:10.278669   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:10.278674   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:10.278726   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:10.316811   62139 cri.go:89] found id: ""
	I0416 01:01:10.316844   62139 logs.go:276] 0 containers: []
	W0416 01:01:10.316852   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:10.316857   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:10.316914   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:10.359934   62139 cri.go:89] found id: ""
	I0416 01:01:10.359960   62139 logs.go:276] 0 containers: []
	W0416 01:01:10.359967   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:10.359975   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:10.359987   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:10.413082   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:10.413119   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:10.428605   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:10.428632   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:10.552536   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:10.552561   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:10.552578   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:10.615054   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:10.615091   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:08.213557   61267 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"True"
	I0416 01:01:08.213584   61267 pod_ready.go:81] duration metric: took 14.007657025s for pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:01:08.213594   61267 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace to be "Ready" ...
	I0416 01:01:10.224984   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:08.831515   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:11.330529   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:10.122213   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:12.621939   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:13.160749   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:13.178449   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:13.178505   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:13.224192   62139 cri.go:89] found id: ""
	I0416 01:01:13.224215   62139 logs.go:276] 0 containers: []
	W0416 01:01:13.224222   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:13.224228   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:13.224287   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:13.261441   62139 cri.go:89] found id: ""
	I0416 01:01:13.261469   62139 logs.go:276] 0 containers: []
	W0416 01:01:13.261476   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:13.261481   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:13.261545   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:13.296602   62139 cri.go:89] found id: ""
	I0416 01:01:13.296636   62139 logs.go:276] 0 containers: []
	W0416 01:01:13.296647   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:13.296654   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:13.296720   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:13.333944   62139 cri.go:89] found id: ""
	I0416 01:01:13.333968   62139 logs.go:276] 0 containers: []
	W0416 01:01:13.333977   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:13.333984   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:13.334049   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:13.372919   62139 cri.go:89] found id: ""
	I0416 01:01:13.372944   62139 logs.go:276] 0 containers: []
	W0416 01:01:13.372957   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:13.372965   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:13.373022   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:13.413257   62139 cri.go:89] found id: ""
	I0416 01:01:13.413287   62139 logs.go:276] 0 containers: []
	W0416 01:01:13.413299   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:13.413306   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:13.413373   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:13.451705   62139 cri.go:89] found id: ""
	I0416 01:01:13.451737   62139 logs.go:276] 0 containers: []
	W0416 01:01:13.451748   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:13.451755   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:13.451836   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:13.492549   62139 cri.go:89] found id: ""
	I0416 01:01:13.492576   62139 logs.go:276] 0 containers: []
	W0416 01:01:13.492586   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:13.492597   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:13.492613   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:13.547267   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:13.547303   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:13.568975   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:13.569002   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:13.674444   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:13.674469   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:13.674482   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:13.745111   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:13.745145   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:16.286955   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:16.301151   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:16.301257   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:16.337516   62139 cri.go:89] found id: ""
	I0416 01:01:16.337544   62139 logs.go:276] 0 containers: []
	W0416 01:01:16.337554   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:16.337561   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:16.337623   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:16.372674   62139 cri.go:89] found id: ""
	I0416 01:01:16.372702   62139 logs.go:276] 0 containers: []
	W0416 01:01:16.372712   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:16.372720   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:16.372783   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:16.411181   62139 cri.go:89] found id: ""
	I0416 01:01:16.411208   62139 logs.go:276] 0 containers: []
	W0416 01:01:16.411224   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:16.411230   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:16.411283   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:16.449063   62139 cri.go:89] found id: ""
	I0416 01:01:16.449102   62139 logs.go:276] 0 containers: []
	W0416 01:01:16.449109   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:16.449114   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:16.449183   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:16.491877   62139 cri.go:89] found id: ""
	I0416 01:01:16.491909   62139 logs.go:276] 0 containers: []
	W0416 01:01:16.491918   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:16.491924   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:16.491981   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:16.532522   62139 cri.go:89] found id: ""
	I0416 01:01:16.532553   62139 logs.go:276] 0 containers: []
	W0416 01:01:16.532564   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:16.532572   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:16.532633   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:16.572194   62139 cri.go:89] found id: ""
	I0416 01:01:16.572222   62139 logs.go:276] 0 containers: []
	W0416 01:01:16.572233   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:16.572240   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:16.572302   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:16.614671   62139 cri.go:89] found id: ""
	I0416 01:01:16.614697   62139 logs.go:276] 0 containers: []
	W0416 01:01:16.614704   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:16.614712   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:16.614726   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:16.632146   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:16.632179   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:16.707597   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:16.707621   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:16.707633   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:16.783604   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:16.783640   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:16.828937   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:16.828977   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:12.721088   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:15.220256   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:17.222263   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:13.830983   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:16.329120   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:15.119386   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:17.120038   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:19.385008   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:19.400949   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:19.401035   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:19.463792   62139 cri.go:89] found id: ""
	I0416 01:01:19.463825   62139 logs.go:276] 0 containers: []
	W0416 01:01:19.463836   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:19.463843   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:19.463910   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:19.523289   62139 cri.go:89] found id: ""
	I0416 01:01:19.523322   62139 logs.go:276] 0 containers: []
	W0416 01:01:19.523332   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:19.523340   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:19.523392   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:19.558891   62139 cri.go:89] found id: ""
	I0416 01:01:19.558928   62139 logs.go:276] 0 containers: []
	W0416 01:01:19.558939   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:19.558946   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:19.559009   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:19.597876   62139 cri.go:89] found id: ""
	I0416 01:01:19.597905   62139 logs.go:276] 0 containers: []
	W0416 01:01:19.597917   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:19.597925   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:19.597980   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:19.637536   62139 cri.go:89] found id: ""
	I0416 01:01:19.637563   62139 logs.go:276] 0 containers: []
	W0416 01:01:19.637571   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:19.637576   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:19.637623   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:19.674414   62139 cri.go:89] found id: ""
	I0416 01:01:19.674447   62139 logs.go:276] 0 containers: []
	W0416 01:01:19.674458   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:19.674465   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:19.674525   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:19.709717   62139 cri.go:89] found id: ""
	I0416 01:01:19.709751   62139 logs.go:276] 0 containers: []
	W0416 01:01:19.709761   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:19.709769   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:19.709837   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:19.747458   62139 cri.go:89] found id: ""
	I0416 01:01:19.747482   62139 logs.go:276] 0 containers: []
	W0416 01:01:19.747489   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:19.747505   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:19.747523   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:19.834811   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:19.834846   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:19.876398   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:19.876428   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:19.931596   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:19.931632   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:19.947074   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:19.947103   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:20.023434   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:19.720883   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:21.721969   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:18.829276   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:20.829405   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:19.120254   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:21.120520   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:22.524036   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:22.539399   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:22.539488   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:22.574696   62139 cri.go:89] found id: ""
	I0416 01:01:22.574723   62139 logs.go:276] 0 containers: []
	W0416 01:01:22.574733   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:22.574741   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:22.574805   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:22.617474   62139 cri.go:89] found id: ""
	I0416 01:01:22.617503   62139 logs.go:276] 0 containers: []
	W0416 01:01:22.617514   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:22.617521   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:22.617579   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:22.657744   62139 cri.go:89] found id: ""
	I0416 01:01:22.657773   62139 logs.go:276] 0 containers: []
	W0416 01:01:22.657781   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:22.657786   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:22.657842   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:22.695513   62139 cri.go:89] found id: ""
	I0416 01:01:22.695544   62139 logs.go:276] 0 containers: []
	W0416 01:01:22.695552   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:22.695557   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:22.695606   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:22.732943   62139 cri.go:89] found id: ""
	I0416 01:01:22.732973   62139 logs.go:276] 0 containers: []
	W0416 01:01:22.732983   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:22.732990   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:22.733051   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:22.768735   62139 cri.go:89] found id: ""
	I0416 01:01:22.768767   62139 logs.go:276] 0 containers: []
	W0416 01:01:22.768775   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:22.768782   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:22.768842   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:22.804330   62139 cri.go:89] found id: ""
	I0416 01:01:22.804352   62139 logs.go:276] 0 containers: []
	W0416 01:01:22.804361   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:22.804367   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:22.804425   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:22.842165   62139 cri.go:89] found id: ""
	I0416 01:01:22.842192   62139 logs.go:276] 0 containers: []
	W0416 01:01:22.842199   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:22.842207   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:22.842219   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:22.921859   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:22.921880   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:22.921893   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:23.003432   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:23.003468   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:23.045446   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:23.045476   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:23.097327   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:23.097358   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:25.612297   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:25.627489   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:25.627565   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:25.664040   62139 cri.go:89] found id: ""
	I0416 01:01:25.664072   62139 logs.go:276] 0 containers: []
	W0416 01:01:25.664083   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:25.664091   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:25.664149   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:25.701004   62139 cri.go:89] found id: ""
	I0416 01:01:25.701029   62139 logs.go:276] 0 containers: []
	W0416 01:01:25.701036   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:25.701042   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:25.701087   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:25.740108   62139 cri.go:89] found id: ""
	I0416 01:01:25.740136   62139 logs.go:276] 0 containers: []
	W0416 01:01:25.740144   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:25.740150   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:25.740194   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:25.778413   62139 cri.go:89] found id: ""
	I0416 01:01:25.778447   62139 logs.go:276] 0 containers: []
	W0416 01:01:25.778458   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:25.778465   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:25.778530   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:25.815188   62139 cri.go:89] found id: ""
	I0416 01:01:25.815215   62139 logs.go:276] 0 containers: []
	W0416 01:01:25.815223   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:25.815230   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:25.815277   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:25.856370   62139 cri.go:89] found id: ""
	I0416 01:01:25.856402   62139 logs.go:276] 0 containers: []
	W0416 01:01:25.856410   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:25.856416   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:25.856476   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:25.895363   62139 cri.go:89] found id: ""
	I0416 01:01:25.895388   62139 logs.go:276] 0 containers: []
	W0416 01:01:25.895396   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:25.895402   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:25.895455   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:25.931854   62139 cri.go:89] found id: ""
	I0416 01:01:25.931881   62139 logs.go:276] 0 containers: []
	W0416 01:01:25.931889   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:25.931897   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:25.931923   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:26.008395   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:26.008419   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:26.008436   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:26.087946   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:26.087983   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:26.134693   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:26.134725   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:26.189618   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:26.189652   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:24.220798   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:26.221193   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:22.833917   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:25.331147   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:27.331702   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:23.620819   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:25.621119   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:28.705010   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:28.719575   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:28.719644   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:28.759011   62139 cri.go:89] found id: ""
	I0416 01:01:28.759037   62139 logs.go:276] 0 containers: []
	W0416 01:01:28.759044   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:28.759050   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:28.759112   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:28.794640   62139 cri.go:89] found id: ""
	I0416 01:01:28.794675   62139 logs.go:276] 0 containers: []
	W0416 01:01:28.794687   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:28.794695   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:28.794807   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:28.835634   62139 cri.go:89] found id: ""
	I0416 01:01:28.835663   62139 logs.go:276] 0 containers: []
	W0416 01:01:28.835674   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:28.835681   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:28.835747   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:28.875384   62139 cri.go:89] found id: ""
	I0416 01:01:28.875408   62139 logs.go:276] 0 containers: []
	W0416 01:01:28.875426   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:28.875433   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:28.875484   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:28.921202   62139 cri.go:89] found id: ""
	I0416 01:01:28.921234   62139 logs.go:276] 0 containers: []
	W0416 01:01:28.921244   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:28.921252   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:28.921314   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:28.958791   62139 cri.go:89] found id: ""
	I0416 01:01:28.958820   62139 logs.go:276] 0 containers: []
	W0416 01:01:28.958828   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:28.958834   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:28.958923   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:28.996136   62139 cri.go:89] found id: ""
	I0416 01:01:28.996168   62139 logs.go:276] 0 containers: []
	W0416 01:01:28.996179   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:28.996185   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:28.996259   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:29.033912   62139 cri.go:89] found id: ""
	I0416 01:01:29.033939   62139 logs.go:276] 0 containers: []
	W0416 01:01:29.033946   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:29.033954   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:29.033969   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:29.114162   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:29.114209   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:29.153934   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:29.153965   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:29.207548   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:29.207584   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:29.222158   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:29.222184   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:29.297414   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:31.798026   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:31.812740   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:31.812815   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:31.855058   62139 cri.go:89] found id: ""
	I0416 01:01:31.855087   62139 logs.go:276] 0 containers: []
	W0416 01:01:31.855098   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:31.855105   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:31.855172   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:31.897128   62139 cri.go:89] found id: ""
	I0416 01:01:31.897170   62139 logs.go:276] 0 containers: []
	W0416 01:01:31.897192   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:31.897200   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:31.897259   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:31.934497   62139 cri.go:89] found id: ""
	I0416 01:01:31.934520   62139 logs.go:276] 0 containers: []
	W0416 01:01:31.934532   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:31.934541   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:31.934588   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:31.974020   62139 cri.go:89] found id: ""
	I0416 01:01:31.974051   62139 logs.go:276] 0 containers: []
	W0416 01:01:31.974062   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:31.974093   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:31.974163   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:32.015433   62139 cri.go:89] found id: ""
	I0416 01:01:32.015460   62139 logs.go:276] 0 containers: []
	W0416 01:01:32.015471   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:32.015477   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:32.015540   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:32.058286   62139 cri.go:89] found id: ""
	I0416 01:01:32.058336   62139 logs.go:276] 0 containers: []
	W0416 01:01:32.058345   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:32.058351   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:32.058408   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:28.720596   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:30.720732   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:29.828996   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:31.830765   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:28.121038   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:30.619604   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:32.620210   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:32.100331   62139 cri.go:89] found id: ""
	I0416 01:01:32.102041   62139 logs.go:276] 0 containers: []
	W0416 01:01:32.102054   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:32.102061   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:32.102115   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:32.141420   62139 cri.go:89] found id: ""
	I0416 01:01:32.141446   62139 logs.go:276] 0 containers: []
	W0416 01:01:32.141454   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:32.141462   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:32.141473   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:32.195323   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:32.195364   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:32.210180   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:32.210206   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:32.282548   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:32.282570   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:32.282585   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:32.360627   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:32.360663   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:34.901239   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:34.917097   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:34.917205   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:34.959297   62139 cri.go:89] found id: ""
	I0416 01:01:34.959327   62139 logs.go:276] 0 containers: []
	W0416 01:01:34.959337   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:34.959344   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:34.959422   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:35.000927   62139 cri.go:89] found id: ""
	I0416 01:01:35.000974   62139 logs.go:276] 0 containers: []
	W0416 01:01:35.000984   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:35.001000   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:35.001064   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:35.038049   62139 cri.go:89] found id: ""
	I0416 01:01:35.038073   62139 logs.go:276] 0 containers: []
	W0416 01:01:35.038082   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:35.038090   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:35.038143   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:35.075396   62139 cri.go:89] found id: ""
	I0416 01:01:35.075467   62139 logs.go:276] 0 containers: []
	W0416 01:01:35.075481   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:35.075490   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:35.075591   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:35.114297   62139 cri.go:89] found id: ""
	I0416 01:01:35.114325   62139 logs.go:276] 0 containers: []
	W0416 01:01:35.114335   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:35.114343   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:35.114405   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:35.152075   62139 cri.go:89] found id: ""
	I0416 01:01:35.152099   62139 logs.go:276] 0 containers: []
	W0416 01:01:35.152106   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:35.152112   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:35.152161   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:35.187945   62139 cri.go:89] found id: ""
	I0416 01:01:35.187974   62139 logs.go:276] 0 containers: []
	W0416 01:01:35.187984   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:35.187991   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:35.188057   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:35.225225   62139 cri.go:89] found id: ""
	I0416 01:01:35.225253   62139 logs.go:276] 0 containers: []
	W0416 01:01:35.225262   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:35.225272   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:35.225287   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:35.279584   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:35.279628   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:35.293416   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:35.293456   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:35.370122   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:35.370147   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:35.370159   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:35.451482   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:35.451517   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:32.723226   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:35.221390   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:34.329009   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:36.329761   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:34.620492   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:36.620527   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:37.994358   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:38.008209   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:38.008277   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:38.047905   62139 cri.go:89] found id: ""
	I0416 01:01:38.047943   62139 logs.go:276] 0 containers: []
	W0416 01:01:38.047955   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:38.047962   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:38.048016   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:38.085749   62139 cri.go:89] found id: ""
	I0416 01:01:38.085780   62139 logs.go:276] 0 containers: []
	W0416 01:01:38.085790   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:38.085797   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:38.085864   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:38.122396   62139 cri.go:89] found id: ""
	I0416 01:01:38.122419   62139 logs.go:276] 0 containers: []
	W0416 01:01:38.122427   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:38.122432   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:38.122479   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:38.159284   62139 cri.go:89] found id: ""
	I0416 01:01:38.159313   62139 logs.go:276] 0 containers: []
	W0416 01:01:38.159322   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:38.159329   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:38.159390   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:38.193245   62139 cri.go:89] found id: ""
	I0416 01:01:38.193280   62139 logs.go:276] 0 containers: []
	W0416 01:01:38.193291   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:38.193298   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:38.193362   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:38.229147   62139 cri.go:89] found id: ""
	I0416 01:01:38.229179   62139 logs.go:276] 0 containers: []
	W0416 01:01:38.229188   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:38.229194   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:38.229251   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:38.267285   62139 cri.go:89] found id: ""
	I0416 01:01:38.267309   62139 logs.go:276] 0 containers: []
	W0416 01:01:38.267317   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:38.267321   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:38.267389   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:38.305181   62139 cri.go:89] found id: ""
	I0416 01:01:38.305207   62139 logs.go:276] 0 containers: []
	W0416 01:01:38.305215   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:38.305222   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:38.305237   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:38.321714   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:38.321742   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:38.398352   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:38.398372   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:38.398382   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:38.474095   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:38.474129   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:38.520540   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:38.520581   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:41.072083   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:41.086767   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:41.086860   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:41.125119   62139 cri.go:89] found id: ""
	I0416 01:01:41.125149   62139 logs.go:276] 0 containers: []
	W0416 01:01:41.125175   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:41.125182   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:41.125253   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:41.159885   62139 cri.go:89] found id: ""
	I0416 01:01:41.159915   62139 logs.go:276] 0 containers: []
	W0416 01:01:41.159925   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:41.159931   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:41.160012   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:41.196334   62139 cri.go:89] found id: ""
	I0416 01:01:41.196366   62139 logs.go:276] 0 containers: []
	W0416 01:01:41.196377   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:41.196385   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:41.196447   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:41.234254   62139 cri.go:89] found id: ""
	I0416 01:01:41.234282   62139 logs.go:276] 0 containers: []
	W0416 01:01:41.234300   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:41.234319   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:41.234413   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:41.271499   62139 cri.go:89] found id: ""
	I0416 01:01:41.271523   62139 logs.go:276] 0 containers: []
	W0416 01:01:41.271531   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:41.271536   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:41.271604   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:41.311064   62139 cri.go:89] found id: ""
	I0416 01:01:41.311096   62139 logs.go:276] 0 containers: []
	W0416 01:01:41.311107   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:41.311114   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:41.311179   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:41.349012   62139 cri.go:89] found id: ""
	I0416 01:01:41.349043   62139 logs.go:276] 0 containers: []
	W0416 01:01:41.349053   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:41.349060   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:41.349117   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:41.385258   62139 cri.go:89] found id: ""
	I0416 01:01:41.385298   62139 logs.go:276] 0 containers: []
	W0416 01:01:41.385305   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:41.385315   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:41.385330   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:41.470086   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:41.470130   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:41.513835   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:41.513870   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:41.565980   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:41.566013   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:41.582647   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:41.582678   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:41.658928   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:37.724628   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:40.222025   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:38.329899   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:40.330143   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:39.120850   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:41.121383   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:44.159107   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:44.173015   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:44.173088   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:44.214310   62139 cri.go:89] found id: ""
	I0416 01:01:44.214345   62139 logs.go:276] 0 containers: []
	W0416 01:01:44.214363   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:44.214374   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:44.214462   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:44.256476   62139 cri.go:89] found id: ""
	I0416 01:01:44.256503   62139 logs.go:276] 0 containers: []
	W0416 01:01:44.256511   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:44.256516   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:44.256577   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:44.298047   62139 cri.go:89] found id: ""
	I0416 01:01:44.298079   62139 logs.go:276] 0 containers: []
	W0416 01:01:44.298089   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:44.298097   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:44.298158   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:44.339165   62139 cri.go:89] found id: ""
	I0416 01:01:44.339196   62139 logs.go:276] 0 containers: []
	W0416 01:01:44.339206   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:44.339213   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:44.339280   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:44.378078   62139 cri.go:89] found id: ""
	I0416 01:01:44.378108   62139 logs.go:276] 0 containers: []
	W0416 01:01:44.378116   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:44.378122   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:44.378170   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:44.421494   62139 cri.go:89] found id: ""
	I0416 01:01:44.421525   62139 logs.go:276] 0 containers: []
	W0416 01:01:44.421536   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:44.421543   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:44.421609   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:44.459919   62139 cri.go:89] found id: ""
	I0416 01:01:44.459948   62139 logs.go:276] 0 containers: []
	W0416 01:01:44.459958   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:44.459965   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:44.460025   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:44.499448   62139 cri.go:89] found id: ""
	I0416 01:01:44.499479   62139 logs.go:276] 0 containers: []
	W0416 01:01:44.499489   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:44.499500   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:44.499516   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:44.555122   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:44.555159   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:44.572048   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:44.572075   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:44.646252   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:44.646283   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:44.646299   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:44.730593   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:44.730620   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:42.720855   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:44.723141   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:46.723452   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:42.831045   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:45.329039   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:47.331355   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:43.619897   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:45.620068   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:47.620162   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:47.276658   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:47.291354   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:47.291431   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:47.334998   62139 cri.go:89] found id: ""
	I0416 01:01:47.335036   62139 logs.go:276] 0 containers: []
	W0416 01:01:47.335055   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:47.335062   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:47.335121   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:47.376546   62139 cri.go:89] found id: ""
	I0416 01:01:47.376575   62139 logs.go:276] 0 containers: []
	W0416 01:01:47.376582   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:47.376587   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:47.376647   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:47.418609   62139 cri.go:89] found id: ""
	I0416 01:01:47.418642   62139 logs.go:276] 0 containers: []
	W0416 01:01:47.418654   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:47.418661   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:47.418721   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:47.459432   62139 cri.go:89] found id: ""
	I0416 01:01:47.459458   62139 logs.go:276] 0 containers: []
	W0416 01:01:47.459465   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:47.459470   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:47.459518   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:47.497776   62139 cri.go:89] found id: ""
	I0416 01:01:47.497800   62139 logs.go:276] 0 containers: []
	W0416 01:01:47.497808   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:47.497813   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:47.497866   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:47.536803   62139 cri.go:89] found id: ""
	I0416 01:01:47.536835   62139 logs.go:276] 0 containers: []
	W0416 01:01:47.536842   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:47.536849   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:47.536916   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:47.575883   62139 cri.go:89] found id: ""
	I0416 01:01:47.575916   62139 logs.go:276] 0 containers: []
	W0416 01:01:47.575923   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:47.575931   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:47.575976   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:47.627676   62139 cri.go:89] found id: ""
	I0416 01:01:47.627697   62139 logs.go:276] 0 containers: []
	W0416 01:01:47.627703   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:47.627711   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:47.627725   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:47.669714   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:47.669745   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:47.721349   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:47.721389   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:47.735833   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:47.735859   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:47.806890   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:47.806913   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:47.806925   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:50.386960   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:50.400832   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:50.400901   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:50.443042   62139 cri.go:89] found id: ""
	I0416 01:01:50.443076   62139 logs.go:276] 0 containers: []
	W0416 01:01:50.443086   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:50.443094   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:50.443157   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:50.480495   62139 cri.go:89] found id: ""
	I0416 01:01:50.480526   62139 logs.go:276] 0 containers: []
	W0416 01:01:50.480536   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:50.480544   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:50.480602   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:50.516578   62139 cri.go:89] found id: ""
	I0416 01:01:50.516605   62139 logs.go:276] 0 containers: []
	W0416 01:01:50.516613   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:50.516618   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:50.516676   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:50.555302   62139 cri.go:89] found id: ""
	I0416 01:01:50.555330   62139 logs.go:276] 0 containers: []
	W0416 01:01:50.555337   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:50.555344   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:50.555388   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:50.594647   62139 cri.go:89] found id: ""
	I0416 01:01:50.594674   62139 logs.go:276] 0 containers: []
	W0416 01:01:50.594682   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:50.594688   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:50.594737   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:50.633401   62139 cri.go:89] found id: ""
	I0416 01:01:50.633428   62139 logs.go:276] 0 containers: []
	W0416 01:01:50.633436   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:50.633442   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:50.633501   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:50.673714   62139 cri.go:89] found id: ""
	I0416 01:01:50.673744   62139 logs.go:276] 0 containers: []
	W0416 01:01:50.673755   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:50.673763   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:50.673811   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:50.710103   62139 cri.go:89] found id: ""
	I0416 01:01:50.710127   62139 logs.go:276] 0 containers: []
	W0416 01:01:50.710134   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:50.710142   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:50.710153   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:50.765121   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:50.765168   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:50.780407   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:50.780436   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:50.855602   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:50.855635   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:50.855663   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:50.937249   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:50.937283   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:49.220483   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:51.724129   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:49.829742   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:52.330579   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:49.621383   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:52.120841   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:53.481261   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:53.495872   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:53.495931   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:53.532710   62139 cri.go:89] found id: ""
	I0416 01:01:53.532738   62139 logs.go:276] 0 containers: []
	W0416 01:01:53.532748   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:53.532756   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:53.532815   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:53.568734   62139 cri.go:89] found id: ""
	I0416 01:01:53.568763   62139 logs.go:276] 0 containers: []
	W0416 01:01:53.568770   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:53.568776   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:53.568841   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:53.608937   62139 cri.go:89] found id: ""
	I0416 01:01:53.608965   62139 logs.go:276] 0 containers: []
	W0416 01:01:53.608976   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:53.608984   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:53.609042   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:53.646538   62139 cri.go:89] found id: ""
	I0416 01:01:53.646573   62139 logs.go:276] 0 containers: []
	W0416 01:01:53.646585   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:53.646592   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:53.646657   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:53.687761   62139 cri.go:89] found id: ""
	I0416 01:01:53.687792   62139 logs.go:276] 0 containers: []
	W0416 01:01:53.687801   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:53.687809   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:53.687872   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:53.726126   62139 cri.go:89] found id: ""
	I0416 01:01:53.726161   62139 logs.go:276] 0 containers: []
	W0416 01:01:53.726169   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:53.726174   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:53.726224   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:53.762583   62139 cri.go:89] found id: ""
	I0416 01:01:53.762609   62139 logs.go:276] 0 containers: []
	W0416 01:01:53.762618   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:53.762625   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:53.762695   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:53.803685   62139 cri.go:89] found id: ""
	I0416 01:01:53.803715   62139 logs.go:276] 0 containers: []
	W0416 01:01:53.803726   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:53.803737   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:53.803751   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:53.862215   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:53.862255   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:53.877713   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:53.877743   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:53.953394   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:53.953422   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:53.953438   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:54.044657   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:54.044698   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:56.602100   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:56.616548   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:56.616632   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:56.653765   62139 cri.go:89] found id: ""
	I0416 01:01:56.653794   62139 logs.go:276] 0 containers: []
	W0416 01:01:56.653810   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:56.653817   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:56.653879   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:56.691394   62139 cri.go:89] found id: ""
	I0416 01:01:56.691416   62139 logs.go:276] 0 containers: []
	W0416 01:01:56.691422   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:56.691428   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:56.691475   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:56.728995   62139 cri.go:89] found id: ""
	I0416 01:01:56.729017   62139 logs.go:276] 0 containers: []
	W0416 01:01:56.729024   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:56.729029   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:56.729078   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:56.769119   62139 cri.go:89] found id: ""
	I0416 01:01:56.769184   62139 logs.go:276] 0 containers: []
	W0416 01:01:56.769196   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:56.769204   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:56.769270   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:56.810562   62139 cri.go:89] found id: ""
	I0416 01:01:56.810589   62139 logs.go:276] 0 containers: []
	W0416 01:01:56.810597   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:56.810608   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:56.810669   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:56.849367   62139 cri.go:89] found id: ""
	I0416 01:01:56.849392   62139 logs.go:276] 0 containers: []
	W0416 01:01:56.849399   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:56.849405   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:56.849464   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:56.887330   62139 cri.go:89] found id: ""
	I0416 01:01:56.887359   62139 logs.go:276] 0 containers: []
	W0416 01:01:56.887370   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:56.887378   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:56.887461   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:56.926636   62139 cri.go:89] found id: ""
	I0416 01:01:56.926664   62139 logs.go:276] 0 containers: []
	W0416 01:01:56.926672   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:56.926682   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:56.926697   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:56.981836   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:56.981875   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:56.996385   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:56.996411   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:57.071026   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:57.071054   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:57.071070   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:54.219668   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:56.221212   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:54.829549   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:56.831452   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:54.619864   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:56.620968   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:57.155430   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:57.155466   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:59.701547   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:59.714465   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:59.714526   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:59.759791   62139 cri.go:89] found id: ""
	I0416 01:01:59.759830   62139 logs.go:276] 0 containers: []
	W0416 01:01:59.759841   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:59.759849   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:59.759914   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:59.813303   62139 cri.go:89] found id: ""
	I0416 01:01:59.813334   62139 logs.go:276] 0 containers: []
	W0416 01:01:59.813343   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:59.813353   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:59.813406   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:59.872291   62139 cri.go:89] found id: ""
	I0416 01:01:59.872328   62139 logs.go:276] 0 containers: []
	W0416 01:01:59.872338   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:59.872347   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:59.872423   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:59.910397   62139 cri.go:89] found id: ""
	I0416 01:01:59.910425   62139 logs.go:276] 0 containers: []
	W0416 01:01:59.910437   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:59.910444   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:59.910512   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:59.953656   62139 cri.go:89] found id: ""
	I0416 01:01:59.953685   62139 logs.go:276] 0 containers: []
	W0416 01:01:59.953695   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:59.953703   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:59.953779   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:59.993193   62139 cri.go:89] found id: ""
	I0416 01:01:59.993220   62139 logs.go:276] 0 containers: []
	W0416 01:01:59.993229   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:59.993239   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:59.993298   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:00.030205   62139 cri.go:89] found id: ""
	I0416 01:02:00.030229   62139 logs.go:276] 0 containers: []
	W0416 01:02:00.030237   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:00.030242   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:00.030302   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:00.068160   62139 cri.go:89] found id: ""
	I0416 01:02:00.068189   62139 logs.go:276] 0 containers: []
	W0416 01:02:00.068199   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:00.068211   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:00.068226   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:00.149383   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:00.149416   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:00.188000   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:00.188025   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:00.240522   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:00.240550   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:00.254189   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:00.254215   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:00.331483   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:58.721272   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:01.220698   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:59.329440   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:01.830408   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:59.122269   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:01.619839   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:02.832656   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:02.846826   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:02.846907   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:02.883397   62139 cri.go:89] found id: ""
	I0416 01:02:02.883428   62139 logs.go:276] 0 containers: []
	W0416 01:02:02.883439   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:02.883446   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:02.883499   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:02.923686   62139 cri.go:89] found id: ""
	I0416 01:02:02.923708   62139 logs.go:276] 0 containers: []
	W0416 01:02:02.923715   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:02.923719   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:02.923770   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:02.964155   62139 cri.go:89] found id: ""
	I0416 01:02:02.964180   62139 logs.go:276] 0 containers: []
	W0416 01:02:02.964188   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:02.964193   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:02.964247   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:03.005357   62139 cri.go:89] found id: ""
	I0416 01:02:03.005386   62139 logs.go:276] 0 containers: []
	W0416 01:02:03.005396   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:03.005403   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:03.005464   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:03.047221   62139 cri.go:89] found id: ""
	I0416 01:02:03.047246   62139 logs.go:276] 0 containers: []
	W0416 01:02:03.047257   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:03.047264   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:03.047326   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:03.088737   62139 cri.go:89] found id: ""
	I0416 01:02:03.088767   62139 logs.go:276] 0 containers: []
	W0416 01:02:03.088776   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:03.088784   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:03.088846   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:03.129756   62139 cri.go:89] found id: ""
	I0416 01:02:03.129778   62139 logs.go:276] 0 containers: []
	W0416 01:02:03.129785   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:03.129790   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:03.129837   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:03.169422   62139 cri.go:89] found id: ""
	I0416 01:02:03.169447   62139 logs.go:276] 0 containers: []
	W0416 01:02:03.169459   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:03.169468   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:03.169478   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:03.246485   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:03.246503   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:03.246514   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:03.326498   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:03.326533   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:03.372788   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:03.372817   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:03.428561   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:03.428603   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:05.944274   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:05.957744   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:05.957813   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:05.993348   62139 cri.go:89] found id: ""
	I0416 01:02:05.993400   62139 logs.go:276] 0 containers: []
	W0416 01:02:05.993411   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:05.993430   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:05.993497   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:06.034811   62139 cri.go:89] found id: ""
	I0416 01:02:06.034848   62139 logs.go:276] 0 containers: []
	W0416 01:02:06.034859   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:06.034866   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:06.034953   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:06.079047   62139 cri.go:89] found id: ""
	I0416 01:02:06.079070   62139 logs.go:276] 0 containers: []
	W0416 01:02:06.079078   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:06.079082   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:06.079127   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:06.122494   62139 cri.go:89] found id: ""
	I0416 01:02:06.122513   62139 logs.go:276] 0 containers: []
	W0416 01:02:06.122520   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:06.122525   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:06.122589   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:06.163436   62139 cri.go:89] found id: ""
	I0416 01:02:06.163461   62139 logs.go:276] 0 containers: []
	W0416 01:02:06.163468   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:06.163473   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:06.163534   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:06.205036   62139 cri.go:89] found id: ""
	I0416 01:02:06.205064   62139 logs.go:276] 0 containers: []
	W0416 01:02:06.205072   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:06.205077   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:06.205134   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:06.242056   62139 cri.go:89] found id: ""
	I0416 01:02:06.242084   62139 logs.go:276] 0 containers: []
	W0416 01:02:06.242094   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:06.242107   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:06.242166   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:06.278604   62139 cri.go:89] found id: ""
	I0416 01:02:06.278636   62139 logs.go:276] 0 containers: []
	W0416 01:02:06.278646   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:06.278656   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:06.278671   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:06.334631   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:06.334658   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:06.348199   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:06.348227   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:06.424774   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:06.424793   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:06.424804   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:06.503509   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:06.503542   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:03.221238   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:05.721006   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:04.329267   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:06.329476   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:03.620957   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:06.121348   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:09.046665   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:09.061072   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:09.061173   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:09.097482   62139 cri.go:89] found id: ""
	I0416 01:02:09.097514   62139 logs.go:276] 0 containers: []
	W0416 01:02:09.097524   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:09.097543   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:09.097613   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:09.135124   62139 cri.go:89] found id: ""
	I0416 01:02:09.135157   62139 logs.go:276] 0 containers: []
	W0416 01:02:09.135168   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:09.135175   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:09.135236   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:09.173887   62139 cri.go:89] found id: ""
	I0416 01:02:09.173912   62139 logs.go:276] 0 containers: []
	W0416 01:02:09.173920   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:09.173925   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:09.173983   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:09.209658   62139 cri.go:89] found id: ""
	I0416 01:02:09.209683   62139 logs.go:276] 0 containers: []
	W0416 01:02:09.209691   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:09.209702   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:09.209763   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:09.249149   62139 cri.go:89] found id: ""
	I0416 01:02:09.249200   62139 logs.go:276] 0 containers: []
	W0416 01:02:09.249209   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:09.249214   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:09.249292   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:09.291447   62139 cri.go:89] found id: ""
	I0416 01:02:09.291477   62139 logs.go:276] 0 containers: []
	W0416 01:02:09.291487   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:09.291494   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:09.291553   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:09.329248   62139 cri.go:89] found id: ""
	I0416 01:02:09.329271   62139 logs.go:276] 0 containers: []
	W0416 01:02:09.329281   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:09.329288   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:09.329345   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:09.365585   62139 cri.go:89] found id: ""
	I0416 01:02:09.365613   62139 logs.go:276] 0 containers: []
	W0416 01:02:09.365622   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:09.365632   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:09.365645   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:09.418998   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:09.419031   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:09.433531   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:09.433558   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:09.508543   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:09.508573   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:09.508588   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:09.593889   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:09.593930   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:08.220704   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:10.221232   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:12.224680   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:08.330281   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:10.828856   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:08.619632   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:10.619780   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:12.621319   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:12.139020   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:12.154268   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:12.154349   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:12.192717   62139 cri.go:89] found id: ""
	I0416 01:02:12.192746   62139 logs.go:276] 0 containers: []
	W0416 01:02:12.192758   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:12.192765   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:12.192832   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:12.230633   62139 cri.go:89] found id: ""
	I0416 01:02:12.230662   62139 logs.go:276] 0 containers: []
	W0416 01:02:12.230674   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:12.230681   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:12.230729   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:12.271108   62139 cri.go:89] found id: ""
	I0416 01:02:12.271150   62139 logs.go:276] 0 containers: []
	W0416 01:02:12.271161   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:12.271168   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:12.271233   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:12.310161   62139 cri.go:89] found id: ""
	I0416 01:02:12.310186   62139 logs.go:276] 0 containers: []
	W0416 01:02:12.310194   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:12.310201   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:12.310272   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:12.349638   62139 cri.go:89] found id: ""
	I0416 01:02:12.349668   62139 logs.go:276] 0 containers: []
	W0416 01:02:12.349678   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:12.349686   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:12.349766   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:12.391565   62139 cri.go:89] found id: ""
	I0416 01:02:12.391597   62139 logs.go:276] 0 containers: []
	W0416 01:02:12.391607   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:12.391620   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:12.391681   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:12.429142   62139 cri.go:89] found id: ""
	I0416 01:02:12.429186   62139 logs.go:276] 0 containers: []
	W0416 01:02:12.429195   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:12.429200   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:12.429249   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:12.466209   62139 cri.go:89] found id: ""
	I0416 01:02:12.466238   62139 logs.go:276] 0 containers: []
	W0416 01:02:12.466249   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:12.466260   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:12.466277   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:12.551333   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:12.551355   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:12.551367   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:12.634465   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:12.634496   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:12.675198   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:12.675231   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:12.728933   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:12.728962   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:15.243521   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:15.258589   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:15.258657   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:15.301901   62139 cri.go:89] found id: ""
	I0416 01:02:15.301931   62139 logs.go:276] 0 containers: []
	W0416 01:02:15.301943   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:15.301951   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:15.302006   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:15.345932   62139 cri.go:89] found id: ""
	I0416 01:02:15.346011   62139 logs.go:276] 0 containers: []
	W0416 01:02:15.346032   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:15.346043   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:15.346113   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:15.387957   62139 cri.go:89] found id: ""
	I0416 01:02:15.387983   62139 logs.go:276] 0 containers: []
	W0416 01:02:15.387991   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:15.387996   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:15.388044   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:15.424887   62139 cri.go:89] found id: ""
	I0416 01:02:15.424916   62139 logs.go:276] 0 containers: []
	W0416 01:02:15.424927   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:15.424934   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:15.424996   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:15.460088   62139 cri.go:89] found id: ""
	I0416 01:02:15.460113   62139 logs.go:276] 0 containers: []
	W0416 01:02:15.460120   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:15.460125   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:15.460172   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:15.495567   62139 cri.go:89] found id: ""
	I0416 01:02:15.495597   62139 logs.go:276] 0 containers: []
	W0416 01:02:15.495607   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:15.495615   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:15.495692   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:15.533901   62139 cri.go:89] found id: ""
	I0416 01:02:15.533931   62139 logs.go:276] 0 containers: []
	W0416 01:02:15.533940   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:15.533946   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:15.533996   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:15.576665   62139 cri.go:89] found id: ""
	I0416 01:02:15.576692   62139 logs.go:276] 0 containers: []
	W0416 01:02:15.576702   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:15.576712   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:15.576728   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:15.626933   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:15.626961   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:15.681627   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:15.681656   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:15.695572   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:15.695608   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:15.768910   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:15.768934   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:15.768945   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:14.720472   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:16.722418   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:12.830086   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:14.830540   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:17.329838   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:15.120394   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:17.120523   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:18.349776   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:18.363499   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:18.363568   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:18.404210   62139 cri.go:89] found id: ""
	I0416 01:02:18.404234   62139 logs.go:276] 0 containers: []
	W0416 01:02:18.404241   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:18.404246   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:18.404304   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:18.444610   62139 cri.go:89] found id: ""
	I0416 01:02:18.444641   62139 logs.go:276] 0 containers: []
	W0416 01:02:18.444651   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:18.444658   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:18.444722   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:18.483134   62139 cri.go:89] found id: ""
	I0416 01:02:18.483160   62139 logs.go:276] 0 containers: []
	W0416 01:02:18.483168   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:18.483173   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:18.483220   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:18.522120   62139 cri.go:89] found id: ""
	I0416 01:02:18.522144   62139 logs.go:276] 0 containers: []
	W0416 01:02:18.522156   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:18.522161   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:18.522205   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:18.566293   62139 cri.go:89] found id: ""
	I0416 01:02:18.566319   62139 logs.go:276] 0 containers: []
	W0416 01:02:18.566327   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:18.566332   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:18.566391   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:18.604000   62139 cri.go:89] found id: ""
	I0416 01:02:18.604028   62139 logs.go:276] 0 containers: []
	W0416 01:02:18.604036   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:18.604042   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:18.604089   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:18.641967   62139 cri.go:89] found id: ""
	I0416 01:02:18.641999   62139 logs.go:276] 0 containers: []
	W0416 01:02:18.642009   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:18.642016   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:18.642080   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:18.683494   62139 cri.go:89] found id: ""
	I0416 01:02:18.683533   62139 logs.go:276] 0 containers: []
	W0416 01:02:18.683544   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:18.683555   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:18.683570   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:18.761674   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:18.761699   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:18.761714   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:18.849959   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:18.849995   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:18.895534   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:18.895570   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:18.949287   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:18.949320   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:21.464393   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:21.479019   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:21.479087   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:21.516262   62139 cri.go:89] found id: ""
	I0416 01:02:21.516303   62139 logs.go:276] 0 containers: []
	W0416 01:02:21.516313   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:21.516323   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:21.516385   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:21.554279   62139 cri.go:89] found id: ""
	I0416 01:02:21.554315   62139 logs.go:276] 0 containers: []
	W0416 01:02:21.554327   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:21.554334   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:21.554393   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:21.590889   62139 cri.go:89] found id: ""
	I0416 01:02:21.590918   62139 logs.go:276] 0 containers: []
	W0416 01:02:21.590928   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:21.590935   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:21.590996   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:21.629925   62139 cri.go:89] found id: ""
	I0416 01:02:21.629955   62139 logs.go:276] 0 containers: []
	W0416 01:02:21.629965   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:21.629972   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:21.630032   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:21.667947   62139 cri.go:89] found id: ""
	I0416 01:02:21.667975   62139 logs.go:276] 0 containers: []
	W0416 01:02:21.667983   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:21.667988   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:21.668045   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:21.706275   62139 cri.go:89] found id: ""
	I0416 01:02:21.706308   62139 logs.go:276] 0 containers: []
	W0416 01:02:21.706318   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:21.706326   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:21.706392   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:21.748077   62139 cri.go:89] found id: ""
	I0416 01:02:21.748106   62139 logs.go:276] 0 containers: []
	W0416 01:02:21.748117   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:21.748123   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:21.748170   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:21.785441   62139 cri.go:89] found id: ""
	I0416 01:02:21.785467   62139 logs.go:276] 0 containers: []
	W0416 01:02:21.785477   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:21.785488   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:21.785510   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:21.824702   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:21.824735   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:21.882780   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:21.882810   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:21.897211   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:21.897236   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:21.971882   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:21.971903   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:21.971915   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:19.220913   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:21.721219   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:19.330086   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:21.836759   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:19.620521   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:21.621229   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
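
The interleaved pod_ready lines (PIDs 61267, 61500 and 62747) come from other profiles in the same run that are polling their metrics-server pods, which stay NotReady for the whole window. A manual equivalent of that poll is sketched below; the k8s-app=metrics-server label and the <profile> context name are assumptions for illustration, not values taken from this log:

    # sketch: check the Ready condition of the metrics-server pod by hand
    kubectl --context <profile> -n kube-system get pods -l k8s-app=metrics-server \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'
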
	I0416 01:02:24.550749   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:24.564951   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:24.565024   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:24.605025   62139 cri.go:89] found id: ""
	I0416 01:02:24.605055   62139 logs.go:276] 0 containers: []
	W0416 01:02:24.605063   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:24.605068   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:24.605142   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:24.640727   62139 cri.go:89] found id: ""
	I0416 01:02:24.640757   62139 logs.go:276] 0 containers: []
	W0416 01:02:24.640764   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:24.640769   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:24.640822   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:24.678031   62139 cri.go:89] found id: ""
	I0416 01:02:24.678060   62139 logs.go:276] 0 containers: []
	W0416 01:02:24.678068   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:24.678074   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:24.678125   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:24.714854   62139 cri.go:89] found id: ""
	I0416 01:02:24.714896   62139 logs.go:276] 0 containers: []
	W0416 01:02:24.714907   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:24.714914   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:24.714981   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:24.752129   62139 cri.go:89] found id: ""
	I0416 01:02:24.752158   62139 logs.go:276] 0 containers: []
	W0416 01:02:24.752168   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:24.752177   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:24.752243   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:24.788507   62139 cri.go:89] found id: ""
	I0416 01:02:24.788541   62139 logs.go:276] 0 containers: []
	W0416 01:02:24.788551   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:24.788557   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:24.788617   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:24.828379   62139 cri.go:89] found id: ""
	I0416 01:02:24.828409   62139 logs.go:276] 0 containers: []
	W0416 01:02:24.828419   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:24.828427   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:24.828486   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:24.865676   62139 cri.go:89] found id: ""
	I0416 01:02:24.865707   62139 logs.go:276] 0 containers: []
	W0416 01:02:24.865717   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:24.865725   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:24.865736   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:24.941057   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:24.941079   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:24.941091   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:25.025937   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:25.025979   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:25.065828   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:25.065871   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:25.128004   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:25.128039   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:24.221435   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:26.720181   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:24.329677   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:26.329901   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:24.119781   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:26.120316   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:27.643201   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:27.658601   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:27.658660   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:27.700627   62139 cri.go:89] found id: ""
	I0416 01:02:27.700650   62139 logs.go:276] 0 containers: []
	W0416 01:02:27.700657   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:27.700662   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:27.700718   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:27.734929   62139 cri.go:89] found id: ""
	I0416 01:02:27.734957   62139 logs.go:276] 0 containers: []
	W0416 01:02:27.734966   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:27.734975   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:27.735046   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:27.772412   62139 cri.go:89] found id: ""
	I0416 01:02:27.772440   62139 logs.go:276] 0 containers: []
	W0416 01:02:27.772448   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:27.772454   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:27.772514   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:27.809436   62139 cri.go:89] found id: ""
	I0416 01:02:27.809459   62139 logs.go:276] 0 containers: []
	W0416 01:02:27.809466   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:27.809471   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:27.809518   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:27.845717   62139 cri.go:89] found id: ""
	I0416 01:02:27.845746   62139 logs.go:276] 0 containers: []
	W0416 01:02:27.845756   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:27.845764   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:27.845825   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:27.887224   62139 cri.go:89] found id: ""
	I0416 01:02:27.887250   62139 logs.go:276] 0 containers: []
	W0416 01:02:27.887260   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:27.887267   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:27.887334   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:27.920945   62139 cri.go:89] found id: ""
	I0416 01:02:27.920974   62139 logs.go:276] 0 containers: []
	W0416 01:02:27.920984   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:27.920992   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:27.921066   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:27.960933   62139 cri.go:89] found id: ""
	I0416 01:02:27.960959   62139 logs.go:276] 0 containers: []
	W0416 01:02:27.960966   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:27.960974   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:27.960985   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:28.013003   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:28.013033   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:28.026599   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:28.026626   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:28.117200   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:28.117226   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:28.117240   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:28.198003   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:28.198036   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:30.741379   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:30.757102   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:30.757199   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:30.798038   62139 cri.go:89] found id: ""
	I0416 01:02:30.798068   62139 logs.go:276] 0 containers: []
	W0416 01:02:30.798075   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:30.798080   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:30.798137   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:30.844840   62139 cri.go:89] found id: ""
	I0416 01:02:30.844862   62139 logs.go:276] 0 containers: []
	W0416 01:02:30.844871   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:30.844877   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:30.844944   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:30.883816   62139 cri.go:89] found id: ""
	I0416 01:02:30.883841   62139 logs.go:276] 0 containers: []
	W0416 01:02:30.883849   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:30.883855   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:30.883903   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:30.919353   62139 cri.go:89] found id: ""
	I0416 01:02:30.919380   62139 logs.go:276] 0 containers: []
	W0416 01:02:30.919389   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:30.919396   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:30.919457   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:30.957036   62139 cri.go:89] found id: ""
	I0416 01:02:30.957061   62139 logs.go:276] 0 containers: []
	W0416 01:02:30.957069   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:30.957084   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:30.957143   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:30.993179   62139 cri.go:89] found id: ""
	I0416 01:02:30.993211   62139 logs.go:276] 0 containers: []
	W0416 01:02:30.993220   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:30.993228   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:30.993315   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:31.032634   62139 cri.go:89] found id: ""
	I0416 01:02:31.032661   62139 logs.go:276] 0 containers: []
	W0416 01:02:31.032670   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:31.032684   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:31.032753   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:31.069345   62139 cri.go:89] found id: ""
	I0416 01:02:31.069373   62139 logs.go:276] 0 containers: []
	W0416 01:02:31.069382   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:31.069392   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:31.069408   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:31.123989   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:31.124017   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:31.140998   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:31.141032   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:31.217496   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:31.218063   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:31.218098   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:31.296811   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:31.296858   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:28.720502   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:30.720709   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:28.329978   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:30.829406   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:28.121200   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:30.620659   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:33.842516   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:33.872440   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:33.872518   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:33.909287   62139 cri.go:89] found id: ""
	I0416 01:02:33.909314   62139 logs.go:276] 0 containers: []
	W0416 01:02:33.909324   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:33.909329   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:33.909388   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:33.947531   62139 cri.go:89] found id: ""
	I0416 01:02:33.947566   62139 logs.go:276] 0 containers: []
	W0416 01:02:33.947576   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:33.947584   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:33.947642   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:33.990084   62139 cri.go:89] found id: ""
	I0416 01:02:33.990118   62139 logs.go:276] 0 containers: []
	W0416 01:02:33.990129   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:33.990136   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:33.990200   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:34.024121   62139 cri.go:89] found id: ""
	I0416 01:02:34.024151   62139 logs.go:276] 0 containers: []
	W0416 01:02:34.024159   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:34.024165   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:34.024218   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:34.061075   62139 cri.go:89] found id: ""
	I0416 01:02:34.061104   62139 logs.go:276] 0 containers: []
	W0416 01:02:34.061111   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:34.061116   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:34.061179   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:34.097887   62139 cri.go:89] found id: ""
	I0416 01:02:34.097928   62139 logs.go:276] 0 containers: []
	W0416 01:02:34.097938   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:34.097946   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:34.098007   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:34.135541   62139 cri.go:89] found id: ""
	I0416 01:02:34.135567   62139 logs.go:276] 0 containers: []
	W0416 01:02:34.135577   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:34.135585   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:34.135637   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:34.170884   62139 cri.go:89] found id: ""
	I0416 01:02:34.170910   62139 logs.go:276] 0 containers: []
	W0416 01:02:34.170920   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:34.170931   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:34.170946   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:34.223465   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:34.223494   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:34.238898   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:34.238929   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:34.316916   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:34.316946   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:34.316962   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:34.401564   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:34.401600   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:36.945789   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:36.959707   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:36.959774   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:36.994463   62139 cri.go:89] found id: ""
	I0416 01:02:36.994497   62139 logs.go:276] 0 containers: []
	W0416 01:02:36.994508   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:36.994515   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:36.994579   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:37.028847   62139 cri.go:89] found id: ""
	I0416 01:02:37.028877   62139 logs.go:276] 0 containers: []
	W0416 01:02:37.028887   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:37.028893   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:37.028954   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:37.061841   62139 cri.go:89] found id: ""
	I0416 01:02:37.061872   62139 logs.go:276] 0 containers: []
	W0416 01:02:37.061882   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:37.061889   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:37.061954   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:37.098460   62139 cri.go:89] found id: ""
	I0416 01:02:37.098485   62139 logs.go:276] 0 containers: []
	W0416 01:02:37.098495   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:37.098502   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:37.098569   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:33.220794   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:35.221650   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:37.222563   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:32.829517   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:34.829762   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:36.831773   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:33.121842   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:35.620647   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:37.620795   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:37.133016   62139 cri.go:89] found id: ""
	I0416 01:02:37.133044   62139 logs.go:276] 0 containers: []
	W0416 01:02:37.133053   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:37.133059   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:37.133122   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:37.170252   62139 cri.go:89] found id: ""
	I0416 01:02:37.170276   62139 logs.go:276] 0 containers: []
	W0416 01:02:37.170286   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:37.170293   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:37.170354   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:37.206114   62139 cri.go:89] found id: ""
	I0416 01:02:37.206141   62139 logs.go:276] 0 containers: []
	W0416 01:02:37.206148   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:37.206153   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:37.206208   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:37.241353   62139 cri.go:89] found id: ""
	I0416 01:02:37.241383   62139 logs.go:276] 0 containers: []
	W0416 01:02:37.241395   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:37.241405   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:37.241429   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:37.293452   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:37.293483   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:37.309885   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:37.309926   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:37.385455   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:37.385481   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:37.385496   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:37.463064   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:37.463101   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
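
Every "describe nodes" attempt in this window ends with "The connection to the server localhost:8443 was refused", which is consistent with crictl finding no kube-apiserver container: nothing is listening on the apiserver port at all. A quick way to confirm that on the node is sketched below; these are standard tools assumed to be installed, not commands taken from this log:

    # sketch: confirm nothing serves the apiserver port on the node
    sudo ss -ltnp | grep -w 8443 || echo "nothing listening on 8443"
    curl -ksS https://localhost:8443/healthz || true   # expect 'connection refused' while the apiserver is down
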
	I0416 01:02:40.008717   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:40.022249   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:40.022327   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:40.064444   62139 cri.go:89] found id: ""
	I0416 01:02:40.064479   62139 logs.go:276] 0 containers: []
	W0416 01:02:40.064490   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:40.064497   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:40.064545   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:40.100326   62139 cri.go:89] found id: ""
	I0416 01:02:40.100353   62139 logs.go:276] 0 containers: []
	W0416 01:02:40.100361   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:40.100366   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:40.100413   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:40.138818   62139 cri.go:89] found id: ""
	I0416 01:02:40.138857   62139 logs.go:276] 0 containers: []
	W0416 01:02:40.138869   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:40.138878   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:40.138928   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:40.184203   62139 cri.go:89] found id: ""
	I0416 01:02:40.184234   62139 logs.go:276] 0 containers: []
	W0416 01:02:40.184244   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:40.184252   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:40.184311   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:40.221968   62139 cri.go:89] found id: ""
	I0416 01:02:40.221991   62139 logs.go:276] 0 containers: []
	W0416 01:02:40.221998   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:40.222007   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:40.222088   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:40.265621   62139 cri.go:89] found id: ""
	I0416 01:02:40.265643   62139 logs.go:276] 0 containers: []
	W0416 01:02:40.265650   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:40.265657   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:40.265723   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:40.314121   62139 cri.go:89] found id: ""
	I0416 01:02:40.314152   62139 logs.go:276] 0 containers: []
	W0416 01:02:40.314163   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:40.314170   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:40.314229   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:40.359788   62139 cri.go:89] found id: ""
	I0416 01:02:40.359825   62139 logs.go:276] 0 containers: []
	W0416 01:02:40.359836   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:40.359849   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:40.359863   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:40.431678   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:40.431718   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:40.449847   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:40.449877   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:40.524271   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:40.524297   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:40.524309   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:40.601398   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:40.601433   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:39.720606   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:41.721437   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:39.330974   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:41.830050   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:40.120785   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:42.123996   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:43.145431   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:43.160269   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:43.160338   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:43.196603   62139 cri.go:89] found id: ""
	I0416 01:02:43.196637   62139 logs.go:276] 0 containers: []
	W0416 01:02:43.196648   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:43.196655   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:43.196716   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:43.235863   62139 cri.go:89] found id: ""
	I0416 01:02:43.235893   62139 logs.go:276] 0 containers: []
	W0416 01:02:43.235905   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:43.235911   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:43.235971   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:43.271408   62139 cri.go:89] found id: ""
	I0416 01:02:43.271437   62139 logs.go:276] 0 containers: []
	W0416 01:02:43.271444   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:43.271450   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:43.271512   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:43.310931   62139 cri.go:89] found id: ""
	I0416 01:02:43.310958   62139 logs.go:276] 0 containers: []
	W0416 01:02:43.310965   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:43.310971   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:43.311032   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:43.347472   62139 cri.go:89] found id: ""
	I0416 01:02:43.347502   62139 logs.go:276] 0 containers: []
	W0416 01:02:43.347512   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:43.347520   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:43.347581   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:43.387326   62139 cri.go:89] found id: ""
	I0416 01:02:43.387361   62139 logs.go:276] 0 containers: []
	W0416 01:02:43.387372   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:43.387429   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:43.387506   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:43.425099   62139 cri.go:89] found id: ""
	I0416 01:02:43.425122   62139 logs.go:276] 0 containers: []
	W0416 01:02:43.425130   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:43.425141   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:43.425208   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:43.461364   62139 cri.go:89] found id: ""
	I0416 01:02:43.461397   62139 logs.go:276] 0 containers: []
	W0416 01:02:43.461408   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:43.461419   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:43.461434   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:43.514520   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:43.514556   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:43.528740   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:43.528777   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:43.599010   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:43.599035   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:43.599051   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:43.682913   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:43.682959   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:46.231398   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:46.260247   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:46.260338   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:46.304498   62139 cri.go:89] found id: ""
	I0416 01:02:46.304521   62139 logs.go:276] 0 containers: []
	W0416 01:02:46.304528   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:46.304534   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:46.304600   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:46.364055   62139 cri.go:89] found id: ""
	I0416 01:02:46.364081   62139 logs.go:276] 0 containers: []
	W0416 01:02:46.364090   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:46.364098   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:46.364167   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:46.412395   62139 cri.go:89] found id: ""
	I0416 01:02:46.412437   62139 logs.go:276] 0 containers: []
	W0416 01:02:46.412475   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:46.412510   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:46.412584   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:46.453669   62139 cri.go:89] found id: ""
	I0416 01:02:46.453698   62139 logs.go:276] 0 containers: []
	W0416 01:02:46.453709   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:46.453716   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:46.453766   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:46.490667   62139 cri.go:89] found id: ""
	I0416 01:02:46.490699   62139 logs.go:276] 0 containers: []
	W0416 01:02:46.490709   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:46.490715   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:46.490766   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:46.529405   62139 cri.go:89] found id: ""
	I0416 01:02:46.529443   62139 logs.go:276] 0 containers: []
	W0416 01:02:46.529460   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:46.529467   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:46.529527   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:46.565359   62139 cri.go:89] found id: ""
	I0416 01:02:46.565384   62139 logs.go:276] 0 containers: []
	W0416 01:02:46.565391   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:46.565396   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:46.565451   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:46.609381   62139 cri.go:89] found id: ""
	I0416 01:02:46.609406   62139 logs.go:276] 0 containers: []
	W0416 01:02:46.609413   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:46.609421   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:46.609432   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:46.663080   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:46.663112   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:46.677303   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:46.677338   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:46.750134   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:46.750163   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:46.750175   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:46.829395   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:46.829434   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:43.721477   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:46.220462   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:43.831829   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:46.329333   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:44.619712   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:46.621271   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:49.374356   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:49.390674   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:49.390753   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:49.427968   62139 cri.go:89] found id: ""
	I0416 01:02:49.427993   62139 logs.go:276] 0 containers: []
	W0416 01:02:49.428000   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:49.428005   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:49.428058   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:49.461821   62139 cri.go:89] found id: ""
	I0416 01:02:49.461850   62139 logs.go:276] 0 containers: []
	W0416 01:02:49.461857   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:49.461863   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:49.461918   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:49.496305   62139 cri.go:89] found id: ""
	I0416 01:02:49.496356   62139 logs.go:276] 0 containers: []
	W0416 01:02:49.496364   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:49.496369   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:49.496429   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:49.536096   62139 cri.go:89] found id: ""
	I0416 01:02:49.536122   62139 logs.go:276] 0 containers: []
	W0416 01:02:49.536129   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:49.536134   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:49.536194   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:49.572078   62139 cri.go:89] found id: ""
	I0416 01:02:49.572106   62139 logs.go:276] 0 containers: []
	W0416 01:02:49.572115   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:49.572122   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:49.572181   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:49.607803   62139 cri.go:89] found id: ""
	I0416 01:02:49.607835   62139 logs.go:276] 0 containers: []
	W0416 01:02:49.607847   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:49.607861   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:49.607915   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:49.651245   62139 cri.go:89] found id: ""
	I0416 01:02:49.651272   62139 logs.go:276] 0 containers: []
	W0416 01:02:49.651280   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:49.651285   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:49.651332   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:49.693587   62139 cri.go:89] found id: ""
	I0416 01:02:49.693612   62139 logs.go:276] 0 containers: []
	W0416 01:02:49.693622   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:49.693632   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:49.693646   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:49.750003   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:49.750032   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:49.764447   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:49.764472   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:49.844739   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:49.844764   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:49.844780   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:49.924260   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:49.924294   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:48.220753   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:50.220986   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:48.330946   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:50.829409   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:49.120516   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:51.619516   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:52.467399   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:52.481656   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:52.481729   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:52.518506   62139 cri.go:89] found id: ""
	I0416 01:02:52.518531   62139 logs.go:276] 0 containers: []
	W0416 01:02:52.518537   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:52.518544   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:52.518599   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:52.554799   62139 cri.go:89] found id: ""
	I0416 01:02:52.554820   62139 logs.go:276] 0 containers: []
	W0416 01:02:52.554827   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:52.554832   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:52.554888   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:52.597236   62139 cri.go:89] found id: ""
	I0416 01:02:52.597265   62139 logs.go:276] 0 containers: []
	W0416 01:02:52.597272   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:52.597278   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:52.597335   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:52.635544   62139 cri.go:89] found id: ""
	I0416 01:02:52.635567   62139 logs.go:276] 0 containers: []
	W0416 01:02:52.635578   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:52.635585   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:52.635639   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:52.672715   62139 cri.go:89] found id: ""
	I0416 01:02:52.672739   62139 logs.go:276] 0 containers: []
	W0416 01:02:52.672746   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:52.672751   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:52.672808   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:52.711600   62139 cri.go:89] found id: ""
	I0416 01:02:52.711631   62139 logs.go:276] 0 containers: []
	W0416 01:02:52.711640   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:52.711648   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:52.711718   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:52.750372   62139 cri.go:89] found id: ""
	I0416 01:02:52.750405   62139 logs.go:276] 0 containers: []
	W0416 01:02:52.750416   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:52.750423   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:52.750486   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:52.786651   62139 cri.go:89] found id: ""
	I0416 01:02:52.786678   62139 logs.go:276] 0 containers: []
	W0416 01:02:52.786688   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:52.786698   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:52.786712   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:52.840262   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:52.840296   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:52.854734   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:52.854762   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:52.931182   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:52.931211   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:52.931226   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:53.007023   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:53.007061   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:55.548305   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:55.562483   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:55.562562   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:55.599480   62139 cri.go:89] found id: ""
	I0416 01:02:55.599504   62139 logs.go:276] 0 containers: []
	W0416 01:02:55.599511   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:55.599517   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:55.599573   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:55.636832   62139 cri.go:89] found id: ""
	I0416 01:02:55.636862   62139 logs.go:276] 0 containers: []
	W0416 01:02:55.636873   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:55.636879   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:55.636940   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:55.676211   62139 cri.go:89] found id: ""
	I0416 01:02:55.676240   62139 logs.go:276] 0 containers: []
	W0416 01:02:55.676250   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:55.676256   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:55.676318   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:55.713498   62139 cri.go:89] found id: ""
	I0416 01:02:55.713527   62139 logs.go:276] 0 containers: []
	W0416 01:02:55.713537   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:55.713544   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:55.713604   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:55.754239   62139 cri.go:89] found id: ""
	I0416 01:02:55.754276   62139 logs.go:276] 0 containers: []
	W0416 01:02:55.754284   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:55.754301   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:55.754355   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:55.792073   62139 cri.go:89] found id: ""
	I0416 01:02:55.792106   62139 logs.go:276] 0 containers: []
	W0416 01:02:55.792117   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:55.792125   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:55.792191   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:55.829635   62139 cri.go:89] found id: ""
	I0416 01:02:55.829665   62139 logs.go:276] 0 containers: []
	W0416 01:02:55.829676   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:55.829683   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:55.829742   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:55.876417   62139 cri.go:89] found id: ""
	I0416 01:02:55.876443   62139 logs.go:276] 0 containers: []
	W0416 01:02:55.876450   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:55.876458   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:55.876471   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:55.926670   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:55.926707   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:55.941660   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:55.941696   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:56.018776   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:56.018806   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:56.018820   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:56.097335   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:56.097378   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:52.720703   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:55.221614   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:52.830970   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:55.329886   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:53.620969   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:56.122135   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:58.642188   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:58.655537   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:58.655605   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:58.692091   62139 cri.go:89] found id: ""
	I0416 01:02:58.692116   62139 logs.go:276] 0 containers: []
	W0416 01:02:58.692124   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:58.692129   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:58.692191   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:58.729434   62139 cri.go:89] found id: ""
	I0416 01:02:58.729461   62139 logs.go:276] 0 containers: []
	W0416 01:02:58.729472   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:58.729491   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:58.729568   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:58.765879   62139 cri.go:89] found id: ""
	I0416 01:02:58.765907   62139 logs.go:276] 0 containers: []
	W0416 01:02:58.765916   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:58.765924   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:58.765987   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:58.802285   62139 cri.go:89] found id: ""
	I0416 01:02:58.802323   62139 logs.go:276] 0 containers: []
	W0416 01:02:58.802334   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:58.802342   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:58.802399   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:58.841357   62139 cri.go:89] found id: ""
	I0416 01:02:58.841385   62139 logs.go:276] 0 containers: []
	W0416 01:02:58.841396   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:58.841403   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:58.841464   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:58.876982   62139 cri.go:89] found id: ""
	I0416 01:02:58.877022   62139 logs.go:276] 0 containers: []
	W0416 01:02:58.877032   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:58.877040   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:58.877108   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:58.915563   62139 cri.go:89] found id: ""
	I0416 01:02:58.915596   62139 logs.go:276] 0 containers: []
	W0416 01:02:58.915607   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:58.915614   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:58.915683   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:58.951268   62139 cri.go:89] found id: ""
	I0416 01:02:58.951303   62139 logs.go:276] 0 containers: []
	W0416 01:02:58.951313   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:58.951324   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:58.951341   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:59.004673   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:59.004710   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:59.019393   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:59.019423   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:59.091587   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:59.091612   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:59.091632   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:59.169623   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:59.169655   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:01.710597   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:01.724394   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:01.724463   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:01.761577   62139 cri.go:89] found id: ""
	I0416 01:03:01.761605   62139 logs.go:276] 0 containers: []
	W0416 01:03:01.761616   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:01.761624   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:01.761684   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:01.797467   62139 cri.go:89] found id: ""
	I0416 01:03:01.797498   62139 logs.go:276] 0 containers: []
	W0416 01:03:01.797508   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:01.797515   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:01.797582   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:01.839910   62139 cri.go:89] found id: ""
	I0416 01:03:01.839940   62139 logs.go:276] 0 containers: []
	W0416 01:03:01.839950   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:01.839958   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:01.840019   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:01.879572   62139 cri.go:89] found id: ""
	I0416 01:03:01.879599   62139 logs.go:276] 0 containers: []
	W0416 01:03:01.879611   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:01.879617   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:01.879664   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:01.920190   62139 cri.go:89] found id: ""
	I0416 01:03:01.920222   62139 logs.go:276] 0 containers: []
	W0416 01:03:01.920234   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:01.920242   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:01.920300   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:01.957389   62139 cri.go:89] found id: ""
	I0416 01:03:01.957418   62139 logs.go:276] 0 containers: []
	W0416 01:03:01.957428   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:01.957436   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:01.957507   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:01.998730   62139 cri.go:89] found id: ""
	I0416 01:03:01.998754   62139 logs.go:276] 0 containers: []
	W0416 01:03:01.998762   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:01.998767   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:01.998812   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:02.036062   62139 cri.go:89] found id: ""
	I0416 01:03:02.036094   62139 logs.go:276] 0 containers: []
	W0416 01:03:02.036103   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:02.036112   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:02.036125   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:02.089109   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:02.089149   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:57.720792   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:00.219899   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:02.220048   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:57.832016   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:00.328867   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:02.330238   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:58.620416   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:01.121496   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:02.103312   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:02.103342   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:02.174034   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:02.174056   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:02.174069   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:02.249526   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:02.249555   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:04.795314   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:04.808294   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:04.808367   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:04.848795   62139 cri.go:89] found id: ""
	I0416 01:03:04.848825   62139 logs.go:276] 0 containers: []
	W0416 01:03:04.848849   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:04.848857   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:04.848928   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:04.886442   62139 cri.go:89] found id: ""
	I0416 01:03:04.886477   62139 logs.go:276] 0 containers: []
	W0416 01:03:04.886488   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:04.886502   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:04.886572   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:04.929183   62139 cri.go:89] found id: ""
	I0416 01:03:04.929215   62139 logs.go:276] 0 containers: []
	W0416 01:03:04.929226   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:04.929234   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:04.929297   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:04.965134   62139 cri.go:89] found id: ""
	I0416 01:03:04.965172   62139 logs.go:276] 0 containers: []
	W0416 01:03:04.965184   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:04.965191   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:04.965247   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:05.001346   62139 cri.go:89] found id: ""
	I0416 01:03:05.001373   62139 logs.go:276] 0 containers: []
	W0416 01:03:05.001381   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:05.001387   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:05.001434   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:05.039181   62139 cri.go:89] found id: ""
	I0416 01:03:05.039210   62139 logs.go:276] 0 containers: []
	W0416 01:03:05.039219   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:05.039224   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:05.039289   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:05.073451   62139 cri.go:89] found id: ""
	I0416 01:03:05.073479   62139 logs.go:276] 0 containers: []
	W0416 01:03:05.073487   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:05.073494   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:05.073555   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:05.108466   62139 cri.go:89] found id: ""
	I0416 01:03:05.108495   62139 logs.go:276] 0 containers: []
	W0416 01:03:05.108510   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:05.108520   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:05.108537   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:05.162725   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:05.162765   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:05.178152   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:05.178183   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:05.255122   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:05.255147   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:05.255161   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:05.331274   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:05.331309   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:04.220320   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:06.220475   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:04.331381   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:06.830143   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:03.620275   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:06.121293   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:07.882980   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:07.896311   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:07.896372   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:07.934632   62139 cri.go:89] found id: ""
	I0416 01:03:07.934661   62139 logs.go:276] 0 containers: []
	W0416 01:03:07.934671   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:07.934677   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:07.934745   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:07.971463   62139 cri.go:89] found id: ""
	I0416 01:03:07.971495   62139 logs.go:276] 0 containers: []
	W0416 01:03:07.971511   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:07.971518   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:07.971581   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:08.006808   62139 cri.go:89] found id: ""
	I0416 01:03:08.006839   62139 logs.go:276] 0 containers: []
	W0416 01:03:08.006847   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:08.006852   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:08.006912   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:08.043051   62139 cri.go:89] found id: ""
	I0416 01:03:08.043082   62139 logs.go:276] 0 containers: []
	W0416 01:03:08.043089   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:08.043095   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:08.043155   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:08.078602   62139 cri.go:89] found id: ""
	I0416 01:03:08.078638   62139 logs.go:276] 0 containers: []
	W0416 01:03:08.078647   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:08.078655   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:08.078724   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:08.115264   62139 cri.go:89] found id: ""
	I0416 01:03:08.115293   62139 logs.go:276] 0 containers: []
	W0416 01:03:08.115303   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:08.115311   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:08.115378   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:08.152782   62139 cri.go:89] found id: ""
	I0416 01:03:08.152814   62139 logs.go:276] 0 containers: []
	W0416 01:03:08.152821   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:08.152826   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:08.152875   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:08.193484   62139 cri.go:89] found id: ""
	I0416 01:03:08.193506   62139 logs.go:276] 0 containers: []
	W0416 01:03:08.193513   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:08.193522   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:08.193532   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:08.248796   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:08.248831   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:08.266054   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:08.266083   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:08.343470   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:08.343501   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:08.343515   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:08.430335   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:08.430383   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:10.972540   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:10.986911   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:10.986984   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:11.024905   62139 cri.go:89] found id: ""
	I0416 01:03:11.024939   62139 logs.go:276] 0 containers: []
	W0416 01:03:11.024951   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:11.024958   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:11.025011   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:11.058629   62139 cri.go:89] found id: ""
	I0416 01:03:11.058654   62139 logs.go:276] 0 containers: []
	W0416 01:03:11.058662   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:11.058667   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:11.058721   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:11.093277   62139 cri.go:89] found id: ""
	I0416 01:03:11.093308   62139 logs.go:276] 0 containers: []
	W0416 01:03:11.093317   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:11.093325   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:11.093386   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:11.131883   62139 cri.go:89] found id: ""
	I0416 01:03:11.131912   62139 logs.go:276] 0 containers: []
	W0416 01:03:11.131924   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:11.131934   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:11.132004   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:11.175142   62139 cri.go:89] found id: ""
	I0416 01:03:11.175169   62139 logs.go:276] 0 containers: []
	W0416 01:03:11.175179   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:11.175186   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:11.175236   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:11.209985   62139 cri.go:89] found id: ""
	I0416 01:03:11.210020   62139 logs.go:276] 0 containers: []
	W0416 01:03:11.210031   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:11.210039   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:11.210110   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:11.246086   62139 cri.go:89] found id: ""
	I0416 01:03:11.246119   62139 logs.go:276] 0 containers: []
	W0416 01:03:11.246129   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:11.246137   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:11.246199   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:11.286979   62139 cri.go:89] found id: ""
	I0416 01:03:11.287007   62139 logs.go:276] 0 containers: []
	W0416 01:03:11.287019   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:11.287037   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:11.287051   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:11.364522   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:11.364557   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:11.410343   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:11.410375   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:11.459671   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:11.459703   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:11.476163   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:11.476193   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:11.549544   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:08.220881   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:10.720607   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:09.329882   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:11.330570   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:08.620817   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:11.120789   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:14.050433   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:14.065375   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:14.065431   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:14.105548   62139 cri.go:89] found id: ""
	I0416 01:03:14.105571   62139 logs.go:276] 0 containers: []
	W0416 01:03:14.105579   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:14.105583   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:14.105644   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:14.146891   62139 cri.go:89] found id: ""
	I0416 01:03:14.146915   62139 logs.go:276] 0 containers: []
	W0416 01:03:14.146922   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:14.146927   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:14.146972   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:14.183905   62139 cri.go:89] found id: ""
	I0416 01:03:14.183937   62139 logs.go:276] 0 containers: []
	W0416 01:03:14.183948   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:14.183954   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:14.184002   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:14.219878   62139 cri.go:89] found id: ""
	I0416 01:03:14.219905   62139 logs.go:276] 0 containers: []
	W0416 01:03:14.219915   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:14.219922   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:14.219978   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:14.256284   62139 cri.go:89] found id: ""
	I0416 01:03:14.256310   62139 logs.go:276] 0 containers: []
	W0416 01:03:14.256317   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:14.256323   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:14.256381   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:14.295932   62139 cri.go:89] found id: ""
	I0416 01:03:14.295958   62139 logs.go:276] 0 containers: []
	W0416 01:03:14.295966   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:14.295971   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:14.296025   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:14.333202   62139 cri.go:89] found id: ""
	I0416 01:03:14.333226   62139 logs.go:276] 0 containers: []
	W0416 01:03:14.333235   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:14.333242   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:14.333302   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:14.370034   62139 cri.go:89] found id: ""
	I0416 01:03:14.370059   62139 logs.go:276] 0 containers: []
	W0416 01:03:14.370066   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:14.370074   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:14.370092   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:14.424626   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:14.424669   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:14.441842   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:14.441872   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:14.515899   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:14.515926   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:14.515944   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:14.599956   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:14.599991   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:12.720896   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:15.220260   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:13.829944   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:16.328971   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:13.621084   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:16.120767   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:17.157610   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:17.171737   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:17.171800   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:17.214327   62139 cri.go:89] found id: ""
	I0416 01:03:17.214354   62139 logs.go:276] 0 containers: []
	W0416 01:03:17.214364   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:17.214371   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:17.214433   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:17.255896   62139 cri.go:89] found id: ""
	I0416 01:03:17.255924   62139 logs.go:276] 0 containers: []
	W0416 01:03:17.255939   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:17.255946   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:17.256005   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:17.298470   62139 cri.go:89] found id: ""
	I0416 01:03:17.298498   62139 logs.go:276] 0 containers: []
	W0416 01:03:17.298512   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:17.298520   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:17.298580   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:17.338810   62139 cri.go:89] found id: ""
	I0416 01:03:17.338834   62139 logs.go:276] 0 containers: []
	W0416 01:03:17.338842   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:17.338847   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:17.338899   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:17.375980   62139 cri.go:89] found id: ""
	I0416 01:03:17.376012   62139 logs.go:276] 0 containers: []
	W0416 01:03:17.376019   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:17.376024   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:17.376076   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:17.411374   62139 cri.go:89] found id: ""
	I0416 01:03:17.411400   62139 logs.go:276] 0 containers: []
	W0416 01:03:17.411408   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:17.411413   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:17.411463   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:17.452916   62139 cri.go:89] found id: ""
	I0416 01:03:17.452951   62139 logs.go:276] 0 containers: []
	W0416 01:03:17.452962   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:17.452969   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:17.453037   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:17.492459   62139 cri.go:89] found id: ""
	I0416 01:03:17.492489   62139 logs.go:276] 0 containers: []
	W0416 01:03:17.492500   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:17.492512   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:17.492527   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:17.541780   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:17.541814   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:17.558831   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:17.558867   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:17.635332   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:17.635351   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:17.635362   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:17.715778   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:17.715809   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:20.260621   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:20.274721   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:20.274791   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:20.311965   62139 cri.go:89] found id: ""
	I0416 01:03:20.311991   62139 logs.go:276] 0 containers: []
	W0416 01:03:20.312002   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:20.312009   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:20.312069   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:20.350316   62139 cri.go:89] found id: ""
	I0416 01:03:20.350346   62139 logs.go:276] 0 containers: []
	W0416 01:03:20.350356   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:20.350363   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:20.350414   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:20.404666   62139 cri.go:89] found id: ""
	I0416 01:03:20.404692   62139 logs.go:276] 0 containers: []
	W0416 01:03:20.404700   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:20.404705   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:20.404753   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:20.441223   62139 cri.go:89] found id: ""
	I0416 01:03:20.441254   62139 logs.go:276] 0 containers: []
	W0416 01:03:20.441267   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:20.441275   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:20.441340   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:20.480535   62139 cri.go:89] found id: ""
	I0416 01:03:20.480596   62139 logs.go:276] 0 containers: []
	W0416 01:03:20.480606   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:20.480613   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:20.480680   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:20.517520   62139 cri.go:89] found id: ""
	I0416 01:03:20.517543   62139 logs.go:276] 0 containers: []
	W0416 01:03:20.517550   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:20.517556   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:20.517614   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:20.556067   62139 cri.go:89] found id: ""
	I0416 01:03:20.556097   62139 logs.go:276] 0 containers: []
	W0416 01:03:20.556107   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:20.556114   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:20.556177   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:20.594901   62139 cri.go:89] found id: ""
	I0416 01:03:20.594932   62139 logs.go:276] 0 containers: []
	W0416 01:03:20.594939   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:20.594947   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:20.594958   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:20.673759   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:20.673795   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:20.721407   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:20.721443   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:20.772957   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:20.772989   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:20.787902   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:20.787932   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:20.863445   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:17.721415   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:20.221042   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:18.329421   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:20.329949   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:22.330009   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:18.122678   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:20.621127   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:22.621692   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:23.363637   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:23.377916   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:23.377991   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:23.415642   62139 cri.go:89] found id: ""
	I0416 01:03:23.415671   62139 logs.go:276] 0 containers: []
	W0416 01:03:23.415679   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:23.415685   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:23.415732   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:23.452788   62139 cri.go:89] found id: ""
	I0416 01:03:23.452812   62139 logs.go:276] 0 containers: []
	W0416 01:03:23.452819   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:23.452829   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:23.452878   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:23.488758   62139 cri.go:89] found id: ""
	I0416 01:03:23.488785   62139 logs.go:276] 0 containers: []
	W0416 01:03:23.488794   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:23.488801   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:23.488862   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:23.526542   62139 cri.go:89] found id: ""
	I0416 01:03:23.526574   62139 logs.go:276] 0 containers: []
	W0416 01:03:23.526584   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:23.526592   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:23.526661   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:23.562481   62139 cri.go:89] found id: ""
	I0416 01:03:23.562505   62139 logs.go:276] 0 containers: []
	W0416 01:03:23.562512   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:23.562518   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:23.562579   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:23.599119   62139 cri.go:89] found id: ""
	I0416 01:03:23.599145   62139 logs.go:276] 0 containers: []
	W0416 01:03:23.599155   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:23.599162   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:23.599241   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:23.642445   62139 cri.go:89] found id: ""
	I0416 01:03:23.642474   62139 logs.go:276] 0 containers: []
	W0416 01:03:23.642485   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:23.642492   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:23.642557   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:23.678091   62139 cri.go:89] found id: ""
	I0416 01:03:23.678113   62139 logs.go:276] 0 containers: []
	W0416 01:03:23.678121   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:23.678129   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:23.678140   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:23.731668   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:23.731703   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:23.746413   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:23.746444   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:23.821885   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:23.821908   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:23.821923   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:23.901836   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:23.901872   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:26.444935   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:26.459240   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:26.459308   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:26.499208   62139 cri.go:89] found id: ""
	I0416 01:03:26.499237   62139 logs.go:276] 0 containers: []
	W0416 01:03:26.499249   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:26.499256   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:26.499318   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:26.536220   62139 cri.go:89] found id: ""
	I0416 01:03:26.536258   62139 logs.go:276] 0 containers: []
	W0416 01:03:26.536270   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:26.536277   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:26.536342   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:26.576217   62139 cri.go:89] found id: ""
	I0416 01:03:26.576241   62139 logs.go:276] 0 containers: []
	W0416 01:03:26.576249   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:26.576254   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:26.576314   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:26.612343   62139 cri.go:89] found id: ""
	I0416 01:03:26.612369   62139 logs.go:276] 0 containers: []
	W0416 01:03:26.612378   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:26.612385   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:26.612448   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:26.651323   62139 cri.go:89] found id: ""
	I0416 01:03:26.651353   62139 logs.go:276] 0 containers: []
	W0416 01:03:26.651365   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:26.651384   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:26.651453   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:26.688844   62139 cri.go:89] found id: ""
	I0416 01:03:26.688874   62139 logs.go:276] 0 containers: []
	W0416 01:03:26.688885   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:26.688891   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:26.688969   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:26.724362   62139 cri.go:89] found id: ""
	I0416 01:03:26.724387   62139 logs.go:276] 0 containers: []
	W0416 01:03:26.724395   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:26.724401   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:26.724455   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:26.767766   62139 cri.go:89] found id: ""
	I0416 01:03:26.767795   62139 logs.go:276] 0 containers: []
	W0416 01:03:26.767806   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:26.767816   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:26.767837   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:26.788269   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:26.788297   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:26.884802   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:26.884822   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:26.884834   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:26.964007   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:26.964044   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:27.003719   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:27.003745   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:22.720420   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:24.720865   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:26.721369   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:24.828766   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:26.830222   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:25.119674   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:27.620689   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:29.563218   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:29.579014   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:29.579078   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:29.620739   62139 cri.go:89] found id: ""
	I0416 01:03:29.620769   62139 logs.go:276] 0 containers: []
	W0416 01:03:29.620780   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:29.620787   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:29.620850   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:29.658165   62139 cri.go:89] found id: ""
	I0416 01:03:29.658192   62139 logs.go:276] 0 containers: []
	W0416 01:03:29.658199   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:29.658205   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:29.658252   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:29.693893   62139 cri.go:89] found id: ""
	I0416 01:03:29.693921   62139 logs.go:276] 0 containers: []
	W0416 01:03:29.693929   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:29.693935   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:29.693985   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:29.737808   62139 cri.go:89] found id: ""
	I0416 01:03:29.737836   62139 logs.go:276] 0 containers: []
	W0416 01:03:29.737846   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:29.737851   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:29.737910   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:29.777382   62139 cri.go:89] found id: ""
	I0416 01:03:29.777408   62139 logs.go:276] 0 containers: []
	W0416 01:03:29.777416   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:29.777422   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:29.777473   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:29.815633   62139 cri.go:89] found id: ""
	I0416 01:03:29.815659   62139 logs.go:276] 0 containers: []
	W0416 01:03:29.815668   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:29.815682   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:29.815743   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:29.858790   62139 cri.go:89] found id: ""
	I0416 01:03:29.858820   62139 logs.go:276] 0 containers: []
	W0416 01:03:29.858831   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:29.858839   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:29.858899   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:29.897085   62139 cri.go:89] found id: ""
	I0416 01:03:29.897120   62139 logs.go:276] 0 containers: []
	W0416 01:03:29.897131   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:29.897142   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:29.897169   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:29.951231   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:29.951266   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:29.965539   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:29.965565   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:30.045138   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:30.045170   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:30.045186   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:30.120575   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:30.120606   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:29.220073   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:31.221145   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:29.328625   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:31.329903   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:29.621401   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:32.120604   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:32.662210   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:32.675833   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:32.675903   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:32.712104   62139 cri.go:89] found id: ""
	I0416 01:03:32.712129   62139 logs.go:276] 0 containers: []
	W0416 01:03:32.712136   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:32.712141   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:32.712198   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:32.749617   62139 cri.go:89] found id: ""
	I0416 01:03:32.749644   62139 logs.go:276] 0 containers: []
	W0416 01:03:32.749652   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:32.749658   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:32.749723   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:32.785069   62139 cri.go:89] found id: ""
	I0416 01:03:32.785100   62139 logs.go:276] 0 containers: []
	W0416 01:03:32.785110   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:32.785116   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:32.785191   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:32.825871   62139 cri.go:89] found id: ""
	I0416 01:03:32.825912   62139 logs.go:276] 0 containers: []
	W0416 01:03:32.825922   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:32.825928   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:32.826008   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:32.868294   62139 cri.go:89] found id: ""
	I0416 01:03:32.868321   62139 logs.go:276] 0 containers: []
	W0416 01:03:32.868328   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:32.868334   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:32.868401   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:32.907764   62139 cri.go:89] found id: ""
	I0416 01:03:32.907789   62139 logs.go:276] 0 containers: []
	W0416 01:03:32.907796   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:32.907802   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:32.907870   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:32.946112   62139 cri.go:89] found id: ""
	I0416 01:03:32.946137   62139 logs.go:276] 0 containers: []
	W0416 01:03:32.946144   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:32.946155   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:32.946215   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:32.985343   62139 cri.go:89] found id: ""
	I0416 01:03:32.985374   62139 logs.go:276] 0 containers: []
	W0416 01:03:32.985385   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:32.985395   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:32.985415   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:33.063117   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:33.063154   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:33.113739   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:33.113773   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:33.163466   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:33.163508   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:33.178368   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:33.178397   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:33.259509   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:35.760004   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:35.774161   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:35.774237   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:35.812551   62139 cri.go:89] found id: ""
	I0416 01:03:35.812580   62139 logs.go:276] 0 containers: []
	W0416 01:03:35.812589   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:35.812594   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:35.812642   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:35.853134   62139 cri.go:89] found id: ""
	I0416 01:03:35.853177   62139 logs.go:276] 0 containers: []
	W0416 01:03:35.853187   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:35.853195   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:35.853255   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:35.894210   62139 cri.go:89] found id: ""
	I0416 01:03:35.894246   62139 logs.go:276] 0 containers: []
	W0416 01:03:35.894254   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:35.894259   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:35.894330   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:35.928986   62139 cri.go:89] found id: ""
	I0416 01:03:35.929010   62139 logs.go:276] 0 containers: []
	W0416 01:03:35.929019   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:35.929027   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:35.929090   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:35.970688   62139 cri.go:89] found id: ""
	I0416 01:03:35.970712   62139 logs.go:276] 0 containers: []
	W0416 01:03:35.970719   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:35.970725   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:35.970783   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:36.005744   62139 cri.go:89] found id: ""
	I0416 01:03:36.005771   62139 logs.go:276] 0 containers: []
	W0416 01:03:36.005778   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:36.005783   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:36.005829   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:36.044932   62139 cri.go:89] found id: ""
	I0416 01:03:36.044966   62139 logs.go:276] 0 containers: []
	W0416 01:03:36.044977   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:36.044984   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:36.045051   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:36.080488   62139 cri.go:89] found id: ""
	I0416 01:03:36.080516   62139 logs.go:276] 0 containers: []
	W0416 01:03:36.080527   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:36.080538   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:36.080552   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:36.132956   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:36.133000   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:36.147070   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:36.147097   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:36.226640   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:36.226670   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:36.226684   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:36.307205   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:36.307249   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:33.221952   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:35.720745   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:33.828768   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:35.830452   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:34.120695   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:36.619511   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:38.849685   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:38.863817   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:38.863897   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:38.902418   62139 cri.go:89] found id: ""
	I0416 01:03:38.902445   62139 logs.go:276] 0 containers: []
	W0416 01:03:38.902455   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:38.902462   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:38.902533   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:38.937811   62139 cri.go:89] found id: ""
	I0416 01:03:38.937838   62139 logs.go:276] 0 containers: []
	W0416 01:03:38.937845   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:38.937850   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:38.937900   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:38.972380   62139 cri.go:89] found id: ""
	I0416 01:03:38.972403   62139 logs.go:276] 0 containers: []
	W0416 01:03:38.972411   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:38.972416   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:38.972466   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:39.007572   62139 cri.go:89] found id: ""
	I0416 01:03:39.007595   62139 logs.go:276] 0 containers: []
	W0416 01:03:39.007603   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:39.007608   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:39.007651   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:39.049355   62139 cri.go:89] found id: ""
	I0416 01:03:39.049382   62139 logs.go:276] 0 containers: []
	W0416 01:03:39.049391   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:39.049398   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:39.049459   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:39.084535   62139 cri.go:89] found id: ""
	I0416 01:03:39.084565   62139 logs.go:276] 0 containers: []
	W0416 01:03:39.084574   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:39.084581   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:39.084645   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:39.125027   62139 cri.go:89] found id: ""
	I0416 01:03:39.125055   62139 logs.go:276] 0 containers: []
	W0416 01:03:39.125073   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:39.125080   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:39.125136   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:39.164506   62139 cri.go:89] found id: ""
	I0416 01:03:39.164537   62139 logs.go:276] 0 containers: []
	W0416 01:03:39.164547   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:39.164557   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:39.164573   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:39.203447   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:39.203483   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:39.259087   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:39.259122   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:39.273611   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:39.273637   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:39.352372   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:39.352392   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:39.352407   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:41.938575   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:41.952937   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:41.953019   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:41.990771   62139 cri.go:89] found id: ""
	I0416 01:03:41.990802   62139 logs.go:276] 0 containers: []
	W0416 01:03:41.990811   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:41.990819   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:41.990881   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:42.027338   62139 cri.go:89] found id: ""
	I0416 01:03:42.027367   62139 logs.go:276] 0 containers: []
	W0416 01:03:42.027374   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:42.027379   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:42.027431   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:42.068348   62139 cri.go:89] found id: ""
	I0416 01:03:42.068377   62139 logs.go:276] 0 containers: []
	W0416 01:03:42.068387   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:42.068394   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:42.068457   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:38.220198   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:40.220481   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:42.221383   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:38.330729   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:40.831615   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:38.620021   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:40.620641   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:42.620702   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:42.108157   62139 cri.go:89] found id: ""
	I0416 01:03:42.108181   62139 logs.go:276] 0 containers: []
	W0416 01:03:42.108187   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:42.108193   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:42.108244   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:42.149749   62139 cri.go:89] found id: ""
	I0416 01:03:42.149770   62139 logs.go:276] 0 containers: []
	W0416 01:03:42.149777   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:42.149784   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:42.149848   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:42.185322   62139 cri.go:89] found id: ""
	I0416 01:03:42.185349   62139 logs.go:276] 0 containers: []
	W0416 01:03:42.185360   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:42.185368   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:42.185435   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:42.224334   62139 cri.go:89] found id: ""
	I0416 01:03:42.224359   62139 logs.go:276] 0 containers: []
	W0416 01:03:42.224370   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:42.224376   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:42.224435   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:42.263466   62139 cri.go:89] found id: ""
	I0416 01:03:42.263494   62139 logs.go:276] 0 containers: []
	W0416 01:03:42.263502   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:42.263509   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:42.263522   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:42.315106   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:42.315139   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:42.329394   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:42.329425   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:42.405267   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:42.405305   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:42.405321   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:42.486126   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:42.486168   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:45.027718   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:45.042387   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:45.042453   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:45.080790   62139 cri.go:89] found id: ""
	I0416 01:03:45.080814   62139 logs.go:276] 0 containers: []
	W0416 01:03:45.080823   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:45.080829   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:45.080875   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:45.121278   62139 cri.go:89] found id: ""
	I0416 01:03:45.121306   62139 logs.go:276] 0 containers: []
	W0416 01:03:45.121317   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:45.121324   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:45.121383   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:45.158076   62139 cri.go:89] found id: ""
	I0416 01:03:45.158099   62139 logs.go:276] 0 containers: []
	W0416 01:03:45.158107   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:45.158116   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:45.158162   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:45.195577   62139 cri.go:89] found id: ""
	I0416 01:03:45.195608   62139 logs.go:276] 0 containers: []
	W0416 01:03:45.195619   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:45.195627   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:45.195685   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:45.239230   62139 cri.go:89] found id: ""
	I0416 01:03:45.239257   62139 logs.go:276] 0 containers: []
	W0416 01:03:45.239267   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:45.239275   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:45.239326   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:45.279193   62139 cri.go:89] found id: ""
	I0416 01:03:45.279220   62139 logs.go:276] 0 containers: []
	W0416 01:03:45.279227   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:45.279232   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:45.279280   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:45.314876   62139 cri.go:89] found id: ""
	I0416 01:03:45.314908   62139 logs.go:276] 0 containers: []
	W0416 01:03:45.314916   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:45.314922   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:45.314970   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:45.351699   62139 cri.go:89] found id: ""
	I0416 01:03:45.351723   62139 logs.go:276] 0 containers: []
	W0416 01:03:45.351730   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:45.351738   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:45.351750   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:45.392681   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:45.392708   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:45.446564   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:45.446605   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:45.460541   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:45.460564   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:45.535287   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:45.535319   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:45.535334   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:44.720088   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:46.721511   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:43.329413   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:45.330644   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:45.123357   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:47.621806   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:48.117476   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:48.133341   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:48.133402   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:48.171230   62139 cri.go:89] found id: ""
	I0416 01:03:48.171263   62139 logs.go:276] 0 containers: []
	W0416 01:03:48.171273   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:48.171280   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:48.171337   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:48.206188   62139 cri.go:89] found id: ""
	I0416 01:03:48.206218   62139 logs.go:276] 0 containers: []
	W0416 01:03:48.206229   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:48.206236   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:48.206294   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:48.242349   62139 cri.go:89] found id: ""
	I0416 01:03:48.242377   62139 logs.go:276] 0 containers: []
	W0416 01:03:48.242384   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:48.242389   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:48.242437   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:48.278324   62139 cri.go:89] found id: ""
	I0416 01:03:48.278347   62139 logs.go:276] 0 containers: []
	W0416 01:03:48.278355   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:48.278360   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:48.278406   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:48.315727   62139 cri.go:89] found id: ""
	I0416 01:03:48.315753   62139 logs.go:276] 0 containers: []
	W0416 01:03:48.315763   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:48.315770   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:48.315828   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:48.354146   62139 cri.go:89] found id: ""
	I0416 01:03:48.354169   62139 logs.go:276] 0 containers: []
	W0416 01:03:48.354176   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:48.354182   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:48.354242   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:48.393951   62139 cri.go:89] found id: ""
	I0416 01:03:48.393989   62139 logs.go:276] 0 containers: []
	W0416 01:03:48.394000   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:48.394007   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:48.394081   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:48.431849   62139 cri.go:89] found id: ""
	I0416 01:03:48.431887   62139 logs.go:276] 0 containers: []
	W0416 01:03:48.431895   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:48.431903   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:48.431917   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:48.446210   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:48.446242   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:48.517459   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:48.517485   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:48.517500   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:48.596320   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:48.596356   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:48.639700   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:48.639733   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:51.197396   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:51.211803   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:51.211889   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:51.250768   62139 cri.go:89] found id: ""
	I0416 01:03:51.250793   62139 logs.go:276] 0 containers: []
	W0416 01:03:51.250802   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:51.250810   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:51.250872   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:51.291389   62139 cri.go:89] found id: ""
	I0416 01:03:51.291415   62139 logs.go:276] 0 containers: []
	W0416 01:03:51.291421   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:51.291429   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:51.291478   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:51.332466   62139 cri.go:89] found id: ""
	I0416 01:03:51.332490   62139 logs.go:276] 0 containers: []
	W0416 01:03:51.332499   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:51.332504   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:51.332549   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:51.367731   62139 cri.go:89] found id: ""
	I0416 01:03:51.367759   62139 logs.go:276] 0 containers: []
	W0416 01:03:51.367767   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:51.367773   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:51.367829   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:51.400567   62139 cri.go:89] found id: ""
	I0416 01:03:51.400599   62139 logs.go:276] 0 containers: []
	W0416 01:03:51.400609   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:51.400616   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:51.400679   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:51.433561   62139 cri.go:89] found id: ""
	I0416 01:03:51.433590   62139 logs.go:276] 0 containers: []
	W0416 01:03:51.433598   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:51.433608   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:51.433666   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:51.469136   62139 cri.go:89] found id: ""
	I0416 01:03:51.469179   62139 logs.go:276] 0 containers: []
	W0416 01:03:51.469189   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:51.469196   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:51.469255   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:51.504410   62139 cri.go:89] found id: ""
	I0416 01:03:51.504442   62139 logs.go:276] 0 containers: []
	W0416 01:03:51.504452   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:51.504462   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:51.504480   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:51.557420   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:51.557449   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:51.571481   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:51.571506   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:51.648722   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:51.648744   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:51.648755   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:51.728945   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:51.728978   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:49.221614   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:51.721798   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:47.829985   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:50.329419   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:52.329909   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:49.622776   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:52.120080   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:54.272503   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:54.286573   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:54.286646   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:54.321084   62139 cri.go:89] found id: ""
	I0416 01:03:54.321115   62139 logs.go:276] 0 containers: []
	W0416 01:03:54.321125   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:54.321133   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:54.321208   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:54.366333   62139 cri.go:89] found id: ""
	I0416 01:03:54.366364   62139 logs.go:276] 0 containers: []
	W0416 01:03:54.366374   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:54.366380   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:54.366437   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:54.406267   62139 cri.go:89] found id: ""
	I0416 01:03:54.406317   62139 logs.go:276] 0 containers: []
	W0416 01:03:54.406328   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:54.406336   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:54.406405   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:54.446853   62139 cri.go:89] found id: ""
	I0416 01:03:54.446883   62139 logs.go:276] 0 containers: []
	W0416 01:03:54.446894   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:54.446901   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:54.446956   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:54.487658   62139 cri.go:89] found id: ""
	I0416 01:03:54.487683   62139 logs.go:276] 0 containers: []
	W0416 01:03:54.487690   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:54.487696   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:54.487753   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:54.530189   62139 cri.go:89] found id: ""
	I0416 01:03:54.530216   62139 logs.go:276] 0 containers: []
	W0416 01:03:54.530226   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:54.530232   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:54.530289   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:54.571317   62139 cri.go:89] found id: ""
	I0416 01:03:54.571341   62139 logs.go:276] 0 containers: []
	W0416 01:03:54.571349   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:54.571354   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:54.571416   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:54.612432   62139 cri.go:89] found id: ""
	I0416 01:03:54.612458   62139 logs.go:276] 0 containers: []
	W0416 01:03:54.612467   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:54.612478   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:54.612493   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:54.666599   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:54.666629   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:54.680880   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:54.680915   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:54.757365   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:54.757386   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:54.757398   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:54.834436   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:54.834468   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:54.219690   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:56.220753   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:54.332950   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:56.830167   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:54.621002   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:56.622452   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:57.405516   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:57.420694   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:57.420773   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:57.460338   62139 cri.go:89] found id: ""
	I0416 01:03:57.460367   62139 logs.go:276] 0 containers: []
	W0416 01:03:57.460374   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:57.460381   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:57.460442   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:57.498121   62139 cri.go:89] found id: ""
	I0416 01:03:57.498150   62139 logs.go:276] 0 containers: []
	W0416 01:03:57.498160   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:57.498167   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:57.498228   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:57.536959   62139 cri.go:89] found id: ""
	I0416 01:03:57.536989   62139 logs.go:276] 0 containers: []
	W0416 01:03:57.537005   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:57.537014   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:57.537077   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:57.575633   62139 cri.go:89] found id: ""
	I0416 01:03:57.575662   62139 logs.go:276] 0 containers: []
	W0416 01:03:57.575673   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:57.575680   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:57.575743   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:57.614459   62139 cri.go:89] found id: ""
	I0416 01:03:57.614491   62139 logs.go:276] 0 containers: []
	W0416 01:03:57.614501   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:57.614509   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:57.614568   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:57.657078   62139 cri.go:89] found id: ""
	I0416 01:03:57.657109   62139 logs.go:276] 0 containers: []
	W0416 01:03:57.657120   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:57.657127   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:57.657204   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:57.693882   62139 cri.go:89] found id: ""
	I0416 01:03:57.693904   62139 logs.go:276] 0 containers: []
	W0416 01:03:57.693911   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:57.693922   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:57.693969   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:57.731283   62139 cri.go:89] found id: ""
	I0416 01:03:57.731312   62139 logs.go:276] 0 containers: []
	W0416 01:03:57.731320   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:57.731327   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:57.731338   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:57.782618   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:57.782656   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:57.796763   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:57.796794   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:57.869629   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:57.869652   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:57.869665   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:57.948859   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:57.948892   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:04:00.487682   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:04:00.501095   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:04:00.501182   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:04:00.537902   62139 cri.go:89] found id: ""
	I0416 01:04:00.537931   62139 logs.go:276] 0 containers: []
	W0416 01:04:00.537939   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:04:00.537945   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:04:00.537994   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:04:00.574164   62139 cri.go:89] found id: ""
	I0416 01:04:00.574203   62139 logs.go:276] 0 containers: []
	W0416 01:04:00.574214   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:04:00.574222   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:04:00.574287   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:04:00.629592   62139 cri.go:89] found id: ""
	I0416 01:04:00.629615   62139 logs.go:276] 0 containers: []
	W0416 01:04:00.629622   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:04:00.629627   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:04:00.629679   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:04:00.672102   62139 cri.go:89] found id: ""
	I0416 01:04:00.672127   62139 logs.go:276] 0 containers: []
	W0416 01:04:00.672134   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:04:00.672141   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:04:00.672201   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:04:00.715040   62139 cri.go:89] found id: ""
	I0416 01:04:00.715064   62139 logs.go:276] 0 containers: []
	W0416 01:04:00.715072   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:04:00.715078   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:04:00.715139   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:04:00.751113   62139 cri.go:89] found id: ""
	I0416 01:04:00.751137   62139 logs.go:276] 0 containers: []
	W0416 01:04:00.751146   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:04:00.751152   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:04:00.751204   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:04:00.787613   62139 cri.go:89] found id: ""
	I0416 01:04:00.787644   62139 logs.go:276] 0 containers: []
	W0416 01:04:00.787653   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:04:00.787660   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:04:00.787721   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:04:00.824244   62139 cri.go:89] found id: ""
	I0416 01:04:00.824271   62139 logs.go:276] 0 containers: []
	W0416 01:04:00.824280   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:04:00.824291   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:04:00.824304   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:04:00.899977   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:04:00.900014   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:04:00.900029   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:04:00.982317   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:04:00.982350   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:04:01.026354   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:04:01.026393   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:04:01.080393   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:04:01.080441   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:58.720894   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:00.720961   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:59.329460   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:01.330171   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:59.119259   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:01.619026   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:03.595966   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:04:03.609190   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:04:03.609253   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:04:03.647151   62139 cri.go:89] found id: ""
	I0416 01:04:03.647183   62139 logs.go:276] 0 containers: []
	W0416 01:04:03.647197   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:04:03.647203   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:04:03.647250   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:04:03.685211   62139 cri.go:89] found id: ""
	I0416 01:04:03.685239   62139 logs.go:276] 0 containers: []
	W0416 01:04:03.685248   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:04:03.685254   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:04:03.685303   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:04:03.720928   62139 cri.go:89] found id: ""
	I0416 01:04:03.720949   62139 logs.go:276] 0 containers: []
	W0416 01:04:03.720956   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:04:03.720961   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:04:03.721035   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:04:03.759179   62139 cri.go:89] found id: ""
	I0416 01:04:03.759210   62139 logs.go:276] 0 containers: []
	W0416 01:04:03.759220   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:04:03.759228   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:04:03.759290   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:04:03.795670   62139 cri.go:89] found id: ""
	I0416 01:04:03.795700   62139 logs.go:276] 0 containers: []
	W0416 01:04:03.795710   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:04:03.795717   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:04:03.795785   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:04:03.832944   62139 cri.go:89] found id: ""
	I0416 01:04:03.832971   62139 logs.go:276] 0 containers: []
	W0416 01:04:03.832980   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:04:03.832988   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:04:03.833053   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:04:03.869211   62139 cri.go:89] found id: ""
	I0416 01:04:03.869238   62139 logs.go:276] 0 containers: []
	W0416 01:04:03.869248   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:04:03.869256   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:04:03.869317   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:04:03.905859   62139 cri.go:89] found id: ""
	I0416 01:04:03.905888   62139 logs.go:276] 0 containers: []
	W0416 01:04:03.905896   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:04:03.905904   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:04:03.905915   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:04:03.957057   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:04:03.957088   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:04:03.972309   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:04:03.972344   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:04:04.049927   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:04:04.049950   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:04:04.049965   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:04:04.136395   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:04:04.136435   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:04:06.676667   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:04:06.690062   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:04:06.690125   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:04:06.733734   62139 cri.go:89] found id: ""
	I0416 01:04:06.733758   62139 logs.go:276] 0 containers: []
	W0416 01:04:06.733773   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:04:06.733782   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:04:06.733835   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:04:06.773112   62139 cri.go:89] found id: ""
	I0416 01:04:06.773140   62139 logs.go:276] 0 containers: []
	W0416 01:04:06.773147   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:04:06.773152   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:04:06.773231   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:04:06.812786   62139 cri.go:89] found id: ""
	I0416 01:04:06.812809   62139 logs.go:276] 0 containers: []
	W0416 01:04:06.812817   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:04:06.812822   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:04:06.812870   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:04:06.853995   62139 cri.go:89] found id: ""
	I0416 01:04:06.854022   62139 logs.go:276] 0 containers: []
	W0416 01:04:06.854029   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:04:06.854034   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:04:06.854088   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:04:06.893809   62139 cri.go:89] found id: ""
	I0416 01:04:06.893841   62139 logs.go:276] 0 containers: []
	W0416 01:04:06.893848   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:04:06.893853   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:04:06.893909   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:04:06.929389   62139 cri.go:89] found id: ""
	I0416 01:04:06.929419   62139 logs.go:276] 0 containers: []
	W0416 01:04:06.929430   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:04:06.929437   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:04:06.929518   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:04:06.968278   62139 cri.go:89] found id: ""
	I0416 01:04:06.968303   62139 logs.go:276] 0 containers: []
	W0416 01:04:06.968311   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:04:06.968316   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:04:06.968364   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:04:07.018932   62139 cri.go:89] found id: ""
	I0416 01:04:07.018965   62139 logs.go:276] 0 containers: []
	W0416 01:04:07.018976   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:04:07.018989   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:04:07.019003   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:04:07.083611   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:04:07.083645   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:04:03.220314   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:05.720941   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:03.830050   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:06.329416   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:03.619482   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:05.620393   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:07.110126   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:04:07.110152   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:04:07.186262   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:04:07.186290   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:04:07.186305   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:04:07.263139   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:04:07.263170   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:04:09.807489   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:04:09.822045   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:04:09.822110   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:04:09.867444   62139 cri.go:89] found id: ""
	I0416 01:04:09.867469   62139 logs.go:276] 0 containers: []
	W0416 01:04:09.867480   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:04:09.867487   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:04:09.867538   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:04:09.904280   62139 cri.go:89] found id: ""
	I0416 01:04:09.904312   62139 logs.go:276] 0 containers: []
	W0416 01:04:09.904323   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:04:09.904330   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:04:09.904389   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:04:09.941066   62139 cri.go:89] found id: ""
	I0416 01:04:09.941091   62139 logs.go:276] 0 containers: []
	W0416 01:04:09.941099   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:04:09.941107   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:04:09.941189   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:04:09.975739   62139 cri.go:89] found id: ""
	I0416 01:04:09.975767   62139 logs.go:276] 0 containers: []
	W0416 01:04:09.975777   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:04:09.975785   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:04:09.975844   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:04:10.011414   62139 cri.go:89] found id: ""
	I0416 01:04:10.011444   62139 logs.go:276] 0 containers: []
	W0416 01:04:10.011454   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:04:10.011461   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:04:10.011528   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:04:10.045670   62139 cri.go:89] found id: ""
	I0416 01:04:10.045695   62139 logs.go:276] 0 containers: []
	W0416 01:04:10.045704   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:04:10.045711   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:04:10.045777   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:04:10.082320   62139 cri.go:89] found id: ""
	I0416 01:04:10.082352   62139 logs.go:276] 0 containers: []
	W0416 01:04:10.082361   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:04:10.082368   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:04:10.082428   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:04:10.120453   62139 cri.go:89] found id: ""
	I0416 01:04:10.120482   62139 logs.go:276] 0 containers: []
	W0416 01:04:10.120492   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:04:10.120501   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:04:10.120515   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:04:10.200213   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:04:10.200251   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:04:10.251709   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:04:10.251742   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:04:10.307348   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:04:10.307382   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:04:10.321293   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:04:10.321319   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:04:10.401361   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:04:08.220488   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:10.221408   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:08.331985   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:10.829244   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:08.119800   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:10.121093   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:12.126420   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:12.901763   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:04:12.916308   62139 kubeadm.go:591] duration metric: took 4m4.703830076s to restartPrimaryControlPlane
	W0416 01:04:12.916384   62139 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0416 01:04:12.916416   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0416 01:04:12.720462   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:14.721516   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:17.220364   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:12.830409   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:15.330184   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:14.620714   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:16.622203   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:17.897436   62139 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.980993606s)
	I0416 01:04:17.897592   62139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 01:04:17.914655   62139 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 01:04:17.927482   62139 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 01:04:17.940210   62139 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 01:04:17.940233   62139 kubeadm.go:156] found existing configuration files:
	
	I0416 01:04:17.940274   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 01:04:17.951037   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 01:04:17.951106   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 01:04:17.962341   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 01:04:17.972436   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 01:04:17.972500   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 01:04:17.983198   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 01:04:17.992856   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 01:04:17.992912   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 01:04:18.003122   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 01:04:18.014064   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 01:04:18.014117   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 01:04:18.024854   62139 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 01:04:18.101381   62139 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0416 01:04:18.101436   62139 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 01:04:18.246529   62139 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 01:04:18.246687   62139 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 01:04:18.246802   62139 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 01:04:18.456847   62139 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 01:04:18.458980   62139 out.go:204]   - Generating certificates and keys ...
	I0416 01:04:18.459096   62139 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 01:04:18.459190   62139 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 01:04:18.459294   62139 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0416 01:04:18.459381   62139 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0416 01:04:18.459473   62139 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0416 01:04:18.459548   62139 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0416 01:04:18.459631   62139 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0416 01:04:18.459721   62139 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0416 01:04:18.459822   62139 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0416 01:04:18.460281   62139 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0416 01:04:18.460387   62139 kubeadm.go:309] [certs] Using the existing "sa" key
	I0416 01:04:18.460475   62139 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 01:04:18.564910   62139 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 01:04:18.806406   62139 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 01:04:18.890124   62139 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 01:04:19.046415   62139 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 01:04:19.063159   62139 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 01:04:19.063301   62139 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 01:04:19.063415   62139 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 01:04:19.229066   62139 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 01:04:19.231110   62139 out.go:204]   - Booting up control plane ...
	I0416 01:04:19.231246   62139 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 01:04:19.248833   62139 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 01:04:19.250340   62139 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 01:04:19.251664   62139 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 01:04:19.254678   62139 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 01:04:19.221976   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:21.720239   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:17.830011   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:18.323271   61500 pod_ready.go:81] duration metric: took 4m0.000449424s for pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace to be "Ready" ...
	E0416 01:04:18.323300   61500 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace to be "Ready" (will not retry!)
	I0416 01:04:18.323318   61500 pod_ready.go:38] duration metric: took 4m9.009725319s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:04:18.323357   61500 kubeadm.go:591] duration metric: took 4m19.656264138s to restartPrimaryControlPlane
	W0416 01:04:18.323420   61500 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0416 01:04:18.323449   61500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
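
The interleaved pod_ready.go:102 lines above come from separate minikube processes (PIDs 61267, 61500, 62139, 62747), each polling a metrics-server pod's Ready condition until its 4m0s budget expires and the profile falls back to "kubeadm reset". A minimal client-go sketch of such a readiness wait, assuming a pre-built kubernetes.Interface client (an illustration only, not minikube's exact implementation):

    package sketch

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls a pod every 2s until its Ready condition is True or
    // the timeout (4m0s in the log above) expires.
    func waitPodReady(ctx context.Context, c kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
            func(ctx context.Context) (bool, error) {
                pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // transient API errors: keep polling
                }
                for _, cond := range pod.Status.Conditions {
                    if cond.Type == corev1.PodReady {
                        return cond.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }
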
	I0416 01:04:19.122802   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:21.621389   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:24.227649   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:26.720896   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:24.119577   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:26.620166   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:29.219937   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:31.220697   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:28.622399   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:31.119279   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:33.221240   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:35.221536   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:33.124909   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:35.620718   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:37.720528   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:40.220531   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:38.120415   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:40.121126   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:42.620161   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:42.719946   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:44.720203   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:47.219782   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:44.620806   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:47.119479   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:47.613243   62747 pod_ready.go:81] duration metric: took 4m0.000098534s for pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace to be "Ready" ...
	E0416 01:04:47.613279   62747 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace to be "Ready" (will not retry!)
	I0416 01:04:47.613297   62747 pod_ready.go:38] duration metric: took 4m12.544704519s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:04:47.613327   62747 kubeadm.go:591] duration metric: took 4m20.76891948s to restartPrimaryControlPlane
	W0416 01:04:47.613387   62747 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0416 01:04:47.613410   62747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0416 01:04:50.224993   61500 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.901526458s)
	I0416 01:04:50.225057   61500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 01:04:50.241083   61500 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 01:04:50.252468   61500 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 01:04:50.263721   61500 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 01:04:50.263744   61500 kubeadm.go:156] found existing configuration files:
	
	I0416 01:04:50.263786   61500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 01:04:50.274550   61500 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 01:04:50.274620   61500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 01:04:50.285019   61500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 01:04:50.295079   61500 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 01:04:50.295151   61500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 01:04:50.306424   61500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 01:04:50.317221   61500 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 01:04:50.317286   61500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 01:04:50.327783   61500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 01:04:50.338144   61500 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 01:04:50.338213   61500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
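
The kubeadm.go:154/162 block above is minikube's stale-config check: it greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and force-removes any file that does not contain it (here the files are simply missing after the reset, so every grep exits with status 2 and each rm -f is effectively a no-op). A condensed sketch of that logic, with runSSH as a hypothetical stand-in for minikube's ssh_runner:

    package sketch

    import "fmt"

    // cleanupStaleConfigs keeps a kubeconfig only if it already points at the
    // expected control-plane endpoint; otherwise it removes the file so the
    // subsequent "kubeadm init" rewrites it from scratch.
    func cleanupStaleConfigs(runSSH func(cmd string) error, endpoint string) {
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            // grep exits non-zero when the endpoint (or the file itself) is missing.
            if err := runSSH(fmt.Sprintf("sudo grep %s %s", endpoint, f)); err != nil {
                _ = runSSH("sudo rm -f " + f)
            }
        }
    }
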
	I0416 01:04:50.349262   61500 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 01:04:50.410467   61500 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0-rc.2
	I0416 01:04:50.410597   61500 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 01:04:50.565288   61500 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 01:04:50.565442   61500 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 01:04:50.565580   61500 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 01:04:50.783173   61500 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 01:04:50.785219   61500 out.go:204]   - Generating certificates and keys ...
	I0416 01:04:50.785339   61500 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 01:04:50.785427   61500 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 01:04:50.785526   61500 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0416 01:04:50.785620   61500 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0416 01:04:50.785745   61500 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0416 01:04:50.785847   61500 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0416 01:04:50.785951   61500 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0416 01:04:50.786037   61500 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0416 01:04:50.786156   61500 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0416 01:04:50.786279   61500 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0416 01:04:50.786341   61500 kubeadm.go:309] [certs] Using the existing "sa" key
	I0416 01:04:50.786425   61500 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 01:04:50.868738   61500 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 01:04:51.024628   61500 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0416 01:04:51.304801   61500 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 01:04:51.485803   61500 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 01:04:51.614330   61500 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 01:04:51.615043   61500 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 01:04:51.617465   61500 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 01:04:49.720594   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:51.721464   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:51.619398   61500 out.go:204]   - Booting up control plane ...
	I0416 01:04:51.619519   61500 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 01:04:51.619637   61500 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 01:04:51.619717   61500 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 01:04:51.640756   61500 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 01:04:51.643264   61500 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 01:04:51.643617   61500 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 01:04:51.796506   61500 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0416 01:04:51.796640   61500 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0416 01:04:54.220965   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:56.222571   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:52.798698   61500 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002359416s
	I0416 01:04:52.798798   61500 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0416 01:04:57.802689   61500 kubeadm.go:309] [api-check] The API server is healthy after 5.003967397s
	I0416 01:04:57.816580   61500 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0416 01:04:57.840465   61500 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0416 01:04:57.879611   61500 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0416 01:04:57.879906   61500 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-572602 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0416 01:04:57.895211   61500 kubeadm.go:309] [bootstrap-token] Using token: w1qt2t.vu77oqcsegb1grvk
	I0416 01:04:57.896829   61500 out.go:204]   - Configuring RBAC rules ...
	I0416 01:04:57.896958   61500 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0416 01:04:57.905289   61500 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0416 01:04:57.916967   61500 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0416 01:04:57.922660   61500 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0416 01:04:57.926143   61500 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0416 01:04:57.935222   61500 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0416 01:04:58.215180   61500 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0416 01:04:58.656120   61500 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0416 01:04:59.209811   61500 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0416 01:04:59.211274   61500 kubeadm.go:309] 
	I0416 01:04:59.211354   61500 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0416 01:04:59.211390   61500 kubeadm.go:309] 
	I0416 01:04:59.211489   61500 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0416 01:04:59.211512   61500 kubeadm.go:309] 
	I0416 01:04:59.211556   61500 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0416 01:04:59.211626   61500 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0416 01:04:59.211695   61500 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0416 01:04:59.211707   61500 kubeadm.go:309] 
	I0416 01:04:59.211779   61500 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0416 01:04:59.211789   61500 kubeadm.go:309] 
	I0416 01:04:59.211853   61500 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0416 01:04:59.211921   61500 kubeadm.go:309] 
	I0416 01:04:59.212030   61500 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0416 01:04:59.212165   61500 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0416 01:04:59.212269   61500 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0416 01:04:59.212280   61500 kubeadm.go:309] 
	I0416 01:04:59.212407   61500 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0416 01:04:59.212516   61500 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0416 01:04:59.212525   61500 kubeadm.go:309] 
	I0416 01:04:59.212656   61500 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token w1qt2t.vu77oqcsegb1grvk \
	I0416 01:04:59.212835   61500 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b25013486716312e69a135916990864bd549ec06035b8322dd39250241b50fde \
	I0416 01:04:59.212880   61500 kubeadm.go:309] 	--control-plane 
	I0416 01:04:59.212894   61500 kubeadm.go:309] 
	I0416 01:04:59.212996   61500 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0416 01:04:59.213007   61500 kubeadm.go:309] 
	I0416 01:04:59.213111   61500 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token w1qt2t.vu77oqcsegb1grvk \
	I0416 01:04:59.213278   61500 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b25013486716312e69a135916990864bd549ec06035b8322dd39250241b50fde 
	I0416 01:04:59.213435   61500 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
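
The --discovery-token-ca-cert-hash value printed in the join commands above is the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info, prefixed with "sha256:". A small sketch of how that token can be recomputed from the CA PEM (illustrative, not taken from kubeadm's source):

    package sketch

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/hex"
        "encoding/pem"
        "fmt"
    )

    // caCertHash returns the "sha256:<hex>" token kubeadm expects for
    // --discovery-token-ca-cert-hash, derived from a PEM-encoded CA certificate.
    func caCertHash(caPEM []byte) (string, error) {
        block, _ := pem.Decode(caPEM)
        if block == nil {
            return "", fmt.Errorf("no PEM block found in CA certificate")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return "", err
        }
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        return "sha256:" + hex.EncodeToString(sum[:]), nil
    }
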
	I0416 01:04:59.213460   61500 cni.go:84] Creating CNI manager for ""
	I0416 01:04:59.213477   61500 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 01:04:59.215397   61500 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0416 01:04:59.255478   62139 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0416 01:04:59.256524   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:04:59.256807   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:04:58.720339   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:05:01.220968   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:59.216764   61500 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0416 01:04:59.230134   61500 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
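
The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration that the "Configuring bridge CNI" step refers to. Its exact contents are not shown in the log; a representative bridge-plus-portmap conflist of the kind typically used with a 10.244.0.0/16 pod CIDR (an assumption, not a verbatim copy of minikube's file) is embedded below as a Go constant:

    package sketch

    // bridgeConflist is a generic CNI config list: a bridge plugin with
    // host-local IPAM plus the portmap plugin for hostPort support. The values
    // are illustrative; minikube's actual 1-k8s.conflist may differ in detail.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true}
        }
      ]
    }`
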
	I0416 01:04:59.250739   61500 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0416 01:04:59.250773   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:04:59.250775   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-572602 minikube.k8s.io/updated_at=2024_04_16T01_04_59_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=e98a202b00bb48daab8bbee2f0c7bcf1556f8388 minikube.k8s.io/name=no-preload-572602 minikube.k8s.io/primary=true
	I0416 01:04:59.462907   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:04:59.462915   61500 ops.go:34] apiserver oom_adj: -16
	I0416 01:04:59.962977   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:00.463142   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:00.963871   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:01.463866   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:01.963356   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:02.463729   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:04.257472   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:05:04.257756   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:05:03.720762   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:05:05.721421   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:05:02.963816   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:03.463370   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:03.963655   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:04.463681   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:04.963387   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:05.462926   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:05.963659   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:06.463091   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:06.963504   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:07.463783   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:07.963037   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:08.463212   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:08.963443   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:09.463179   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:09.963188   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:10.463264   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:10.963863   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:11.463051   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:11.591367   61500 kubeadm.go:1107] duration metric: took 12.340665724s to wait for elevateKubeSystemPrivileges
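
The burst of identical "kubectl get sa default" runs above (one roughly every 500ms) is minikube waiting for the default service account to exist before it treats kube-system privileges as elevated; the loop stops as soon as the lookup succeeds, about 12.3s into this run. A sketch of that retry pattern, with runKubectl as a hypothetical stand-in for the ssh_runner-driven kubectl invocation:

    package sketch

    import (
        "fmt"
        "time"
    )

    // waitForDefaultSA retries "kubectl get sa default" every 500ms until the
    // service account exists or the timeout elapses.
    func waitForDefaultSA(runKubectl func(args ...string) error, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if err := runKubectl("get", "sa", "default",
                "--kubeconfig=/var/lib/minikube/kubeconfig"); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default service account did not appear within %s", timeout)
    }
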
	W0416 01:05:11.591410   61500 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0416 01:05:11.591425   61500 kubeadm.go:393] duration metric: took 5m12.980123227s to StartCluster
	I0416 01:05:11.591451   61500 settings.go:142] acquiring lock: {Name:mk6e42a297b4f7bfb79727f203ae36d752cbb6a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:05:11.591559   61500 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0416 01:05:11.593498   61500 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/kubeconfig: {Name:mkbb3b028de7d57df8335e83f6dfa1b0eacb2fb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:05:11.593838   61500 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.121 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0416 01:05:11.595572   61500 out.go:177] * Verifying Kubernetes components...
	I0416 01:05:11.593961   61500 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0416 01:05:11.594060   61500 config.go:182] Loaded profile config "no-preload-572602": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0416 01:05:11.597038   61500 addons.go:69] Setting default-storageclass=true in profile "no-preload-572602"
	I0416 01:05:11.597047   61500 addons.go:69] Setting metrics-server=true in profile "no-preload-572602"
	I0416 01:05:11.597077   61500 addons.go:234] Setting addon metrics-server=true in "no-preload-572602"
	I0416 01:05:11.597081   61500 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-572602"
	W0416 01:05:11.597084   61500 addons.go:243] addon metrics-server should already be in state true
	I0416 01:05:11.597168   61500 host.go:66] Checking if "no-preload-572602" exists ...
	I0416 01:05:11.597042   61500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 01:05:11.597038   61500 addons.go:69] Setting storage-provisioner=true in profile "no-preload-572602"
	I0416 01:05:11.597274   61500 addons.go:234] Setting addon storage-provisioner=true in "no-preload-572602"
	W0416 01:05:11.597281   61500 addons.go:243] addon storage-provisioner should already be in state true
	I0416 01:05:11.597300   61500 host.go:66] Checking if "no-preload-572602" exists ...
	I0416 01:05:11.597516   61500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:11.597563   61500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:11.597590   61500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:11.597621   61500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:11.597621   61500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:11.597684   61500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:11.617344   61500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46065
	I0416 01:05:11.617833   61500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46345
	I0416 01:05:11.617853   61500 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:11.618040   61500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32847
	I0416 01:05:11.618170   61500 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:11.618385   61500 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:11.618539   61500 main.go:141] libmachine: Using API Version  1
	I0416 01:05:11.618564   61500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:11.618682   61500 main.go:141] libmachine: Using API Version  1
	I0416 01:05:11.618708   61500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:11.618786   61500 main.go:141] libmachine: Using API Version  1
	I0416 01:05:11.618806   61500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:11.619020   61500 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:11.619035   61500 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:11.619145   61500 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:11.619371   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetState
	I0416 01:05:11.619629   61500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:11.619663   61500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:11.619683   61500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:11.619715   61500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:11.622758   61500 addons.go:234] Setting addon default-storageclass=true in "no-preload-572602"
	W0416 01:05:11.622784   61500 addons.go:243] addon default-storageclass should already be in state true
	I0416 01:05:11.622814   61500 host.go:66] Checking if "no-preload-572602" exists ...
	I0416 01:05:11.623148   61500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:11.623182   61500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:11.640851   61500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44015
	I0416 01:05:11.641427   61500 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:11.642008   61500 main.go:141] libmachine: Using API Version  1
	I0416 01:05:11.642028   61500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:11.642429   61500 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:11.642635   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetState
	I0416 01:05:11.643204   61500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41753
	I0416 01:05:11.643239   61500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38953
	I0416 01:05:11.643578   61500 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:11.643673   61500 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:11.644133   61500 main.go:141] libmachine: Using API Version  1
	I0416 01:05:11.644150   61500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:11.644398   61500 main.go:141] libmachine: Using API Version  1
	I0416 01:05:11.644409   61500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:11.644508   61500 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:11.644786   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetState
	I0416 01:05:11.644823   61500 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:11.645630   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 01:05:11.645797   61500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:11.645824   61500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:11.648522   61500 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0416 01:05:11.646649   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 01:05:11.650173   61500 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0416 01:05:11.650185   61500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0416 01:05:11.650206   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 01:05:11.652524   61500 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 01:05:07.721798   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:05:08.214615   61267 pod_ready.go:81] duration metric: took 4m0.001005317s for pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace to be "Ready" ...
	E0416 01:05:08.214650   61267 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace to be "Ready" (will not retry!)
	I0416 01:05:08.214688   61267 pod_ready.go:38] duration metric: took 4m14.521894608s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:05:08.214750   61267 kubeadm.go:591] duration metric: took 4m22.563492336s to restartPrimaryControlPlane
	W0416 01:05:08.214821   61267 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0416 01:05:08.214857   61267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0416 01:05:11.654173   61500 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 01:05:11.654189   61500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0416 01:05:11.654207   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 01:05:11.654021   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 01:05:11.654488   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 01:05:11.654524   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 01:05:11.654823   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 01:05:11.655016   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 01:05:11.655159   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 01:05:11.655331   61500 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/no-preload-572602/id_rsa Username:docker}
	I0416 01:05:11.657706   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 01:05:11.658193   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 01:05:11.658214   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 01:05:11.658388   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 01:05:11.658585   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 01:05:11.658761   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 01:05:11.658937   61500 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/no-preload-572602/id_rsa Username:docker}
	I0416 01:05:11.669485   61500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34717
	I0416 01:05:11.669878   61500 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:11.670340   61500 main.go:141] libmachine: Using API Version  1
	I0416 01:05:11.670352   61500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:11.670714   61500 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:11.670887   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetState
	I0416 01:05:11.672571   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 01:05:11.672888   61500 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0416 01:05:11.672900   61500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0416 01:05:11.672912   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 01:05:11.675816   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 01:05:11.676163   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 01:05:11.676182   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 01:05:11.676335   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 01:05:11.676513   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 01:05:11.676657   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 01:05:11.676799   61500 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/no-preload-572602/id_rsa Username:docker}
	I0416 01:05:11.822229   61500 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 01:05:11.850495   61500 node_ready.go:35] waiting up to 6m0s for node "no-preload-572602" to be "Ready" ...
	I0416 01:05:11.868828   61500 node_ready.go:49] node "no-preload-572602" has status "Ready":"True"
	I0416 01:05:11.868852   61500 node_ready.go:38] duration metric: took 18.327813ms for node "no-preload-572602" to be "Ready" ...
	I0416 01:05:11.868860   61500 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:05:11.877018   61500 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:11.884190   61500 pod_ready.go:92] pod "etcd-no-preload-572602" in "kube-system" namespace has status "Ready":"True"
	I0416 01:05:11.884221   61500 pod_ready.go:81] duration metric: took 7.173699ms for pod "etcd-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:11.884234   61500 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:11.901639   61500 pod_ready.go:92] pod "kube-apiserver-no-preload-572602" in "kube-system" namespace has status "Ready":"True"
	I0416 01:05:11.901672   61500 pod_ready.go:81] duration metric: took 17.430111ms for pod "kube-apiserver-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:11.901684   61500 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:11.911839   61500 pod_ready.go:92] pod "kube-controller-manager-no-preload-572602" in "kube-system" namespace has status "Ready":"True"
	I0416 01:05:11.911871   61500 pod_ready.go:81] duration metric: took 10.178219ms for pod "kube-controller-manager-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:11.911885   61500 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:11.936265   61500 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0416 01:05:11.936293   61500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0416 01:05:11.939406   61500 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0416 01:05:11.942233   61500 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 01:05:11.963094   61500 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0416 01:05:11.963123   61500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0416 01:05:12.027316   61500 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0416 01:05:12.027341   61500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0416 01:05:12.150413   61500 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0416 01:05:12.387284   61500 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:12.387310   61500 main.go:141] libmachine: (no-preload-572602) Calling .Close
	I0416 01:05:12.387640   61500 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:12.387665   61500 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:12.387674   61500 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:12.387682   61500 main.go:141] libmachine: (no-preload-572602) Calling .Close
	I0416 01:05:12.387973   61500 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:12.387991   61500 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:12.395148   61500 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:12.395179   61500 main.go:141] libmachine: (no-preload-572602) Calling .Close
	I0416 01:05:12.395459   61500 main.go:141] libmachine: (no-preload-572602) DBG | Closing plugin on server side
	I0416 01:05:12.395488   61500 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:12.395508   61500 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:12.930331   61500 pod_ready.go:92] pod "kube-scheduler-no-preload-572602" in "kube-system" namespace has status "Ready":"True"
	I0416 01:05:12.930362   61500 pod_ready.go:81] duration metric: took 1.01846846s for pod "kube-scheduler-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:12.930373   61500 pod_ready.go:38] duration metric: took 1.061502471s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:05:12.930390   61500 api_server.go:52] waiting for apiserver process to appear ...
	I0416 01:05:12.930454   61500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:05:12.990840   61500 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.048571147s)
	I0416 01:05:12.990905   61500 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:12.990919   61500 main.go:141] libmachine: (no-preload-572602) Calling .Close
	I0416 01:05:12.991246   61500 main.go:141] libmachine: (no-preload-572602) DBG | Closing plugin on server side
	I0416 01:05:12.991309   61500 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:12.991323   61500 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:12.991380   61500 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:12.991391   61500 main.go:141] libmachine: (no-preload-572602) Calling .Close
	I0416 01:05:12.991617   61500 main.go:141] libmachine: (no-preload-572602) DBG | Closing plugin on server side
	I0416 01:05:12.991669   61500 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:12.991690   61500 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:13.719959   61500 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.569495387s)
	I0416 01:05:13.720018   61500 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:13.720023   61500 api_server.go:72] duration metric: took 2.12614679s to wait for apiserver process to appear ...
	I0416 01:05:13.720046   61500 api_server.go:88] waiting for apiserver healthz status ...
	I0416 01:05:13.720066   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 01:05:13.720034   61500 main.go:141] libmachine: (no-preload-572602) Calling .Close
	I0416 01:05:13.720435   61500 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:13.720458   61500 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:13.720469   61500 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:13.720472   61500 main.go:141] libmachine: (no-preload-572602) DBG | Closing plugin on server side
	I0416 01:05:13.720477   61500 main.go:141] libmachine: (no-preload-572602) Calling .Close
	I0416 01:05:13.720670   61500 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:13.720681   61500 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:13.720691   61500 addons.go:470] Verifying addon metrics-server=true in "no-preload-572602"
	I0416 01:05:13.722348   61500 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0416 01:05:13.723686   61500 addons.go:505] duration metric: took 2.129734353s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0416 01:05:13.764481   61500 api_server.go:279] https://192.168.39.121:8443/healthz returned 200:
	ok
	I0416 01:05:13.771661   61500 api_server.go:141] control plane version: v1.30.0-rc.2
	I0416 01:05:13.771690   61500 api_server.go:131] duration metric: took 51.637739ms to wait for apiserver health ...
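
The api_server.go lines above show the health gate: minikube polls https://192.168.39.121:8443/healthz until it returns HTTP 200 with body "ok", then reads the control-plane version. A minimal sketch of such a probe, assuming certificate verification is skipped purely for brevity (the real client would trust the cluster CA):

    package sketch

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "strings"
        "time"
    )

    // waitAPIServerHealthy polls <baseURL>/healthz once per second until it
    // returns HTTP 200 with body "ok" or the timeout elapses.
    func waitAPIServerHealthy(baseURL string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(baseURL + "/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok" {
                    return nil
                }
            }
            time.Sleep(time.Second)
        }
        return fmt.Errorf("apiserver /healthz not ok after %s", timeout)
    }
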
	I0416 01:05:13.771698   61500 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 01:05:13.812701   61500 system_pods.go:59] 9 kube-system pods found
	I0416 01:05:13.812744   61500 system_pods.go:61] "coredns-7db6d8ff4d-2b5ht" [b8d48a4c-6efd-409a-98be-3ec5bf639470] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 01:05:13.812753   61500 system_pods.go:61] "coredns-7db6d8ff4d-p62sn" [36768eb2-2a22-48e1-b271-f262aa64e014] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 01:05:13.812761   61500 system_pods.go:61] "etcd-no-preload-572602" [c9ed4f86-07f3-48d6-948c-8c4243920512] Running
	I0416 01:05:13.812765   61500 system_pods.go:61] "kube-apiserver-no-preload-572602" [a92513a3-4129-41a2-a603-4a69f4e72041] Running
	I0416 01:05:13.812768   61500 system_pods.go:61] "kube-controller-manager-no-preload-572602" [ce013e5b-5d3c-42de-8a00-c7041288740b] Running
	I0416 01:05:13.812774   61500 system_pods.go:61] "kube-proxy-6cjlc" [2c4d9303-8c08-4385-a6b9-63dda0d9a274] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0416 01:05:13.812777   61500 system_pods.go:61] "kube-scheduler-no-preload-572602" [a9f71ca2-f211-4e6d-9940-4e0af5d4287e] Running
	I0416 01:05:13.812783   61500 system_pods.go:61] "metrics-server-569cc877fc-5j5rc" [3d8f1a41-8e7d-4d1b-9a07-25c8fac3b782] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 01:05:13.812792   61500 system_pods.go:61] "storage-provisioner" [b9ac9c93-0e50-4598-a9c4-a12e4ff14063] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0416 01:05:13.812802   61500 system_pods.go:74] duration metric: took 41.098881ms to wait for pod list to return data ...
	I0416 01:05:13.812811   61500 default_sa.go:34] waiting for default service account to be created ...
	I0416 01:05:13.847288   61500 default_sa.go:45] found service account: "default"
	I0416 01:05:13.847323   61500 default_sa.go:55] duration metric: took 34.500938ms for default service account to be created ...
	I0416 01:05:13.847335   61500 system_pods.go:116] waiting for k8s-apps to be running ...
	I0416 01:05:13.877107   61500 system_pods.go:86] 9 kube-system pods found
	I0416 01:05:13.877150   61500 system_pods.go:89] "coredns-7db6d8ff4d-2b5ht" [b8d48a4c-6efd-409a-98be-3ec5bf639470] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 01:05:13.877175   61500 system_pods.go:89] "coredns-7db6d8ff4d-p62sn" [36768eb2-2a22-48e1-b271-f262aa64e014] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 01:05:13.877185   61500 system_pods.go:89] "etcd-no-preload-572602" [c9ed4f86-07f3-48d6-948c-8c4243920512] Running
	I0416 01:05:13.877194   61500 system_pods.go:89] "kube-apiserver-no-preload-572602" [a92513a3-4129-41a2-a603-4a69f4e72041] Running
	I0416 01:05:13.877200   61500 system_pods.go:89] "kube-controller-manager-no-preload-572602" [ce013e5b-5d3c-42de-8a00-c7041288740b] Running
	I0416 01:05:13.877209   61500 system_pods.go:89] "kube-proxy-6cjlc" [2c4d9303-8c08-4385-a6b9-63dda0d9a274] Running
	I0416 01:05:13.877215   61500 system_pods.go:89] "kube-scheduler-no-preload-572602" [a9f71ca2-f211-4e6d-9940-4e0af5d4287e] Running
	I0416 01:05:13.877224   61500 system_pods.go:89] "metrics-server-569cc877fc-5j5rc" [3d8f1a41-8e7d-4d1b-9a07-25c8fac3b782] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 01:05:13.877237   61500 system_pods.go:89] "storage-provisioner" [b9ac9c93-0e50-4598-a9c4-a12e4ff14063] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0416 01:05:13.877257   61500 retry.go:31] will retry after 239.706522ms: missing components: kube-dns
	I0416 01:05:14.128770   61500 system_pods.go:86] 9 kube-system pods found
	I0416 01:05:14.128814   61500 system_pods.go:89] "coredns-7db6d8ff4d-2b5ht" [b8d48a4c-6efd-409a-98be-3ec5bf639470] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 01:05:14.128827   61500 system_pods.go:89] "coredns-7db6d8ff4d-p62sn" [36768eb2-2a22-48e1-b271-f262aa64e014] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 01:05:14.128836   61500 system_pods.go:89] "etcd-no-preload-572602" [c9ed4f86-07f3-48d6-948c-8c4243920512] Running
	I0416 01:05:14.128850   61500 system_pods.go:89] "kube-apiserver-no-preload-572602" [a92513a3-4129-41a2-a603-4a69f4e72041] Running
	I0416 01:05:14.128857   61500 system_pods.go:89] "kube-controller-manager-no-preload-572602" [ce013e5b-5d3c-42de-8a00-c7041288740b] Running
	I0416 01:05:14.128864   61500 system_pods.go:89] "kube-proxy-6cjlc" [2c4d9303-8c08-4385-a6b9-63dda0d9a274] Running
	I0416 01:05:14.128871   61500 system_pods.go:89] "kube-scheduler-no-preload-572602" [a9f71ca2-f211-4e6d-9940-4e0af5d4287e] Running
	I0416 01:05:14.128885   61500 system_pods.go:89] "metrics-server-569cc877fc-5j5rc" [3d8f1a41-8e7d-4d1b-9a07-25c8fac3b782] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 01:05:14.128893   61500 system_pods.go:89] "storage-provisioner" [b9ac9c93-0e50-4598-a9c4-a12e4ff14063] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0416 01:05:14.128903   61500 system_pods.go:126] duration metric: took 281.561287ms to wait for k8s-apps to be running ...
	I0416 01:05:14.128912   61500 system_svc.go:44] waiting for kubelet service to be running ....
	I0416 01:05:14.128978   61500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 01:05:14.145557   61500 system_svc.go:56] duration metric: took 16.639555ms WaitForService to wait for kubelet
	I0416 01:05:14.145582   61500 kubeadm.go:576] duration metric: took 2.551711031s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 01:05:14.145605   61500 node_conditions.go:102] verifying NodePressure condition ...
	I0416 01:05:14.149984   61500 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 01:05:14.150009   61500 node_conditions.go:123] node cpu capacity is 2
	I0416 01:05:14.150021   61500 node_conditions.go:105] duration metric: took 4.410684ms to run NodePressure ...
	I0416 01:05:14.150034   61500 start.go:240] waiting for startup goroutines ...
	I0416 01:05:14.150044   61500 start.go:245] waiting for cluster config update ...
	I0416 01:05:14.150064   61500 start.go:254] writing updated cluster config ...
	I0416 01:05:14.150354   61500 ssh_runner.go:195] Run: rm -f paused
	I0416 01:05:14.198605   61500 start.go:600] kubectl: 1.29.3, cluster: 1.30.0-rc.2 (minor skew: 1)
	I0416 01:05:14.200584   61500 out.go:177] * Done! kubectl is now configured to use "no-preload-572602" cluster and "default" namespace by default
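In the 61500 block above, startup is gated on the kube-system pods: the wait loop retries while kube-dns (CoreDNS) is still Pending ("missing components: kube-dns"), then confirms the kubelet service, apiserver health, and node conditions before printing "Done!". A minimal sketch of the equivalent pod-level check, assuming the kubectl context name matches the profile name (which is how minikube configures kubeconfig by default) and using the CoreDNS label the wait loop watches:

	kubectl --context no-preload-572602 -n kube-system get pods -l k8s-app=kube-dns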
	I0416 01:05:14.258629   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:05:14.258807   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:05:19.748784   62747 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.135339447s)
	I0416 01:05:19.748866   62747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 01:05:19.766280   62747 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 01:05:19.777541   62747 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 01:05:19.788086   62747 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 01:05:19.788112   62747 kubeadm.go:156] found existing configuration files:
	
	I0416 01:05:19.788154   62747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 01:05:19.798135   62747 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 01:05:19.798211   62747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 01:05:19.809231   62747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 01:05:19.819447   62747 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 01:05:19.819519   62747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 01:05:19.830223   62747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 01:05:19.840460   62747 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 01:05:19.840528   62747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 01:05:19.851506   62747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 01:05:19.861422   62747 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 01:05:19.861481   62747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 01:05:19.871239   62747 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 01:05:20.089849   62747 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 01:05:29.079351   62747 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0416 01:05:29.079435   62747 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 01:05:29.079534   62747 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 01:05:29.079679   62747 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 01:05:29.079817   62747 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 01:05:29.079934   62747 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 01:05:29.081701   62747 out.go:204]   - Generating certificates and keys ...
	I0416 01:05:29.081801   62747 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 01:05:29.081922   62747 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 01:05:29.082035   62747 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0416 01:05:29.082125   62747 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0416 01:05:29.082300   62747 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0416 01:05:29.082404   62747 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0416 01:05:29.082504   62747 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0416 01:05:29.082556   62747 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0416 01:05:29.082621   62747 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0416 01:05:29.082737   62747 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0416 01:05:29.082798   62747 kubeadm.go:309] [certs] Using the existing "sa" key
	I0416 01:05:29.082867   62747 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 01:05:29.082955   62747 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 01:05:29.083042   62747 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0416 01:05:29.083129   62747 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 01:05:29.083209   62747 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 01:05:29.083278   62747 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 01:05:29.083385   62747 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 01:05:29.083467   62747 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 01:05:29.085050   62747 out.go:204]   - Booting up control plane ...
	I0416 01:05:29.085178   62747 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 01:05:29.085289   62747 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 01:05:29.085374   62747 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 01:05:29.085499   62747 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 01:05:29.085610   62747 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 01:05:29.085671   62747 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 01:05:29.085942   62747 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 01:05:29.086066   62747 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003717 seconds
	I0416 01:05:29.086227   62747 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0416 01:05:29.086384   62747 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0416 01:05:29.086474   62747 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0416 01:05:29.086755   62747 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-617092 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0416 01:05:29.086843   62747 kubeadm.go:309] [bootstrap-token] Using token: 33ihar.pt6l329bwmm6yhnr
	I0416 01:05:29.088273   62747 out.go:204]   - Configuring RBAC rules ...
	I0416 01:05:29.088408   62747 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0416 01:05:29.088516   62747 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0416 01:05:29.088712   62747 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0416 01:05:29.088898   62747 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0416 01:05:29.089046   62747 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0416 01:05:29.089196   62747 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0416 01:05:29.089346   62747 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0416 01:05:29.089413   62747 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0416 01:05:29.089486   62747 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0416 01:05:29.089496   62747 kubeadm.go:309] 
	I0416 01:05:29.089581   62747 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0416 01:05:29.089591   62747 kubeadm.go:309] 
	I0416 01:05:29.089707   62747 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0416 01:05:29.089719   62747 kubeadm.go:309] 
	I0416 01:05:29.089768   62747 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0416 01:05:29.089855   62747 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0416 01:05:29.089932   62747 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0416 01:05:29.089942   62747 kubeadm.go:309] 
	I0416 01:05:29.090020   62747 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0416 01:05:29.090041   62747 kubeadm.go:309] 
	I0416 01:05:29.090111   62747 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0416 01:05:29.090120   62747 kubeadm.go:309] 
	I0416 01:05:29.090193   62747 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0416 01:05:29.090350   62747 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0416 01:05:29.090434   62747 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0416 01:05:29.090445   62747 kubeadm.go:309] 
	I0416 01:05:29.090560   62747 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0416 01:05:29.090661   62747 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0416 01:05:29.090667   62747 kubeadm.go:309] 
	I0416 01:05:29.090773   62747 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 33ihar.pt6l329bwmm6yhnr \
	I0416 01:05:29.090921   62747 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b25013486716312e69a135916990864bd549ec06035b8322dd39250241b50fde \
	I0416 01:05:29.090942   62747 kubeadm.go:309] 	--control-plane 
	I0416 01:05:29.090948   62747 kubeadm.go:309] 
	I0416 01:05:29.091017   62747 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0416 01:05:29.091034   62747 kubeadm.go:309] 
	I0416 01:05:29.091153   62747 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 33ihar.pt6l329bwmm6yhnr \
	I0416 01:05:29.091299   62747 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b25013486716312e69a135916990864bd549ec06035b8322dd39250241b50fde 
	I0416 01:05:29.091313   62747 cni.go:84] Creating CNI manager for ""
	I0416 01:05:29.091323   62747 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 01:05:29.094154   62747 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0416 01:05:29.095747   62747 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0416 01:05:29.153706   62747 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0416 01:05:29.195477   62747 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0416 01:05:29.195540   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:29.195540   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-617092 minikube.k8s.io/updated_at=2024_04_16T01_05_29_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=e98a202b00bb48daab8bbee2f0c7bcf1556f8388 minikube.k8s.io/name=embed-certs-617092 minikube.k8s.io/primary=true
	I0416 01:05:29.551888   62747 ops.go:34] apiserver oom_adj: -16
	I0416 01:05:29.552023   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:30.053117   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:30.552298   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:31.052317   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:31.553057   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:32.052852   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:32.552921   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:34.259492   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:05:34.259704   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
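The [kubelet-check] failures from process 62139 interleaved above mean kubeadm could not reach the kubelet's health endpoint on that node. The probe kubeadm reports is a plain HTTP GET, so (assuming shell access to the affected node) the same check can be repeated directly with the command quoted in the log:

	curl -sSL http://localhost:10248/healthz

A "connection refused" result, as seen here, indicates the kubelet is not listening on its healthz port (10248) at all, rather than responding with an unhealthy status.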
	I0416 01:05:33.052747   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:33.552301   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:34.052922   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:34.552338   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:35.052106   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:35.552911   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:36.052814   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:36.552077   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:37.052666   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:37.552057   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:38.053198   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:38.552163   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:39.052589   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:39.552701   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:40.053069   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:40.552436   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:41.053071   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:41.158552   62747 kubeadm.go:1107] duration metric: took 11.963074905s to wait for elevateKubeSystemPrivileges
	W0416 01:05:41.158601   62747 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0416 01:05:41.158611   62747 kubeadm.go:393] duration metric: took 5m14.369080866s to StartCluster
	I0416 01:05:41.158638   62747 settings.go:142] acquiring lock: {Name:mk6e42a297b4f7bfb79727f203ae36d752cbb6a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:05:41.158736   62747 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0416 01:05:41.160903   62747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/kubeconfig: {Name:mkbb3b028de7d57df8335e83f6dfa1b0eacb2fb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:05:41.161229   62747 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.225 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0416 01:05:41.163312   62747 out.go:177] * Verifying Kubernetes components...
	I0416 01:05:40.562916   61267 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.348033752s)
	I0416 01:05:40.562991   61267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 01:05:40.580700   61267 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 01:05:40.592069   61267 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 01:05:40.606450   61267 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 01:05:40.606477   61267 kubeadm.go:156] found existing configuration files:
	
	I0416 01:05:40.606531   61267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0416 01:05:40.617547   61267 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 01:05:40.617622   61267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 01:05:40.631465   61267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0416 01:05:40.644464   61267 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 01:05:40.644553   61267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 01:05:40.655929   61267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0416 01:05:40.664995   61267 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 01:05:40.665059   61267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 01:05:40.674477   61267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0416 01:05:40.683500   61267 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 01:05:40.683570   61267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 01:05:40.693774   61267 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 01:05:40.753612   61267 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0416 01:05:40.753717   61267 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 01:05:40.911483   61267 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 01:05:40.911609   61267 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 01:05:40.911748   61267 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 01:05:41.170137   61267 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 01:05:41.161331   62747 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0416 01:05:41.161434   62747 config.go:182] Loaded profile config "embed-certs-617092": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 01:05:41.165023   62747 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-617092"
	I0416 01:05:41.165044   62747 addons.go:69] Setting metrics-server=true in profile "embed-certs-617092"
	I0416 01:05:41.165081   62747 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-617092"
	I0416 01:05:41.165084   62747 addons.go:234] Setting addon metrics-server=true in "embed-certs-617092"
	W0416 01:05:41.165090   62747 addons.go:243] addon storage-provisioner should already be in state true
	W0416 01:05:41.165091   62747 addons.go:243] addon metrics-server should already be in state true
	I0416 01:05:41.165117   62747 host.go:66] Checking if "embed-certs-617092" exists ...
	I0416 01:05:41.165052   62747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 01:05:41.165025   62747 addons.go:69] Setting default-storageclass=true in profile "embed-certs-617092"
	I0416 01:05:41.165117   62747 host.go:66] Checking if "embed-certs-617092" exists ...
	I0416 01:05:41.165174   62747 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-617092"
	I0416 01:05:41.165464   62747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:41.165480   62747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:41.165549   62747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:41.165569   62747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:41.165549   62747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:41.165651   62747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:41.183063   62747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46083
	I0416 01:05:41.183551   62747 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:41.184135   62747 main.go:141] libmachine: Using API Version  1
	I0416 01:05:41.184158   62747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:41.184578   62747 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:41.185298   62747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:41.185337   62747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:41.185763   62747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43633
	I0416 01:05:41.185823   62747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46197
	I0416 01:05:41.186233   62747 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:41.186400   62747 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:41.186701   62747 main.go:141] libmachine: Using API Version  1
	I0416 01:05:41.186726   62747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:41.186861   62747 main.go:141] libmachine: Using API Version  1
	I0416 01:05:41.186881   62747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:41.187211   62747 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:41.187233   62747 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:41.187415   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetState
	I0416 01:05:41.187763   62747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:41.187781   62747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:41.191018   62747 addons.go:234] Setting addon default-storageclass=true in "embed-certs-617092"
	W0416 01:05:41.191038   62747 addons.go:243] addon default-storageclass should already be in state true
	I0416 01:05:41.191068   62747 host.go:66] Checking if "embed-certs-617092" exists ...
	I0416 01:05:41.191346   62747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:41.191384   62747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:41.202643   62747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45285
	I0416 01:05:41.203122   62747 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:41.203607   62747 main.go:141] libmachine: Using API Version  1
	I0416 01:05:41.203627   62747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:41.203952   62747 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:41.204124   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetState
	I0416 01:05:41.204325   62747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45643
	I0416 01:05:41.204721   62747 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:41.205188   62747 main.go:141] libmachine: Using API Version  1
	I0416 01:05:41.205207   62747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:41.205860   62747 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:41.206056   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetState
	I0416 01:05:41.206084   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 01:05:41.208051   62747 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0416 01:05:41.209179   62747 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0416 01:05:41.209197   62747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0416 01:05:41.207724   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 01:05:41.209214   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:05:41.210728   62747 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 01:05:41.171860   61267 out.go:204]   - Generating certificates and keys ...
	I0416 01:05:41.171969   61267 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 01:05:41.172043   61267 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 01:05:41.172139   61267 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0416 01:05:41.172803   61267 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0416 01:05:41.173065   61267 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0416 01:05:41.173653   61267 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0416 01:05:41.174077   61267 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0416 01:05:41.174586   61267 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0416 01:05:41.175034   61267 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0416 01:05:41.175570   61267 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0416 01:05:41.175888   61267 kubeadm.go:309] [certs] Using the existing "sa" key
	I0416 01:05:41.175968   61267 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 01:05:41.439471   61267 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 01:05:41.524693   61267 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0416 01:05:42.001762   61267 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 01:05:42.139805   61267 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 01:05:42.198091   61267 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 01:05:42.198762   61267 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 01:05:42.202915   61267 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 01:05:42.204549   61267 out.go:204]   - Booting up control plane ...
	I0416 01:05:42.204673   61267 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 01:05:42.204816   61267 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 01:05:42.205761   61267 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 01:05:42.225187   61267 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 01:05:42.225917   61267 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 01:05:42.225972   61267 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 01:05:42.367087   61267 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 01:05:41.210575   62747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34385
	I0416 01:05:41.211905   62747 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 01:05:41.211923   62747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0416 01:05:41.211942   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:05:41.212835   62747 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:41.212972   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:05:41.213577   62747 main.go:141] libmachine: Using API Version  1
	I0416 01:05:41.213597   62747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:41.213610   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:05:41.213628   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:05:41.214039   62747 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:41.214657   62747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:41.214693   62747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:41.215005   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:05:41.215635   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:05:41.215905   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:05:41.215933   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:05:41.216058   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:05:41.216109   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:05:41.216242   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:05:41.216303   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:05:41.216447   62747 sshutil.go:53] new ssh client: &{IP:192.168.61.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/embed-certs-617092/id_rsa Username:docker}
	I0416 01:05:41.216466   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:05:41.216544   62747 sshutil.go:53] new ssh client: &{IP:192.168.61.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/embed-certs-617092/id_rsa Username:docker}
	I0416 01:05:41.236284   62747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40007
	I0416 01:05:41.237670   62747 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:41.238270   62747 main.go:141] libmachine: Using API Version  1
	I0416 01:05:41.238288   62747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:41.241258   62747 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:41.241453   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetState
	I0416 01:05:41.243397   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 01:05:41.243724   62747 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0416 01:05:41.243740   62747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0416 01:05:41.243758   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:05:41.247426   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:05:41.248034   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:05:41.248144   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:05:41.248423   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:05:41.249376   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:05:41.249600   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:05:41.249799   62747 sshutil.go:53] new ssh client: &{IP:192.168.61.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/embed-certs-617092/id_rsa Username:docker}
	I0416 01:05:41.414823   62747 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 01:05:41.436007   62747 node_ready.go:35] waiting up to 6m0s for node "embed-certs-617092" to be "Ready" ...
	I0416 01:05:41.452344   62747 node_ready.go:49] node "embed-certs-617092" has status "Ready":"True"
	I0416 01:05:41.452370   62747 node_ready.go:38] duration metric: took 16.328329ms for node "embed-certs-617092" to be "Ready" ...
	I0416 01:05:41.452382   62747 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:05:41.467673   62747 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:41.477985   62747 pod_ready.go:92] pod "etcd-embed-certs-617092" in "kube-system" namespace has status "Ready":"True"
	I0416 01:05:41.478019   62747 pod_ready.go:81] duration metric: took 10.312538ms for pod "etcd-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:41.478032   62747 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:41.485978   62747 pod_ready.go:92] pod "kube-apiserver-embed-certs-617092" in "kube-system" namespace has status "Ready":"True"
	I0416 01:05:41.486003   62747 pod_ready.go:81] duration metric: took 7.961029ms for pod "kube-apiserver-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:41.486015   62747 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:41.491586   62747 pod_ready.go:92] pod "kube-controller-manager-embed-certs-617092" in "kube-system" namespace has status "Ready":"True"
	I0416 01:05:41.491608   62747 pod_ready.go:81] duration metric: took 5.584682ms for pod "kube-controller-manager-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:41.491619   62747 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-p4rh9" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:41.591874   62747 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0416 01:05:41.630528   62747 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0416 01:05:41.630554   62747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0416 01:05:41.653822   62747 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 01:05:41.718742   62747 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0416 01:05:41.718775   62747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0416 01:05:41.750701   62747 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0416 01:05:41.750725   62747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0416 01:05:41.798873   62747 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0416 01:05:41.961373   62747 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:41.961415   62747 main.go:141] libmachine: (embed-certs-617092) Calling .Close
	I0416 01:05:41.961857   62747 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:41.961879   62747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:41.961890   62747 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:41.961909   62747 main.go:141] libmachine: (embed-certs-617092) Calling .Close
	I0416 01:05:41.962200   62747 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:41.962205   62747 main.go:141] libmachine: (embed-certs-617092) DBG | Closing plugin on server side
	I0416 01:05:41.962216   62747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:41.974163   62747 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:41.974189   62747 main.go:141] libmachine: (embed-certs-617092) Calling .Close
	I0416 01:05:41.974517   62747 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:41.974537   62747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:42.721070   62747 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.067206266s)
	I0416 01:05:42.721119   62747 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:42.721130   62747 main.go:141] libmachine: (embed-certs-617092) Calling .Close
	I0416 01:05:42.721551   62747 main.go:141] libmachine: (embed-certs-617092) DBG | Closing plugin on server side
	I0416 01:05:42.721594   62747 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:42.721613   62747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:42.721636   62747 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:42.721648   62747 main.go:141] libmachine: (embed-certs-617092) Calling .Close
	I0416 01:05:42.721972   62747 main.go:141] libmachine: (embed-certs-617092) DBG | Closing plugin on server side
	I0416 01:05:42.721987   62747 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:42.722006   62747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:43.123544   62747 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.324616723s)
	I0416 01:05:43.123593   62747 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:43.123608   62747 main.go:141] libmachine: (embed-certs-617092) Calling .Close
	I0416 01:05:43.123867   62747 main.go:141] libmachine: (embed-certs-617092) DBG | Closing plugin on server side
	I0416 01:05:43.123906   62747 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:43.123913   62747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:43.123922   62747 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:43.123928   62747 main.go:141] libmachine: (embed-certs-617092) Calling .Close
	I0416 01:05:43.124218   62747 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:43.124234   62747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:43.124234   62747 main.go:141] libmachine: (embed-certs-617092) DBG | Closing plugin on server side
	I0416 01:05:43.124255   62747 addons.go:470] Verifying addon metrics-server=true in "embed-certs-617092"
	I0416 01:05:43.125829   62747 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0416 01:05:43.127138   62747 addons.go:505] duration metric: took 1.965815007s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0416 01:05:43.536374   62747 pod_ready.go:102] pod "kube-proxy-p4rh9" in "kube-system" namespace has status "Ready":"False"
	I0416 01:05:44.000571   62747 pod_ready.go:92] pod "kube-proxy-p4rh9" in "kube-system" namespace has status "Ready":"True"
	I0416 01:05:44.000594   62747 pod_ready.go:81] duration metric: took 2.508967748s for pod "kube-proxy-p4rh9" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:44.000603   62747 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:44.006516   62747 pod_ready.go:92] pod "kube-scheduler-embed-certs-617092" in "kube-system" namespace has status "Ready":"True"
	I0416 01:05:44.006540   62747 pod_ready.go:81] duration metric: took 5.930755ms for pod "kube-scheduler-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:44.006546   62747 pod_ready.go:38] duration metric: took 2.554153393s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:05:44.006560   62747 api_server.go:52] waiting for apiserver process to appear ...
	I0416 01:05:44.006612   62747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:05:44.030705   62747 api_server.go:72] duration metric: took 2.869432993s to wait for apiserver process to appear ...
	I0416 01:05:44.030737   62747 api_server.go:88] waiting for apiserver healthz status ...
	I0416 01:05:44.030759   62747 api_server.go:253] Checking apiserver healthz at https://192.168.61.225:8443/healthz ...
	I0416 01:05:44.035576   62747 api_server.go:279] https://192.168.61.225:8443/healthz returned 200:
	ok
	I0416 01:05:44.037948   62747 api_server.go:141] control plane version: v1.29.3
	I0416 01:05:44.037973   62747 api_server.go:131] duration metric: took 7.228106ms to wait for apiserver health ...
	I0416 01:05:44.037983   62747 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 01:05:44.044543   62747 system_pods.go:59] 9 kube-system pods found
	I0416 01:05:44.044574   62747 system_pods.go:61] "coredns-76f75df574-2q58l" [e9b9d000-738b-4110-8757-17f76197285c] Running
	I0416 01:05:44.044581   62747 system_pods.go:61] "coredns-76f75df574-h8k4k" [1b114848-1137-4215-a966-03db39e4de23] Running
	I0416 01:05:44.044586   62747 system_pods.go:61] "etcd-embed-certs-617092" [f65e9307-4e12-4ac4-baca-7e1cfd7415d5] Running
	I0416 01:05:44.044591   62747 system_pods.go:61] "kube-apiserver-embed-certs-617092" [f55e02ce-45cf-4f6e-b8d7-7f305f22ea52] Running
	I0416 01:05:44.044596   62747 system_pods.go:61] "kube-controller-manager-embed-certs-617092" [d16739c1-36f4-4748-8533-fcc6cea0adee] Running
	I0416 01:05:44.044601   62747 system_pods.go:61] "kube-proxy-p4rh9" [42041028-d085-4ec4-8213-da3af0d5290e] Running
	I0416 01:05:44.044606   62747 system_pods.go:61] "kube-scheduler-embed-certs-617092" [d61e24fe-a5e3-41bf-b212-75764a036a26] Running
	I0416 01:05:44.044614   62747 system_pods.go:61] "metrics-server-57f55c9bc5-j5clp" [99808b2d-344f-43b7-a29c-01f0a2026aa8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 01:05:44.044623   62747 system_pods.go:61] "storage-provisioner" [5a62c0f7-0b15-48f3-9c17-d5966d39fbd5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0416 01:05:44.044635   62747 system_pods.go:74] duration metric: took 6.6454ms to wait for pod list to return data ...
	I0416 01:05:44.044652   62747 default_sa.go:34] waiting for default service account to be created ...
	I0416 01:05:44.241344   62747 default_sa.go:45] found service account: "default"
	I0416 01:05:44.241370   62747 default_sa.go:55] duration metric: took 196.710973ms for default service account to be created ...
	I0416 01:05:44.241379   62747 system_pods.go:116] waiting for k8s-apps to be running ...
	I0416 01:05:44.450798   62747 system_pods.go:86] 9 kube-system pods found
	I0416 01:05:44.450825   62747 system_pods.go:89] "coredns-76f75df574-2q58l" [e9b9d000-738b-4110-8757-17f76197285c] Running
	I0416 01:05:44.450831   62747 system_pods.go:89] "coredns-76f75df574-h8k4k" [1b114848-1137-4215-a966-03db39e4de23] Running
	I0416 01:05:44.450835   62747 system_pods.go:89] "etcd-embed-certs-617092" [f65e9307-4e12-4ac4-baca-7e1cfd7415d5] Running
	I0416 01:05:44.450839   62747 system_pods.go:89] "kube-apiserver-embed-certs-617092" [f55e02ce-45cf-4f6e-b8d7-7f305f22ea52] Running
	I0416 01:05:44.450844   62747 system_pods.go:89] "kube-controller-manager-embed-certs-617092" [d16739c1-36f4-4748-8533-fcc6cea0adee] Running
	I0416 01:05:44.450848   62747 system_pods.go:89] "kube-proxy-p4rh9" [42041028-d085-4ec4-8213-da3af0d5290e] Running
	I0416 01:05:44.450851   62747 system_pods.go:89] "kube-scheduler-embed-certs-617092" [d61e24fe-a5e3-41bf-b212-75764a036a26] Running
	I0416 01:05:44.450858   62747 system_pods.go:89] "metrics-server-57f55c9bc5-j5clp" [99808b2d-344f-43b7-a29c-01f0a2026aa8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 01:05:44.450864   62747 system_pods.go:89] "storage-provisioner" [5a62c0f7-0b15-48f3-9c17-d5966d39fbd5] Running
	I0416 01:05:44.450871   62747 system_pods.go:126] duration metric: took 209.487599ms to wait for k8s-apps to be running ...
	I0416 01:05:44.450889   62747 system_svc.go:44] waiting for kubelet service to be running ....
	I0416 01:05:44.450943   62747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 01:05:44.470820   62747 system_svc.go:56] duration metric: took 19.925743ms WaitForService to wait for kubelet
	I0416 01:05:44.470853   62747 kubeadm.go:576] duration metric: took 3.309585995s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 01:05:44.470876   62747 node_conditions.go:102] verifying NodePressure condition ...
	I0416 01:05:44.642093   62747 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 01:05:44.642123   62747 node_conditions.go:123] node cpu capacity is 2
	I0416 01:05:44.642135   62747 node_conditions.go:105] duration metric: took 171.253415ms to run NodePressure ...
	I0416 01:05:44.642149   62747 start.go:240] waiting for startup goroutines ...
	I0416 01:05:44.642158   62747 start.go:245] waiting for cluster config update ...
	I0416 01:05:44.642171   62747 start.go:254] writing updated cluster config ...
	I0416 01:05:44.642519   62747 ssh_runner.go:195] Run: rm -f paused
	I0416 01:05:44.707141   62747 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0416 01:05:44.709274   62747 out.go:177] * Done! kubectl is now configured to use "embed-certs-617092" cluster and "default" namespace by default
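	The readiness sequence logged above for this profile (system pods, default service account, NodePressure) can be spot-checked by hand against the same cluster once minikube reports it is configured; a minimal sketch, assuming the kubectl context name matches the profile reported in the log:

		kubectl --context embed-certs-617092 get pods -n kube-system
		kubectl --context embed-certs-617092 get sa default
		kubectl --context embed-certs-617092 describe node embed-certs-617092 | grep -A5 Conditions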
	I0416 01:05:48.372574   61267 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.002543 seconds
	I0416 01:05:48.385076   61267 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0416 01:05:48.406058   61267 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0416 01:05:48.938329   61267 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0416 01:05:48.938556   61267 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-653942 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0416 01:05:49.458321   61267 kubeadm.go:309] [bootstrap-token] Using token: 5ddaoe.tvzldvzlkbeta1a9
	I0416 01:05:49.459891   61267 out.go:204]   - Configuring RBAC rules ...
	I0416 01:05:49.460064   61267 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0416 01:05:49.465799   61267 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0416 01:05:49.477346   61267 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0416 01:05:49.482154   61267 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0416 01:05:49.485769   61267 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0416 01:05:49.489199   61267 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0416 01:05:49.504774   61267 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0416 01:05:49.770133   61267 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0416 01:05:49.872777   61267 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0416 01:05:49.874282   61267 kubeadm.go:309] 
	I0416 01:05:49.874384   61267 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0416 01:05:49.874400   61267 kubeadm.go:309] 
	I0416 01:05:49.874560   61267 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0416 01:05:49.874580   61267 kubeadm.go:309] 
	I0416 01:05:49.874602   61267 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0416 01:05:49.874673   61267 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0416 01:05:49.874754   61267 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0416 01:05:49.874766   61267 kubeadm.go:309] 
	I0416 01:05:49.874853   61267 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0416 01:05:49.874878   61267 kubeadm.go:309] 
	I0416 01:05:49.874944   61267 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0416 01:05:49.874956   61267 kubeadm.go:309] 
	I0416 01:05:49.875019   61267 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0416 01:05:49.875141   61267 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0416 01:05:49.875246   61267 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0416 01:05:49.875257   61267 kubeadm.go:309] 
	I0416 01:05:49.875432   61267 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0416 01:05:49.875552   61267 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0416 01:05:49.875562   61267 kubeadm.go:309] 
	I0416 01:05:49.875657   61267 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token 5ddaoe.tvzldvzlkbeta1a9 \
	I0416 01:05:49.875754   61267 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b25013486716312e69a135916990864bd549ec06035b8322dd39250241b50fde \
	I0416 01:05:49.875774   61267 kubeadm.go:309] 	--control-plane 
	I0416 01:05:49.875780   61267 kubeadm.go:309] 
	I0416 01:05:49.875859   61267 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0416 01:05:49.875869   61267 kubeadm.go:309] 
	I0416 01:05:49.875949   61267 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token 5ddaoe.tvzldvzlkbeta1a9 \
	I0416 01:05:49.876085   61267 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b25013486716312e69a135916990864bd549ec06035b8322dd39250241b50fde 
	I0416 01:05:49.876640   61267 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
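	The preflight warning above does not block the run, but it can be cleared on the node with the command kubeadm itself suggests:

		sudo systemctl enable kubelet.service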
	I0416 01:05:49.876666   61267 cni.go:84] Creating CNI manager for ""
	I0416 01:05:49.876676   61267 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 01:05:49.878703   61267 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0416 01:05:49.880070   61267 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0416 01:05:49.897752   61267 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
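	The bridge CNI file written in this step can be inspected on the node afterwards; a sketch, assuming SSH access through the same minikube binary and profile:

		out/minikube-linux-amd64 -p default-k8s-diff-port-653942 ssh "sudo cat /etc/cni/net.d/1-k8s.conflist"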
	I0416 01:05:49.969146   61267 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0416 01:05:49.969228   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:49.969228   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-653942 minikube.k8s.io/updated_at=2024_04_16T01_05_49_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=e98a202b00bb48daab8bbee2f0c7bcf1556f8388 minikube.k8s.io/name=default-k8s-diff-port-653942 minikube.k8s.io/primary=true
	I0416 01:05:50.233119   61267 ops.go:34] apiserver oom_adj: -16
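	The labels applied by the preceding kubectl label command can be verified with the same bundled kubectl once the API server is reachable; a sketch:

		sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get node default-k8s-diff-port-653942 --show-labels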
	I0416 01:05:50.233262   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:50.733748   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:51.234361   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:51.733704   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:52.233367   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:52.733789   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:53.234012   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:53.733458   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:54.233341   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:54.734148   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:55.233710   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:55.734135   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:56.233315   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:56.734162   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:57.233899   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:57.733337   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:58.234101   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:58.734357   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:59.233831   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:59.733286   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:06:00.233847   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:06:00.733872   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:06:01.233935   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:06:01.733629   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:06:02.233967   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:06:02.734163   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:06:03.233294   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:06:03.412834   61267 kubeadm.go:1107] duration metric: took 13.44368469s to wait for elevateKubeSystemPrivileges
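	The repeated "kubectl get sa default" calls above are a poll: the default service account is created asynchronously by the controller manager, and minikube retries roughly every 500ms until it appears so that the minikube-rbac binding created earlier becomes usable. A rough shell equivalent of that wait, as a sketch:

		until sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do sleep 0.5; done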
	W0416 01:06:03.412896   61267 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0416 01:06:03.412907   61267 kubeadm.go:393] duration metric: took 5m17.8108087s to StartCluster
	I0416 01:06:03.412926   61267 settings.go:142] acquiring lock: {Name:mk6e42a297b4f7bfb79727f203ae36d752cbb6a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:06:03.413003   61267 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0416 01:06:03.414974   61267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/kubeconfig: {Name:mkbb3b028de7d57df8335e83f6dfa1b0eacb2fb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:06:03.415299   61267 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.216 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0416 01:06:03.417148   61267 out.go:177] * Verifying Kubernetes components...
	I0416 01:06:03.415390   61267 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0416 01:06:03.415510   61267 config.go:182] Loaded profile config "default-k8s-diff-port-653942": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 01:06:03.417238   61267 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-653942"
	I0416 01:06:03.419134   61267 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-653942"
	W0416 01:06:03.419147   61267 addons.go:243] addon storage-provisioner should already be in state true
	I0416 01:06:03.417247   61267 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-653942"
	I0416 01:06:03.419188   61267 host.go:66] Checking if "default-k8s-diff-port-653942" exists ...
	I0416 01:06:03.419214   61267 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-653942"
	I0416 01:06:03.417245   61267 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-653942"
	I0416 01:06:03.419095   61267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	W0416 01:06:03.419262   61267 addons.go:243] addon metrics-server should already be in state true
	I0416 01:06:03.419307   61267 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-653942"
	I0416 01:06:03.419327   61267 host.go:66] Checking if "default-k8s-diff-port-653942" exists ...
	I0416 01:06:03.419606   61267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:06:03.419644   61267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:06:03.419662   61267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:06:03.419698   61267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:06:03.419722   61267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:06:03.419756   61267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:06:03.435784   61267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44663
	I0416 01:06:03.435800   61267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37505
	I0416 01:06:03.436294   61267 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:06:03.436296   61267 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:06:03.436811   61267 main.go:141] libmachine: Using API Version  1
	I0416 01:06:03.436838   61267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:06:03.437097   61267 main.go:141] libmachine: Using API Version  1
	I0416 01:06:03.437115   61267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:06:03.437203   61267 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:06:03.437683   61267 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:06:03.437757   61267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:06:03.437790   61267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:06:03.438213   61267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33329
	I0416 01:06:03.438248   61267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:06:03.438273   61267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:06:03.438786   61267 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:06:03.439301   61267 main.go:141] libmachine: Using API Version  1
	I0416 01:06:03.439332   61267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:06:03.439810   61267 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:06:03.440162   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetState
	I0416 01:06:03.443879   61267 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-653942"
	W0416 01:06:03.443906   61267 addons.go:243] addon default-storageclass should already be in state true
	I0416 01:06:03.443941   61267 host.go:66] Checking if "default-k8s-diff-port-653942" exists ...
	I0416 01:06:03.444301   61267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:06:03.444340   61267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:06:03.454673   61267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43261
	I0416 01:06:03.455111   61267 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:06:03.455715   61267 main.go:141] libmachine: Using API Version  1
	I0416 01:06:03.455742   61267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:06:03.456116   61267 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:06:03.456318   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetState
	I0416 01:06:03.457870   61267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39341
	I0416 01:06:03.458086   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 01:06:03.458278   61267 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:06:03.462516   61267 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0416 01:06:03.458862   61267 main.go:141] libmachine: Using API Version  1
	I0416 01:06:03.460354   61267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43753
	I0416 01:06:03.464491   61267 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0416 01:06:03.464509   61267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0416 01:06:03.464529   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:06:03.464551   61267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:06:03.464960   61267 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:06:03.465281   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetState
	I0416 01:06:03.465552   61267 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:06:03.466181   61267 main.go:141] libmachine: Using API Version  1
	I0416 01:06:03.466205   61267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:06:03.466760   61267 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:06:03.467410   61267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:06:03.467435   61267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:06:03.467638   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 01:06:03.469647   61267 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 01:06:03.471009   61267 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 01:06:03.471024   61267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0416 01:06:03.469242   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:06:03.471040   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:06:03.469767   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:06:03.471070   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:06:03.471133   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:06:03.471297   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:06:03.471478   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:06:03.471661   61267 sshutil.go:53] new ssh client: &{IP:192.168.50.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/default-k8s-diff-port-653942/id_rsa Username:docker}
	I0416 01:06:03.473778   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:06:03.474203   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:06:03.474226   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:06:03.474421   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:06:03.474605   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:06:03.474784   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:06:03.474958   61267 sshutil.go:53] new ssh client: &{IP:192.168.50.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/default-k8s-diff-port-653942/id_rsa Username:docker}
	I0416 01:06:03.485829   61267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46571
	I0416 01:06:03.486293   61267 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:06:03.486876   61267 main.go:141] libmachine: Using API Version  1
	I0416 01:06:03.486900   61267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:06:03.487362   61267 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:06:03.487535   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetState
	I0416 01:06:03.489207   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 01:06:03.489529   61267 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0416 01:06:03.489549   61267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0416 01:06:03.489568   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:06:03.492570   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:06:03.492932   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:06:03.492958   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:06:03.493224   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:06:03.493379   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:06:03.493557   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:06:03.493673   61267 sshutil.go:53] new ssh client: &{IP:192.168.50.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/default-k8s-diff-port-653942/id_rsa Username:docker}
	I0416 01:06:03.680085   61267 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 01:06:03.724011   61267 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-653942" to be "Ready" ...
	I0416 01:06:03.739131   61267 node_ready.go:49] node "default-k8s-diff-port-653942" has status "Ready":"True"
	I0416 01:06:03.739152   61267 node_ready.go:38] duration metric: took 15.111832ms for node "default-k8s-diff-port-653942" to be "Ready" ...
	I0416 01:06:03.739161   61267 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:06:03.748081   61267 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-5nnpv" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:03.810063   61267 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0416 01:06:03.810090   61267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0416 01:06:03.812595   61267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0416 01:06:03.848165   61267 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0416 01:06:03.848187   61267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0416 01:06:03.991110   61267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 01:06:03.997100   61267 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0416 01:06:03.997133   61267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0416 01:06:04.093267   61267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0416 01:06:04.349978   61267 main.go:141] libmachine: Making call to close driver server
	I0416 01:06:04.350011   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .Close
	I0416 01:06:04.350336   61267 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:06:04.350396   61267 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:06:04.350415   61267 main.go:141] libmachine: Making call to close driver server
	I0416 01:06:04.350420   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | Closing plugin on server side
	I0416 01:06:04.350425   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .Close
	I0416 01:06:04.350683   61267 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:06:04.350699   61267 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:06:04.416648   61267 main.go:141] libmachine: Making call to close driver server
	I0416 01:06:04.416674   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .Close
	I0416 01:06:04.416982   61267 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:06:04.417001   61267 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:06:05.206973   61267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.113663167s)
	I0416 01:06:05.207025   61267 main.go:141] libmachine: Making call to close driver server
	I0416 01:06:05.207040   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .Close
	I0416 01:06:05.207039   61267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.215892308s)
	I0416 01:06:05.207078   61267 main.go:141] libmachine: Making call to close driver server
	I0416 01:06:05.207090   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .Close
	I0416 01:06:05.207371   61267 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:06:05.207388   61267 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:06:05.207397   61267 main.go:141] libmachine: Making call to close driver server
	I0416 01:06:05.207405   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .Close
	I0416 01:06:05.207445   61267 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:06:05.207462   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | Closing plugin on server side
	I0416 01:06:05.207466   61267 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:06:05.207490   61267 main.go:141] libmachine: Making call to close driver server
	I0416 01:06:05.207508   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .Close
	I0416 01:06:05.207610   61267 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:06:05.207644   61267 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:06:05.207654   61267 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-653942"
	I0416 01:06:05.207654   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | Closing plugin on server side
	I0416 01:06:05.209411   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | Closing plugin on server side
	I0416 01:06:05.209402   61267 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:06:05.209469   61267 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:06:05.212071   61267 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0416 01:06:05.213412   61267 addons.go:505] duration metric: took 1.798038731s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
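	The same addons can also be listed and toggled from the host after startup; a sketch using the test binary and the profile name from this run:

		out/minikube-linux-amd64 -p default-k8s-diff-port-653942 addons list
		out/minikube-linux-amd64 -p default-k8s-diff-port-653942 addons enable metrics-server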
	I0416 01:06:05.256497   61267 pod_ready.go:92] pod "coredns-76f75df574-5nnpv" in "kube-system" namespace has status "Ready":"True"
	I0416 01:06:05.256526   61267 pod_ready.go:81] duration metric: took 1.508419977s for pod "coredns-76f75df574-5nnpv" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.256538   61267 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-zpnhs" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.262092   61267 pod_ready.go:92] pod "coredns-76f75df574-zpnhs" in "kube-system" namespace has status "Ready":"True"
	I0416 01:06:05.262112   61267 pod_ready.go:81] duration metric: took 5.566499ms for pod "coredns-76f75df574-zpnhs" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.262121   61267 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.267256   61267 pod_ready.go:92] pod "etcd-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"True"
	I0416 01:06:05.267278   61267 pod_ready.go:81] duration metric: took 5.149782ms for pod "etcd-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.267286   61267 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.272119   61267 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"True"
	I0416 01:06:05.272144   61267 pod_ready.go:81] duration metric: took 4.851008ms for pod "kube-apiserver-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.272155   61267 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.328440   61267 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"True"
	I0416 01:06:05.328470   61267 pod_ready.go:81] duration metric: took 56.30531ms for pod "kube-controller-manager-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.328482   61267 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mg5km" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.729518   61267 pod_ready.go:92] pod "kube-proxy-mg5km" in "kube-system" namespace has status "Ready":"True"
	I0416 01:06:05.729544   61267 pod_ready.go:81] duration metric: took 401.055058ms for pod "kube-proxy-mg5km" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.729553   61267 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:06.127535   61267 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"True"
	I0416 01:06:06.127558   61267 pod_ready.go:81] duration metric: took 397.998988ms for pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:06.127565   61267 pod_ready.go:38] duration metric: took 2.388395448s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:06:06.127577   61267 api_server.go:52] waiting for apiserver process to appear ...
	I0416 01:06:06.127620   61267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:06:06.150179   61267 api_server.go:72] duration metric: took 2.734842767s to wait for apiserver process to appear ...
	I0416 01:06:06.150208   61267 api_server.go:88] waiting for apiserver healthz status ...
	I0416 01:06:06.150226   61267 api_server.go:253] Checking apiserver healthz at https://192.168.50.216:8444/healthz ...
	I0416 01:06:06.154310   61267 api_server.go:279] https://192.168.50.216:8444/healthz returned 200:
	ok
	I0416 01:06:06.155393   61267 api_server.go:141] control plane version: v1.29.3
	I0416 01:06:06.155409   61267 api_server.go:131] duration metric: took 5.194458ms to wait for apiserver health ...
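	The healthz probe logged above can be reproduced directly against the same API server endpoint; a sketch (certificate verification is skipped for brevity, since the server presents the cluster CA):

		curl -k https://192.168.50.216:8444/healthz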
	I0416 01:06:06.155421   61267 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 01:06:06.333873   61267 system_pods.go:59] 9 kube-system pods found
	I0416 01:06:06.333909   61267 system_pods.go:61] "coredns-76f75df574-5nnpv" [3350aca5-639e-44a1-bd84-d1e4b6486143] Running
	I0416 01:06:06.333914   61267 system_pods.go:61] "coredns-76f75df574-zpnhs" [990672b6-bb3a-4f91-8de7-7c2ec224c94a] Running
	I0416 01:06:06.333917   61267 system_pods.go:61] "etcd-default-k8s-diff-port-653942" [e72e89e9-c274-4d4d-b1f9-43bea95cd015] Running
	I0416 01:06:06.333920   61267 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-653942" [c1652126-b4c2-41cf-a574-9784f7800374] Running
	I0416 01:06:06.333923   61267 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-653942" [1f43936c-ba39-44f9-b9b7-2a149f26a880] Running
	I0416 01:06:06.333926   61267 system_pods.go:61] "kube-proxy-mg5km" [74764194-1f31-40b1-90b5-497e248ab7da] Running
	I0416 01:06:06.333929   61267 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-653942" [48058ade-c30d-4dc9-b6c0-32b2ed5fc88a] Running
	I0416 01:06:06.333935   61267 system_pods.go:61] "metrics-server-57f55c9bc5-6jn29" [1eec2ffb-ce59-45cb-b6b4-cd010549510e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 01:06:06.333938   61267 system_pods.go:61] "storage-provisioner" [d131c1fc-9124-4b46-a16f-a8fb5029a57b] Running
	I0416 01:06:06.333947   61267 system_pods.go:74] duration metric: took 178.520515ms to wait for pod list to return data ...
	I0416 01:06:06.333953   61267 default_sa.go:34] waiting for default service account to be created ...
	I0416 01:06:06.528119   61267 default_sa.go:45] found service account: "default"
	I0416 01:06:06.528148   61267 default_sa.go:55] duration metric: took 194.18786ms for default service account to be created ...
	I0416 01:06:06.528158   61267 system_pods.go:116] waiting for k8s-apps to be running ...
	I0416 01:06:06.731573   61267 system_pods.go:86] 9 kube-system pods found
	I0416 01:06:06.731600   61267 system_pods.go:89] "coredns-76f75df574-5nnpv" [3350aca5-639e-44a1-bd84-d1e4b6486143] Running
	I0416 01:06:06.731606   61267 system_pods.go:89] "coredns-76f75df574-zpnhs" [990672b6-bb3a-4f91-8de7-7c2ec224c94a] Running
	I0416 01:06:06.731610   61267 system_pods.go:89] "etcd-default-k8s-diff-port-653942" [e72e89e9-c274-4d4d-b1f9-43bea95cd015] Running
	I0416 01:06:06.731614   61267 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-653942" [c1652126-b4c2-41cf-a574-9784f7800374] Running
	I0416 01:06:06.731619   61267 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-653942" [1f43936c-ba39-44f9-b9b7-2a149f26a880] Running
	I0416 01:06:06.731622   61267 system_pods.go:89] "kube-proxy-mg5km" [74764194-1f31-40b1-90b5-497e248ab7da] Running
	I0416 01:06:06.731626   61267 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-653942" [48058ade-c30d-4dc9-b6c0-32b2ed5fc88a] Running
	I0416 01:06:06.731633   61267 system_pods.go:89] "metrics-server-57f55c9bc5-6jn29" [1eec2ffb-ce59-45cb-b6b4-cd010549510e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 01:06:06.731638   61267 system_pods.go:89] "storage-provisioner" [d131c1fc-9124-4b46-a16f-a8fb5029a57b] Running
	I0416 01:06:06.731649   61267 system_pods.go:126] duration metric: took 203.485273ms to wait for k8s-apps to be running ...
	I0416 01:06:06.731659   61267 system_svc.go:44] waiting for kubelet service to be running ....
	I0416 01:06:06.731700   61267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 01:06:06.749013   61267 system_svc.go:56] duration metric: took 17.343008ms WaitForService to wait for kubelet
	I0416 01:06:06.749048   61267 kubeadm.go:576] duration metric: took 3.333716529s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 01:06:06.749072   61267 node_conditions.go:102] verifying NodePressure condition ...
	I0416 01:06:06.927701   61267 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 01:06:06.927725   61267 node_conditions.go:123] node cpu capacity is 2
	I0416 01:06:06.927735   61267 node_conditions.go:105] duration metric: took 178.65899ms to run NodePressure ...
	I0416 01:06:06.927746   61267 start.go:240] waiting for startup goroutines ...
	I0416 01:06:06.927754   61267 start.go:245] waiting for cluster config update ...
	I0416 01:06:06.927763   61267 start.go:254] writing updated cluster config ...
	I0416 01:06:06.928000   61267 ssh_runner.go:195] Run: rm -f paused
	I0416 01:06:06.978823   61267 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0416 01:06:06.981011   61267 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-653942" cluster and "default" namespace by default
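	As with the earlier profile, the final line reflects an updated kubeconfig on the host; a quick sanity check, as a sketch:

		kubectl config current-context
		kubectl get nodes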
	I0416 01:06:14.261576   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:06:14.261834   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:06:14.261849   62139 kubeadm.go:309] 
	I0416 01:06:14.261890   62139 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0416 01:06:14.261973   62139 kubeadm.go:309] 		timed out waiting for the condition
	I0416 01:06:14.262006   62139 kubeadm.go:309] 
	I0416 01:06:14.262051   62139 kubeadm.go:309] 	This error is likely caused by:
	I0416 01:06:14.262082   62139 kubeadm.go:309] 		- The kubelet is not running
	I0416 01:06:14.262174   62139 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0416 01:06:14.262199   62139 kubeadm.go:309] 
	I0416 01:06:14.262357   62139 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0416 01:06:14.262414   62139 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0416 01:06:14.262471   62139 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0416 01:06:14.262481   62139 kubeadm.go:309] 
	I0416 01:06:14.262610   62139 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0416 01:06:14.262707   62139 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0416 01:06:14.262717   62139 kubeadm.go:309] 
	I0416 01:06:14.262867   62139 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0416 01:06:14.263010   62139 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0416 01:06:14.263142   62139 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0416 01:06:14.263211   62139 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0416 01:06:14.263234   62139 kubeadm.go:309] 
	I0416 01:06:14.264084   62139 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 01:06:14.264204   62139 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0416 01:06:14.264312   62139 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0416 01:06:14.264460   62139 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
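	When this failure reproduces, the checks the kubeadm error suggests can be run directly on the node to find out why the kubelet never answered on port 10248; a sketch of that triage:

		sudo systemctl status kubelet
		sudo journalctl -xeu kubelet --no-pager | tail -n 50
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause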
	
	I0416 01:06:14.264526   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0416 01:06:15.653692   62139 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.389136497s)
	I0416 01:06:15.653831   62139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 01:06:15.669141   62139 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 01:06:15.679485   62139 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 01:06:15.679511   62139 kubeadm.go:156] found existing configuration files:
	
	I0416 01:06:15.679556   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 01:06:15.689898   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 01:06:15.689974   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 01:06:15.700563   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 01:06:15.710363   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 01:06:15.710445   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 01:06:15.719877   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 01:06:15.728947   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 01:06:15.729002   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 01:06:15.739360   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 01:06:15.749479   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 01:06:15.749557   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
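	The four grep-then-rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, otherwise it is removed before kubeadm init is retried. The same pattern, consolidated into one sketch:

		for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
		  sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f || sudo rm -f /etc/kubernetes/$f
		done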
	I0416 01:06:15.760930   62139 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 01:06:16.000974   62139 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 01:08:12.327133   62139 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0416 01:08:12.327246   62139 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0416 01:08:12.328995   62139 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0416 01:08:12.329092   62139 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 01:08:12.329220   62139 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 01:08:12.329302   62139 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 01:08:12.329440   62139 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 01:08:12.329537   62139 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 01:08:12.331381   62139 out.go:204]   - Generating certificates and keys ...
	I0416 01:08:12.331474   62139 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 01:08:12.331558   62139 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 01:08:12.331658   62139 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0416 01:08:12.331742   62139 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0416 01:08:12.331830   62139 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0416 01:08:12.331910   62139 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0416 01:08:12.331968   62139 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0416 01:08:12.332020   62139 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0416 01:08:12.332085   62139 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0416 01:08:12.332159   62139 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0416 01:08:12.332210   62139 kubeadm.go:309] [certs] Using the existing "sa" key
	I0416 01:08:12.332297   62139 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 01:08:12.332376   62139 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 01:08:12.332466   62139 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 01:08:12.332547   62139 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 01:08:12.332642   62139 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 01:08:12.332790   62139 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 01:08:12.332895   62139 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 01:08:12.332938   62139 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 01:08:12.333002   62139 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 01:08:12.334632   62139 out.go:204]   - Booting up control plane ...
	I0416 01:08:12.334737   62139 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 01:08:12.334837   62139 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 01:08:12.334928   62139 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 01:08:12.335009   62139 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 01:08:12.335162   62139 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 01:08:12.335241   62139 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0416 01:08:12.335333   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:08:12.335541   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:08:12.335613   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:08:12.335771   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:08:12.335848   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:08:12.336035   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:08:12.336109   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:08:12.336365   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:08:12.336438   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:08:12.336704   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:08:12.336716   62139 kubeadm.go:309] 
	I0416 01:08:12.336779   62139 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0416 01:08:12.336827   62139 kubeadm.go:309] 		timed out waiting for the condition
	I0416 01:08:12.336834   62139 kubeadm.go:309] 
	I0416 01:08:12.336883   62139 kubeadm.go:309] 	This error is likely caused by:
	I0416 01:08:12.336922   62139 kubeadm.go:309] 		- The kubelet is not running
	I0416 01:08:12.337025   62139 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0416 01:08:12.337036   62139 kubeadm.go:309] 
	I0416 01:08:12.337145   62139 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0416 01:08:12.337211   62139 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0416 01:08:12.337245   62139 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0416 01:08:12.337253   62139 kubeadm.go:309] 
	I0416 01:08:12.337340   62139 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0416 01:08:12.337428   62139 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0416 01:08:12.337436   62139 kubeadm.go:309] 
	I0416 01:08:12.337529   62139 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0416 01:08:12.337602   62139 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0416 01:08:12.337701   62139 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0416 01:08:12.337870   62139 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0416 01:08:12.337957   62139 kubeadm.go:393] duration metric: took 8m4.174818047s to StartCluster
	I0416 01:08:12.337969   62139 kubeadm.go:309] 
	I0416 01:08:12.338009   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:08:12.338067   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:08:12.391937   62139 cri.go:89] found id: ""
	I0416 01:08:12.391963   62139 logs.go:276] 0 containers: []
	W0416 01:08:12.391986   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:08:12.391994   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:08:12.392072   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:08:12.430575   62139 cri.go:89] found id: ""
	I0416 01:08:12.430602   62139 logs.go:276] 0 containers: []
	W0416 01:08:12.430616   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:08:12.430623   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:08:12.430685   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:08:12.469115   62139 cri.go:89] found id: ""
	I0416 01:08:12.469143   62139 logs.go:276] 0 containers: []
	W0416 01:08:12.469152   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:08:12.469173   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:08:12.469228   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:08:12.508599   62139 cri.go:89] found id: ""
	I0416 01:08:12.508630   62139 logs.go:276] 0 containers: []
	W0416 01:08:12.508640   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:08:12.508648   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:08:12.508698   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:08:12.547785   62139 cri.go:89] found id: ""
	I0416 01:08:12.547817   62139 logs.go:276] 0 containers: []
	W0416 01:08:12.547829   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:08:12.547836   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:08:12.547910   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:08:12.599526   62139 cri.go:89] found id: ""
	I0416 01:08:12.599549   62139 logs.go:276] 0 containers: []
	W0416 01:08:12.599557   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:08:12.599563   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:08:12.599612   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:08:12.639914   62139 cri.go:89] found id: ""
	I0416 01:08:12.639944   62139 logs.go:276] 0 containers: []
	W0416 01:08:12.639954   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:08:12.639962   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:08:12.640041   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:08:12.676025   62139 cri.go:89] found id: ""
	I0416 01:08:12.676057   62139 logs.go:276] 0 containers: []
	W0416 01:08:12.676066   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:08:12.676079   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:08:12.676100   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:08:12.774744   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:08:12.774769   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:08:12.774785   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:08:12.902751   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:08:12.902787   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:08:12.947370   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:08:12.947406   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:08:13.002186   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:08:13.002223   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0416 01:08:13.017193   62139 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0416 01:08:13.017234   62139 out.go:239] * 
	W0416 01:08:13.017283   62139 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0416 01:08:13.017304   62139 out.go:239] * 
	W0416 01:08:13.018151   62139 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0416 01:08:13.021371   62139 out.go:177] 
	W0416 01:08:13.022572   62139 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0416 01:08:13.022640   62139 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0416 01:08:13.022670   62139 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0416 01:08:13.024248   62139 out.go:177] 
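	The suggestion logged above amounts to re-running the start with the kubelet cgroup driver pinned to systemd and then re-checking the kubelet and any crashed control-plane containers. A minimal sketch of that retry, assuming the kvm2/cri-o driver pair used by this job and a placeholder profile name (neither is shown in this excerpt):
	
	  # hypothetical re-run per the minikube suggestion; <profile> is a placeholder, not the real cluster name
	  minikube start -p <profile> --driver=kvm2 --container-runtime=crio \
	    --kubernetes-version=v1.20.0 \
	    --extra-config=kubelet.cgroup-driver=systemd
	
	  # then inspect the kubelet and list Kubernetes containers, as the kubeadm output recommends
	  journalctl -xeu kubelet
	  sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause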
	
	
	==> CRI-O <==
	Apr 16 01:14:46 embed-certs-617092 crio[729]: time="2024-04-16 01:14:46.743639368Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713230086743614821,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a880f715-eb94-48f2-bbc7-b68eb40df261 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:14:46 embed-certs-617092 crio[729]: time="2024-04-16 01:14:46.744195425Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d0e7c5af-a0bc-4e2a-a574-19dab69a7a32 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:14:46 embed-certs-617092 crio[729]: time="2024-04-16 01:14:46.744271795Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d0e7c5af-a0bc-4e2a-a574-19dab69a7a32 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:14:46 embed-certs-617092 crio[729]: time="2024-04-16 01:14:46.744463056Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e4572e9cdc29a024c0d665ea3122bbf4ee193ce62bec3d6fca7a84a3da8eea5d,PodSandboxId:ca0de572a57af289b5353e09c08fe81ebce442abf756b5868f360761282894ff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713229543272787433,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a62c0f7-0b15-48f3-9c17-d5966d39fbd5,},Annotations:map[string]string{io.kubernetes.container.hash: f0fa9704,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f74a3cc406377588df21cbb51972b33a94ceab9b1b709bf57ff2d56c7d603bc0,PodSandboxId:50a34e4f6bf31ec69fcc63e1c7df992f38dce685025056abd39be373db88db27,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713229543173787577,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p4rh9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42041028-d085-4ec4-8213-da3af0d5290e,},Annotations:map[string]string{io.kubernetes.container.hash: 895cdccf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaab3d6a27de7dc9412e8a6f0abc5e141962215eedfcf7197ee943e73841af99,PodSandboxId:e018f904305989f1ff1d93597179b4da15f71f64549bbfeba8a039ff003c7256,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713229542630118009,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-2q58l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9b9d000-738b-4110-8757-17f76197285c,},Annotations:map[string]string{io.kubernetes.container.hash: 53bd1eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff8ea56f2c871eedf837c4dfbcf307e3cfefa4bab9fffb2613a7f0bad633a078,PodSandboxId:dba64b21492824747924f5f579e60784f329ad8168e49a1cc96794b5714a926f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713229542484974548,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-h8k4k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b114848-1137-4215-a966-03db39e4de2
3,},Annotations:map[string]string{io.kubernetes.container.hash: 497ba5dc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f819e84c3f08a4c2272bb2247c65424f82f07babbba6f7fd68910737a5953cf,PodSandboxId:9843f30af44e8e3a5fa255d77f9a935203e21b0222351e47e9d585dc2502b2b8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713229522703278815,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-617092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3908ca4b3abece521c999b86b56464ea,},Annotations:map[string]string{io.kubernetes.container.hash: 696a4a67,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6569527c88ca91f8d9ea6edee36c9ea96e78bc7d69793f16c09d414891161c2d,PodSandboxId:545a1d31fa95190f89c49c1ae83662055277e377003dad0deee6494cb4f5197b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713229522682071161,Labels:map[string]string{io.kubernetes.con
tainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-617092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5fd98642218f1bf3a202b613a4a213c,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51303ba689e3975c0dd24bfae353857af5481682a7b169f7619c6551935453c9,PodSandboxId:33f94eee1c600853c87144dfa6c4bc116904080ef784c545280202335d9b5391,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713229522606096489,Labels:map[string]string{io.kubernet
es.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-617092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01546522de12db89593554cc2fff4a64,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4e66ec3e722b10292730106f83fdc1422f913525d80890e068ee2fcb28cb206,PodSandboxId:4fe0b00952f395f31444782dd4149d96c803fb591738f75e76802e150bf0acd4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713229522577254165,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-617092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 856b93b30da13b56956630be0aa3ea75,},Annotations:map[string]string{io.kubernetes.container.hash: e92b43ac,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b2cd3bd95b7310f36c549766f75579e6906e2f74ed2283dadddd6e622ac6952,PodSandboxId:2f0e6d5deddfe9330efaaeb1ffd8fcdd0691c1c1cc1ff3bc7f4501fb6edc7fd9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1713229229301969967,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-617092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 856b93b30da13b56956630be0aa3ea75,},Annotations:map[string]string{io.kubernetes.container.hash: e92b43ac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d0e7c5af-a0bc-4e2a-a574-19dab69a7a32 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:14:46 embed-certs-617092 crio[729]: time="2024-04-16 01:14:46.785211728Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8195070d-2524-43fa-adf2-60c0e8121e24 name=/runtime.v1.RuntimeService/Version
	Apr 16 01:14:46 embed-certs-617092 crio[729]: time="2024-04-16 01:14:46.785471200Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8195070d-2524-43fa-adf2-60c0e8121e24 name=/runtime.v1.RuntimeService/Version
	Apr 16 01:14:46 embed-certs-617092 crio[729]: time="2024-04-16 01:14:46.786662719Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9fe1f3c4-3f04-4c83-bd74-7a5748a4a4e9 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:14:46 embed-certs-617092 crio[729]: time="2024-04-16 01:14:46.787199962Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713230086787118842,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9fe1f3c4-3f04-4c83-bd74-7a5748a4a4e9 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:14:46 embed-certs-617092 crio[729]: time="2024-04-16 01:14:46.787829545Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4567394e-7874-4545-8b31-6541d37b6889 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:14:46 embed-certs-617092 crio[729]: time="2024-04-16 01:14:46.787899387Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4567394e-7874-4545-8b31-6541d37b6889 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:14:46 embed-certs-617092 crio[729]: time="2024-04-16 01:14:46.788084669Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e4572e9cdc29a024c0d665ea3122bbf4ee193ce62bec3d6fca7a84a3da8eea5d,PodSandboxId:ca0de572a57af289b5353e09c08fe81ebce442abf756b5868f360761282894ff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713229543272787433,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a62c0f7-0b15-48f3-9c17-d5966d39fbd5,},Annotations:map[string]string{io.kubernetes.container.hash: f0fa9704,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f74a3cc406377588df21cbb51972b33a94ceab9b1b709bf57ff2d56c7d603bc0,PodSandboxId:50a34e4f6bf31ec69fcc63e1c7df992f38dce685025056abd39be373db88db27,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713229543173787577,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p4rh9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42041028-d085-4ec4-8213-da3af0d5290e,},Annotations:map[string]string{io.kubernetes.container.hash: 895cdccf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaab3d6a27de7dc9412e8a6f0abc5e141962215eedfcf7197ee943e73841af99,PodSandboxId:e018f904305989f1ff1d93597179b4da15f71f64549bbfeba8a039ff003c7256,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713229542630118009,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-2q58l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9b9d000-738b-4110-8757-17f76197285c,},Annotations:map[string]string{io.kubernetes.container.hash: 53bd1eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff8ea56f2c871eedf837c4dfbcf307e3cfefa4bab9fffb2613a7f0bad633a078,PodSandboxId:dba64b21492824747924f5f579e60784f329ad8168e49a1cc96794b5714a926f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713229542484974548,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-h8k4k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b114848-1137-4215-a966-03db39e4de2
3,},Annotations:map[string]string{io.kubernetes.container.hash: 497ba5dc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f819e84c3f08a4c2272bb2247c65424f82f07babbba6f7fd68910737a5953cf,PodSandboxId:9843f30af44e8e3a5fa255d77f9a935203e21b0222351e47e9d585dc2502b2b8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713229522703278815,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-617092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3908ca4b3abece521c999b86b56464ea,},Annotations:map[string]string{io.kubernetes.container.hash: 696a4a67,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6569527c88ca91f8d9ea6edee36c9ea96e78bc7d69793f16c09d414891161c2d,PodSandboxId:545a1d31fa95190f89c49c1ae83662055277e377003dad0deee6494cb4f5197b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713229522682071161,Labels:map[string]string{io.kubernetes.con
tainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-617092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5fd98642218f1bf3a202b613a4a213c,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51303ba689e3975c0dd24bfae353857af5481682a7b169f7619c6551935453c9,PodSandboxId:33f94eee1c600853c87144dfa6c4bc116904080ef784c545280202335d9b5391,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713229522606096489,Labels:map[string]string{io.kubernet
es.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-617092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01546522de12db89593554cc2fff4a64,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4e66ec3e722b10292730106f83fdc1422f913525d80890e068ee2fcb28cb206,PodSandboxId:4fe0b00952f395f31444782dd4149d96c803fb591738f75e76802e150bf0acd4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713229522577254165,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-617092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 856b93b30da13b56956630be0aa3ea75,},Annotations:map[string]string{io.kubernetes.container.hash: e92b43ac,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b2cd3bd95b7310f36c549766f75579e6906e2f74ed2283dadddd6e622ac6952,PodSandboxId:2f0e6d5deddfe9330efaaeb1ffd8fcdd0691c1c1cc1ff3bc7f4501fb6edc7fd9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1713229229301969967,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-617092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 856b93b30da13b56956630be0aa3ea75,},Annotations:map[string]string{io.kubernetes.container.hash: e92b43ac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4567394e-7874-4545-8b31-6541d37b6889 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:14:46 embed-certs-617092 crio[729]: time="2024-04-16 01:14:46.831281508Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=87514aac-5cc9-4e48-9e5c-78f027413b8b name=/runtime.v1.RuntimeService/Version
	Apr 16 01:14:46 embed-certs-617092 crio[729]: time="2024-04-16 01:14:46.831387329Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=87514aac-5cc9-4e48-9e5c-78f027413b8b name=/runtime.v1.RuntimeService/Version
	Apr 16 01:14:46 embed-certs-617092 crio[729]: time="2024-04-16 01:14:46.832717193Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8edd23b4-8346-429e-8e63-7795524c85f3 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:14:46 embed-certs-617092 crio[729]: time="2024-04-16 01:14:46.833093253Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713230086833073294,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8edd23b4-8346-429e-8e63-7795524c85f3 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:14:46 embed-certs-617092 crio[729]: time="2024-04-16 01:14:46.833655705Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5443ce84-ca94-4417-9ee2-df943923e1e5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:14:46 embed-certs-617092 crio[729]: time="2024-04-16 01:14:46.833730826Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5443ce84-ca94-4417-9ee2-df943923e1e5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:14:46 embed-certs-617092 crio[729]: time="2024-04-16 01:14:46.833912631Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e4572e9cdc29a024c0d665ea3122bbf4ee193ce62bec3d6fca7a84a3da8eea5d,PodSandboxId:ca0de572a57af289b5353e09c08fe81ebce442abf756b5868f360761282894ff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713229543272787433,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a62c0f7-0b15-48f3-9c17-d5966d39fbd5,},Annotations:map[string]string{io.kubernetes.container.hash: f0fa9704,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f74a3cc406377588df21cbb51972b33a94ceab9b1b709bf57ff2d56c7d603bc0,PodSandboxId:50a34e4f6bf31ec69fcc63e1c7df992f38dce685025056abd39be373db88db27,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713229543173787577,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p4rh9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42041028-d085-4ec4-8213-da3af0d5290e,},Annotations:map[string]string{io.kubernetes.container.hash: 895cdccf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaab3d6a27de7dc9412e8a6f0abc5e141962215eedfcf7197ee943e73841af99,PodSandboxId:e018f904305989f1ff1d93597179b4da15f71f64549bbfeba8a039ff003c7256,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713229542630118009,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-2q58l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9b9d000-738b-4110-8757-17f76197285c,},Annotations:map[string]string{io.kubernetes.container.hash: 53bd1eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff8ea56f2c871eedf837c4dfbcf307e3cfefa4bab9fffb2613a7f0bad633a078,PodSandboxId:dba64b21492824747924f5f579e60784f329ad8168e49a1cc96794b5714a926f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713229542484974548,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-h8k4k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b114848-1137-4215-a966-03db39e4de2
3,},Annotations:map[string]string{io.kubernetes.container.hash: 497ba5dc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f819e84c3f08a4c2272bb2247c65424f82f07babbba6f7fd68910737a5953cf,PodSandboxId:9843f30af44e8e3a5fa255d77f9a935203e21b0222351e47e9d585dc2502b2b8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713229522703278815,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-617092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3908ca4b3abece521c999b86b56464ea,},Annotations:map[string]string{io.kubernetes.container.hash: 696a4a67,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6569527c88ca91f8d9ea6edee36c9ea96e78bc7d69793f16c09d414891161c2d,PodSandboxId:545a1d31fa95190f89c49c1ae83662055277e377003dad0deee6494cb4f5197b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713229522682071161,Labels:map[string]string{io.kubernetes.con
tainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-617092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5fd98642218f1bf3a202b613a4a213c,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51303ba689e3975c0dd24bfae353857af5481682a7b169f7619c6551935453c9,PodSandboxId:33f94eee1c600853c87144dfa6c4bc116904080ef784c545280202335d9b5391,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713229522606096489,Labels:map[string]string{io.kubernet
es.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-617092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01546522de12db89593554cc2fff4a64,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4e66ec3e722b10292730106f83fdc1422f913525d80890e068ee2fcb28cb206,PodSandboxId:4fe0b00952f395f31444782dd4149d96c803fb591738f75e76802e150bf0acd4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713229522577254165,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-617092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 856b93b30da13b56956630be0aa3ea75,},Annotations:map[string]string{io.kubernetes.container.hash: e92b43ac,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b2cd3bd95b7310f36c549766f75579e6906e2f74ed2283dadddd6e622ac6952,PodSandboxId:2f0e6d5deddfe9330efaaeb1ffd8fcdd0691c1c1cc1ff3bc7f4501fb6edc7fd9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1713229229301969967,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-617092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 856b93b30da13b56956630be0aa3ea75,},Annotations:map[string]string{io.kubernetes.container.hash: e92b43ac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5443ce84-ca94-4417-9ee2-df943923e1e5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:14:46 embed-certs-617092 crio[729]: time="2024-04-16 01:14:46.871777448Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cfba4e89-19c5-4b4f-8128-510959fa0f84 name=/runtime.v1.RuntimeService/Version
	Apr 16 01:14:46 embed-certs-617092 crio[729]: time="2024-04-16 01:14:46.871871309Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cfba4e89-19c5-4b4f-8128-510959fa0f84 name=/runtime.v1.RuntimeService/Version
	Apr 16 01:14:46 embed-certs-617092 crio[729]: time="2024-04-16 01:14:46.873217207Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ac56d4f1-3f55-4115-93b2-be3d3e04723c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:14:46 embed-certs-617092 crio[729]: time="2024-04-16 01:14:46.873612162Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713230086873589913,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ac56d4f1-3f55-4115-93b2-be3d3e04723c name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:14:46 embed-certs-617092 crio[729]: time="2024-04-16 01:14:46.874422771Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7fce2622-6135-4f8f-902e-ddc81e00ea09 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:14:46 embed-certs-617092 crio[729]: time="2024-04-16 01:14:46.874497045Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7fce2622-6135-4f8f-902e-ddc81e00ea09 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:14:46 embed-certs-617092 crio[729]: time="2024-04-16 01:14:46.874694122Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e4572e9cdc29a024c0d665ea3122bbf4ee193ce62bec3d6fca7a84a3da8eea5d,PodSandboxId:ca0de572a57af289b5353e09c08fe81ebce442abf756b5868f360761282894ff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713229543272787433,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a62c0f7-0b15-48f3-9c17-d5966d39fbd5,},Annotations:map[string]string{io.kubernetes.container.hash: f0fa9704,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f74a3cc406377588df21cbb51972b33a94ceab9b1b709bf57ff2d56c7d603bc0,PodSandboxId:50a34e4f6bf31ec69fcc63e1c7df992f38dce685025056abd39be373db88db27,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713229543173787577,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p4rh9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42041028-d085-4ec4-8213-da3af0d5290e,},Annotations:map[string]string{io.kubernetes.container.hash: 895cdccf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaab3d6a27de7dc9412e8a6f0abc5e141962215eedfcf7197ee943e73841af99,PodSandboxId:e018f904305989f1ff1d93597179b4da15f71f64549bbfeba8a039ff003c7256,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713229542630118009,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-2q58l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9b9d000-738b-4110-8757-17f76197285c,},Annotations:map[string]string{io.kubernetes.container.hash: 53bd1eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff8ea56f2c871eedf837c4dfbcf307e3cfefa4bab9fffb2613a7f0bad633a078,PodSandboxId:dba64b21492824747924f5f579e60784f329ad8168e49a1cc96794b5714a926f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713229542484974548,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-h8k4k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b114848-1137-4215-a966-03db39e4de2
3,},Annotations:map[string]string{io.kubernetes.container.hash: 497ba5dc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f819e84c3f08a4c2272bb2247c65424f82f07babbba6f7fd68910737a5953cf,PodSandboxId:9843f30af44e8e3a5fa255d77f9a935203e21b0222351e47e9d585dc2502b2b8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713229522703278815,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-617092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3908ca4b3abece521c999b86b56464ea,},Annotations:map[string]string{io.kubernetes.container.hash: 696a4a67,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6569527c88ca91f8d9ea6edee36c9ea96e78bc7d69793f16c09d414891161c2d,PodSandboxId:545a1d31fa95190f89c49c1ae83662055277e377003dad0deee6494cb4f5197b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713229522682071161,Labels:map[string]string{io.kubernetes.con
tainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-617092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5fd98642218f1bf3a202b613a4a213c,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51303ba689e3975c0dd24bfae353857af5481682a7b169f7619c6551935453c9,PodSandboxId:33f94eee1c600853c87144dfa6c4bc116904080ef784c545280202335d9b5391,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713229522606096489,Labels:map[string]string{io.kubernet
es.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-617092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01546522de12db89593554cc2fff4a64,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4e66ec3e722b10292730106f83fdc1422f913525d80890e068ee2fcb28cb206,PodSandboxId:4fe0b00952f395f31444782dd4149d96c803fb591738f75e76802e150bf0acd4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713229522577254165,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-617092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 856b93b30da13b56956630be0aa3ea75,},Annotations:map[string]string{io.kubernetes.container.hash: e92b43ac,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b2cd3bd95b7310f36c549766f75579e6906e2f74ed2283dadddd6e622ac6952,PodSandboxId:2f0e6d5deddfe9330efaaeb1ffd8fcdd0691c1c1cc1ff3bc7f4501fb6edc7fd9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1713229229301969967,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-617092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 856b93b30da13b56956630be0aa3ea75,},Annotations:map[string]string{io.kubernetes.container.hash: e92b43ac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7fce2622-6135-4f8f-902e-ddc81e00ea09 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e4572e9cdc29a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   ca0de572a57af       storage-provisioner
	f74a3cc406377       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392   9 minutes ago       Running             kube-proxy                0                   50a34e4f6bf31       kube-proxy-p4rh9
	aaab3d6a27de7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   e018f90430598       coredns-76f75df574-2q58l
	ff8ea56f2c871       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   dba64b2149282       coredns-76f75df574-h8k4k
	1f819e84c3f08       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   9843f30af44e8       etcd-embed-certs-617092
	6569527c88ca9       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3   9 minutes ago       Running             kube-controller-manager   2                   545a1d31fa951       kube-controller-manager-embed-certs-617092
	51303ba689e39       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b   9 minutes ago       Running             kube-scheduler            2                   33f94eee1c600       kube-scheduler-embed-certs-617092
	e4e66ec3e722b       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   9 minutes ago       Running             kube-apiserver            2                   4fe0b00952f39       kube-apiserver-embed-certs-617092
	2b2cd3bd95b73       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   14 minutes ago      Exited              kube-apiserver            1                   2f0e6d5deddfe       kube-apiserver-embed-certs-617092
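	The table above is the crictl view of the node at collection time. As a minimal sketch for reproducing it on this profile (assuming crictl is available on the node image, as it is on the crio-based minikube ISO), the same listing and an individual container's logs can be pulled over minikube ssh:

	    out/minikube-linux-amd64 -p embed-certs-617092 ssh "sudo crictl ps -a"
	    out/minikube-linux-amd64 -p embed-certs-617092 ssh "sudo crictl logs aaab3d6a27de7"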
	
	
	==> coredns [aaab3d6a27de7dc9412e8a6f0abc5e141962215eedfcf7197ee943e73841af99] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [ff8ea56f2c871eedf837c4dfbcf307e3cfefa4bab9fffb2613a7f0bad633a078] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-617092
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-617092
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e98a202b00bb48daab8bbee2f0c7bcf1556f8388
	                    minikube.k8s.io/name=embed-certs-617092
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_16T01_05_29_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 01:05:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-617092
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 01:14:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 01:10:55 +0000   Tue, 16 Apr 2024 01:05:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 01:10:55 +0000   Tue, 16 Apr 2024 01:05:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 01:10:55 +0000   Tue, 16 Apr 2024 01:05:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 01:10:55 +0000   Tue, 16 Apr 2024 01:05:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.225
	  Hostname:    embed-certs-617092
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef1adc5d828049d78e732d58bef9fedf
	  System UUID:                ef1adc5d-8280-49d7-8e73-2d58bef9fedf
	  Boot ID:                    98b33474-2495-4ce9-aa86-3f70705f2557
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-2q58l                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m6s
	  kube-system                 coredns-76f75df574-h8k4k                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m6s
	  kube-system                 etcd-embed-certs-617092                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m18s
	  kube-system                 kube-apiserver-embed-certs-617092             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-controller-manager-embed-certs-617092    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-proxy-p4rh9                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                 kube-scheduler-embed-certs-617092             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 metrics-server-57f55c9bc5-j5clp               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m5s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m3s                   kube-proxy       
	  Normal  Starting                 9m26s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m25s (x8 over 9m25s)  kubelet          Node embed-certs-617092 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m25s (x8 over 9m25s)  kubelet          Node embed-certs-617092 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m25s (x7 over 9m25s)  kubelet          Node embed-certs-617092 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m18s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m18s                  kubelet          Node embed-certs-617092 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m18s                  kubelet          Node embed-certs-617092 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m18s                  kubelet          Node embed-certs-617092 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m7s                   node-controller  Node embed-certs-617092 event: Registered Node embed-certs-617092 in Controller
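	The section above is ordinary kubectl describe node output captured by the test harness. Under the kubeconfig context minikube creates for the profile (context name assumed to match the profile name, which is minikube's default), it can be regenerated with:

	    kubectl --context embed-certs-617092 describe node embed-certs-617092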
	
	
	==> dmesg <==
	[  +0.052884] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041433] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.066493] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.930995] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.677486] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.118314] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.058739] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.077448] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +0.200962] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +0.172170] systemd-fstab-generator[683]: Ignoring "noauto" option for root device
	[  +0.348318] systemd-fstab-generator[713]: Ignoring "noauto" option for root device
	[  +4.822279] systemd-fstab-generator[812]: Ignoring "noauto" option for root device
	[  +0.067330] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.189500] systemd-fstab-generator[936]: Ignoring "noauto" option for root device
	[  +5.638020] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.687182] kauditd_printk_skb: 79 callbacks suppressed
	[Apr16 01:05] kauditd_printk_skb: 3 callbacks suppressed
	[  +2.258288] systemd-fstab-generator[3577]: Ignoring "noauto" option for root device
	[  +4.541284] kauditd_printk_skb: 58 callbacks suppressed
	[  +2.747678] systemd-fstab-generator[3895]: Ignoring "noauto" option for root device
	[ +12.513471] systemd-fstab-generator[4099]: Ignoring "noauto" option for root device
	[  +0.115491] kauditd_printk_skb: 14 callbacks suppressed
	[Apr16 01:06] kauditd_printk_skb: 82 callbacks suppressed
	
	
	==> etcd [1f819e84c3f08a4c2272bb2247c65424f82f07babbba6f7fd68910737a5953cf] <==
	{"level":"info","ts":"2024-04-16T01:05:23.223654Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b1b17dfe1a8c5a3 switched to configuration voters=(7717788636760556963)"}
	{"level":"info","ts":"2024-04-16T01:05:23.228432Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"76f03a987549979","local-member-id":"6b1b17dfe1a8c5a3","added-peer-id":"6b1b17dfe1a8c5a3","added-peer-peer-urls":["https://192.168.61.225:2380"]}
	{"level":"info","ts":"2024-04-16T01:05:23.255835Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.225:2380"}
	{"level":"info","ts":"2024-04-16T01:05:23.255885Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.225:2380"}
	{"level":"info","ts":"2024-04-16T01:05:23.255905Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-16T01:05:23.256082Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"6b1b17dfe1a8c5a3","initial-advertise-peer-urls":["https://192.168.61.225:2380"],"listen-peer-urls":["https://192.168.61.225:2380"],"advertise-client-urls":["https://192.168.61.225:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.225:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-16T01:05:23.262384Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-16T01:05:23.870221Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b1b17dfe1a8c5a3 is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-16T01:05:23.870283Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b1b17dfe1a8c5a3 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-16T01:05:23.870308Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b1b17dfe1a8c5a3 received MsgPreVoteResp from 6b1b17dfe1a8c5a3 at term 1"}
	{"level":"info","ts":"2024-04-16T01:05:23.870322Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b1b17dfe1a8c5a3 became candidate at term 2"}
	{"level":"info","ts":"2024-04-16T01:05:23.870328Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b1b17dfe1a8c5a3 received MsgVoteResp from 6b1b17dfe1a8c5a3 at term 2"}
	{"level":"info","ts":"2024-04-16T01:05:23.870336Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b1b17dfe1a8c5a3 became leader at term 2"}
	{"level":"info","ts":"2024-04-16T01:05:23.870343Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6b1b17dfe1a8c5a3 elected leader 6b1b17dfe1a8c5a3 at term 2"}
	{"level":"info","ts":"2024-04-16T01:05:23.874436Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"6b1b17dfe1a8c5a3","local-member-attributes":"{Name:embed-certs-617092 ClientURLs:[https://192.168.61.225:2379]}","request-path":"/0/members/6b1b17dfe1a8c5a3/attributes","cluster-id":"76f03a987549979","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-16T01:05:23.874655Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-16T01:05:23.874738Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-16T01:05:23.875225Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T01:05:23.879786Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-16T01:05:23.889239Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-16T01:05:23.889312Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-16T01:05:23.891589Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.225:2379"}
	{"level":"info","ts":"2024-04-16T01:05:23.891727Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"76f03a987549979","local-member-id":"6b1b17dfe1a8c5a3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T01:05:23.891811Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T01:05:23.891856Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 01:14:47 up 14 min,  0 users,  load average: 0.20, 0.25, 0.18
	Linux embed-certs-617092 5.10.207 #1 SMP Mon Apr 15 15:01:07 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2b2cd3bd95b7310f36c549766f75579e6906e2f74ed2283dadddd6e622ac6952] <==
	W0416 01:05:15.937312       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:15.999583       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:16.068665       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:16.122978       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:16.250774       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:16.269848       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:16.330785       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:16.349783       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:16.389888       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:16.647491       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:16.658796       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:16.688720       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:16.699862       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:16.753420       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:16.796622       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:16.837439       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:16.919225       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:16.941405       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:17.069002       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:17.120369       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:17.319054       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:17.398728       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:17.477648       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:17.545024       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:17.599880       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [e4e66ec3e722b10292730106f83fdc1422f913525d80890e068ee2fcb28cb206] <==
	I0416 01:08:43.632308       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 01:10:25.367554       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 01:10:25.367723       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0416 01:10:26.368778       1 handler_proxy.go:93] no RequestInfo found in the context
	W0416 01:10:26.368870       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 01:10:26.369033       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0416 01:10:26.369046       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0416 01:10:26.369082       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0416 01:10:26.371214       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 01:11:26.369869       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 01:11:26.369918       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0416 01:11:26.369931       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 01:11:26.372241       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 01:11:26.372353       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0416 01:11:26.372394       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 01:13:26.370808       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 01:13:26.371328       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0416 01:13:26.371377       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 01:13:26.372534       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 01:13:26.372625       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0416 01:13:26.372652       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
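	The repeated 503s in this apiserver log are the aggregation layer failing to reach the v1beta1.metrics.k8s.io APIService (backed by the metrics-server pod), after which the OpenAPI controller requeues the item on its rate-limited schedule. A quick way to confirm the APIService and its backing pod state, assuming the profile's kubeconfig context:

	    kubectl --context embed-certs-617092 get apiservice v1beta1.metrics.k8s.io
	    kubectl --context embed-certs-617092 -n kube-system get pod metrics-server-57f55c9bc5-j5clp -o wide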
	
	
	==> kube-controller-manager [6569527c88ca91f8d9ea6edee36c9ea96e78bc7d69793f16c09d414891161c2d] <==
	I0416 01:09:11.036062       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 01:09:40.596522       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:09:41.046358       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 01:10:10.602286       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:10:11.055581       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 01:10:40.608501       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:10:41.066000       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 01:11:10.614902       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:11:11.076087       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0416 01:11:33.249798       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="241.512µs"
	E0416 01:11:40.627067       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:11:41.084253       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0416 01:11:45.248114       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="134.073µs"
	E0416 01:12:10.632331       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:12:11.091983       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 01:12:40.638671       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:12:41.100575       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 01:13:10.644861       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:13:11.110484       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 01:13:40.651415       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:13:41.119061       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 01:14:10.659280       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:14:11.127096       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 01:14:40.664596       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:14:41.135706       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
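	These controller-manager lines are the resource-quota and garbage-collector controllers re-hitting the same stale metrics.k8s.io/v1beta1 discovery roughly every 30 seconds; they will recur for as long as the APIService above stays unavailable. The same discovery failure typically also surfaces as an error line when listing API resources, e.g. (sketch, same context assumption as above):

	    kubectl --context embed-certs-617092 api-resources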
	
	
	==> kube-proxy [f74a3cc406377588df21cbb51972b33a94ceab9b1b709bf57ff2d56c7d603bc0] <==
	I0416 01:05:43.609787       1 server_others.go:72] "Using iptables proxy"
	I0416 01:05:43.645836       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.61.225"]
	I0416 01:05:43.748312       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0416 01:05:43.748463       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0416 01:05:43.748524       1 server_others.go:168] "Using iptables Proxier"
	I0416 01:05:43.752709       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0416 01:05:43.753599       1 server.go:865] "Version info" version="v1.29.3"
	I0416 01:05:43.753636       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 01:05:43.755490       1 config.go:188] "Starting service config controller"
	I0416 01:05:43.755641       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0416 01:05:43.766371       1 shared_informer.go:318] Caches are synced for service config
	I0416 01:05:43.756939       1 config.go:97] "Starting endpoint slice config controller"
	I0416 01:05:43.766616       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0416 01:05:43.766756       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0416 01:05:43.762073       1 config.go:315] "Starting node config controller"
	I0416 01:05:43.768491       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0416 01:05:43.768530       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [51303ba689e3975c0dd24bfae353857af5481682a7b169f7619c6551935453c9] <==
	E0416 01:05:25.422545       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0416 01:05:25.422563       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0416 01:05:25.422553       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0416 01:05:25.422576       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0416 01:05:26.236915       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0416 01:05:26.237010       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0416 01:05:26.311996       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0416 01:05:26.312069       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0416 01:05:26.378396       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0416 01:05:26.378501       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0416 01:05:26.498315       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0416 01:05:26.498364       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0416 01:05:26.570931       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0416 01:05:26.571020       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0416 01:05:26.585442       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0416 01:05:26.585603       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0416 01:05:26.606410       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0416 01:05:26.606506       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0416 01:05:26.648422       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0416 01:05:26.648472       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0416 01:05:26.727454       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0416 01:05:26.727505       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0416 01:05:26.890105       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0416 01:05:26.890217       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0416 01:05:29.407422       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 16 01:12:29 embed-certs-617092 kubelet[3902]: E0416 01:12:29.313824    3902 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 01:12:29 embed-certs-617092 kubelet[3902]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 01:12:29 embed-certs-617092 kubelet[3902]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 01:12:29 embed-certs-617092 kubelet[3902]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 01:12:29 embed-certs-617092 kubelet[3902]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 01:12:37 embed-certs-617092 kubelet[3902]: E0416 01:12:37.231101    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j5clp" podUID="99808b2d-344f-43b7-a29c-01f0a2026aa8"
	Apr 16 01:12:49 embed-certs-617092 kubelet[3902]: E0416 01:12:49.232617    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j5clp" podUID="99808b2d-344f-43b7-a29c-01f0a2026aa8"
	Apr 16 01:13:03 embed-certs-617092 kubelet[3902]: E0416 01:13:03.232886    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j5clp" podUID="99808b2d-344f-43b7-a29c-01f0a2026aa8"
	Apr 16 01:13:18 embed-certs-617092 kubelet[3902]: E0416 01:13:18.231354    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j5clp" podUID="99808b2d-344f-43b7-a29c-01f0a2026aa8"
	Apr 16 01:13:29 embed-certs-617092 kubelet[3902]: E0416 01:13:29.313377    3902 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 01:13:29 embed-certs-617092 kubelet[3902]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 01:13:29 embed-certs-617092 kubelet[3902]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 01:13:29 embed-certs-617092 kubelet[3902]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 01:13:29 embed-certs-617092 kubelet[3902]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 01:13:33 embed-certs-617092 kubelet[3902]: E0416 01:13:33.230742    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j5clp" podUID="99808b2d-344f-43b7-a29c-01f0a2026aa8"
	Apr 16 01:13:48 embed-certs-617092 kubelet[3902]: E0416 01:13:48.231887    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j5clp" podUID="99808b2d-344f-43b7-a29c-01f0a2026aa8"
	Apr 16 01:14:03 embed-certs-617092 kubelet[3902]: E0416 01:14:03.232642    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j5clp" podUID="99808b2d-344f-43b7-a29c-01f0a2026aa8"
	Apr 16 01:14:14 embed-certs-617092 kubelet[3902]: E0416 01:14:14.230807    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j5clp" podUID="99808b2d-344f-43b7-a29c-01f0a2026aa8"
	Apr 16 01:14:27 embed-certs-617092 kubelet[3902]: E0416 01:14:27.230679    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j5clp" podUID="99808b2d-344f-43b7-a29c-01f0a2026aa8"
	Apr 16 01:14:29 embed-certs-617092 kubelet[3902]: E0416 01:14:29.313678    3902 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 01:14:29 embed-certs-617092 kubelet[3902]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 01:14:29 embed-certs-617092 kubelet[3902]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 01:14:29 embed-certs-617092 kubelet[3902]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 01:14:29 embed-certs-617092 kubelet[3902]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 01:14:41 embed-certs-617092 kubelet[3902]: E0416 01:14:41.230886    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j5clp" podUID="99808b2d-344f-43b7-a29c-01f0a2026aa8"
	
	
	==> storage-provisioner [e4572e9cdc29a024c0d665ea3122bbf4ee193ce62bec3d6fca7a84a3da8eea5d] <==
	I0416 01:05:43.595643       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0416 01:05:43.675057       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0416 01:05:43.675224       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0416 01:05:43.703751       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0416 01:05:43.703933       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-617092_838c8a3d-3a26-4e1f-8eb9-2f38bc028b85!
	I0416 01:05:43.704893       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1f09d4c6-528f-4f5f-9f22-4bfa77107c5d", APIVersion:"v1", ResourceVersion:"459", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-617092_838c8a3d-3a26-4e1f-8eb9-2f38bc028b85 became leader
	I0416 01:05:43.804093       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-617092_838c8a3d-3a26-4e1f-8eb9-2f38bc028b85!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-617092 -n embed-certs-617092
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-617092 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-j5clp
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-617092 describe pod metrics-server-57f55c9bc5-j5clp
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-617092 describe pod metrics-server-57f55c9bc5-j5clp: exit status 1 (61.008571ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-j5clp" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-617092 describe pod metrics-server-57f55c9bc5-j5clp: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.15s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0416 01:07:20.169364   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-653942 -n default-k8s-diff-port-653942
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-04-16 01:15:07.55110686 +0000 UTC m=+5850.067868178
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
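Note on the failure above: the wait that timed out polls for pods labeled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace of this profile. A roughly equivalent manual check (an illustrative sketch mirroring the test's wait condition, not a command taken from the harness output) would be:

	kubectl --context default-k8s-diff-port-653942 --namespace=kubernetes-dashboard get pods --selector=k8s-app=kubernetes-dashboard
	kubectl --context default-k8s-diff-port-653942 wait --for=condition=ready --namespace=kubernetes-dashboard pod --selector=k8s-app=kubernetes-dashboard --timeout=9m0s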
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-653942 -n default-k8s-diff-port-653942
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-653942 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-653942 logs -n 25: (2.089176881s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p cert-expiration-359535                              | cert-expiration-359535       | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:52 UTC | 16 Apr 24 00:52 UTC |
	| start   | -p newest-cni-012509 --memory=2200 --alsologtostderr   | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:52 UTC | 16 Apr 24 00:53 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |                |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |                |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2                      |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p newest-cni-012509             | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:53 UTC | 16 Apr 24 00:53 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p newest-cni-012509                                   | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:53 UTC | 16 Apr 24 00:53 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p newest-cni-012509                  | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:53 UTC | 16 Apr 24 00:53 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p newest-cni-012509 --memory=2200 --alsologtostderr   | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:53 UTC | 16 Apr 24 00:54 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |                |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |                |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2                      |                              |         |                |                     |                     |
	| image   | newest-cni-012509 image list                           | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 00:54 UTC |
	|         | --format=json                                          |                              |         |                |                     |                     |
	| pause   | -p newest-cni-012509                                   | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 00:54 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |                |                     |                     |
	| unpause | -p newest-cni-012509                                   | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 00:54 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |                |                     |                     |
	| delete  | -p newest-cni-012509                                   | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 00:54 UTC |
	| delete  | -p newest-cni-012509                                   | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 00:54 UTC |
	| delete  | -p                                                     | disable-driver-mounts-988802 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 00:54 UTC |
	|         | disable-driver-mounts-988802                           |                              |         |                |                     |                     |
	| start   | -p embed-certs-617092                                  | embed-certs-617092           | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 00:56 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-653942       | default-k8s-diff-port-653942 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-572602                  | no-preload-572602            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-653942 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 01:06 UTC |
	|         | default-k8s-diff-port-653942                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-800769        | old-k8s-version-800769       | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| start   | -p no-preload-572602                                   | no-preload-572602            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 01:05 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |                |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2                      |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-617092            | embed-certs-617092           | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:56 UTC | 16 Apr 24 00:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-617092                                  | embed-certs-617092           | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:56 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-800769                              | old-k8s-version-800769       | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:56 UTC | 16 Apr 24 00:56 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-800769             | old-k8s-version-800769       | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:56 UTC | 16 Apr 24 00:56 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-800769                              | old-k8s-version-800769       | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:56 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-617092                 | embed-certs-617092           | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:58 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p embed-certs-617092                                  | embed-certs-617092           | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:58 UTC | 16 Apr 24 01:05 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 00:58:42
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 00:58:42.797832   62747 out.go:291] Setting OutFile to fd 1 ...
	I0416 00:58:42.797983   62747 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:58:42.797994   62747 out.go:304] Setting ErrFile to fd 2...
	I0416 00:58:42.797998   62747 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:58:42.798182   62747 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-7542/.minikube/bin
	I0416 00:58:42.798686   62747 out.go:298] Setting JSON to false
	I0416 00:58:42.799629   62747 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6067,"bootTime":1713223056,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0416 00:58:42.799687   62747 start.go:139] virtualization: kvm guest
	I0416 00:58:42.801878   62747 out.go:177] * [embed-certs-617092] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0416 00:58:42.803202   62747 out.go:177]   - MINIKUBE_LOCATION=18647
	I0416 00:58:42.804389   62747 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 00:58:42.803288   62747 notify.go:220] Checking for updates...
	I0416 00:58:42.805742   62747 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0416 00:58:42.807023   62747 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18647-7542/.minikube
	I0416 00:58:42.808185   62747 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0416 00:58:42.809402   62747 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 00:58:42.811188   62747 config.go:182] Loaded profile config "embed-certs-617092": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 00:58:42.811772   62747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:58:42.811833   62747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:58:42.826377   62747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44973
	I0416 00:58:42.826730   62747 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:58:42.827217   62747 main.go:141] libmachine: Using API Version  1
	I0416 00:58:42.827233   62747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:58:42.827541   62747 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:58:42.827737   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 00:58:42.827964   62747 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 00:58:42.828239   62747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:58:42.828274   62747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:58:42.842499   62747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34791
	I0416 00:58:42.842872   62747 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:58:42.843283   62747 main.go:141] libmachine: Using API Version  1
	I0416 00:58:42.843300   62747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:58:42.843636   62747 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:58:42.843830   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 00:58:42.874583   62747 out.go:177] * Using the kvm2 driver based on existing profile
	I0416 00:58:42.875910   62747 start.go:297] selected driver: kvm2
	I0416 00:58:42.875933   62747 start.go:901] validating driver "kvm2" against &{Name:embed-certs-617092 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-617092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.225 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 00:58:42.876072   62747 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 00:58:42.876741   62747 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 00:58:42.876826   62747 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18647-7542/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0416 00:58:42.890834   62747 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0416 00:58:42.891212   62747 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 00:58:42.891270   62747 cni.go:84] Creating CNI manager for ""
	I0416 00:58:42.891283   62747 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 00:58:42.891314   62747 start.go:340] cluster config:
	{Name:embed-certs-617092 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-617092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.225 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 00:58:42.891412   62747 iso.go:125] acquiring lock: {Name:mk848ef90fbc2a1876645fc8fc16af382c3bcaa9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 00:58:42.893179   62747 out.go:177] * Starting "embed-certs-617092" primary control-plane node in "embed-certs-617092" cluster
	I0416 00:58:42.894232   62747 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 00:58:42.894260   62747 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0416 00:58:42.894267   62747 cache.go:56] Caching tarball of preloaded images
	I0416 00:58:42.894353   62747 preload.go:173] Found /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0416 00:58:42.894365   62747 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0416 00:58:42.894458   62747 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092/config.json ...
	I0416 00:58:42.894628   62747 start.go:360] acquireMachinesLock for embed-certs-617092: {Name:mk92bff49461487f8cebf2747ccf61ccb9c772a2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 00:58:47.545405   61267 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.216:22: connect: no route to host
	I0416 00:58:50.617454   61267 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.216:22: connect: no route to host
	I0416 00:58:56.697459   61267 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.216:22: connect: no route to host
	I0416 00:58:59.769461   61267 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.216:22: connect: no route to host
	I0416 00:59:05.849462   61267 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.216:22: connect: no route to host
	I0416 00:59:08.921459   61267 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.216:22: connect: no route to host
	I0416 00:59:15.001430   61267 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.216:22: connect: no route to host
	I0416 00:59:21.078070   61500 start.go:364] duration metric: took 4m33.431027521s to acquireMachinesLock for "no-preload-572602"
	I0416 00:59:21.078134   61500 start.go:96] Skipping create...Using existing machine configuration
	I0416 00:59:21.078152   61500 fix.go:54] fixHost starting: 
	I0416 00:59:21.078760   61500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:59:21.078809   61500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:59:21.093476   61500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36767
	I0416 00:59:21.093934   61500 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:59:21.094422   61500 main.go:141] libmachine: Using API Version  1
	I0416 00:59:21.094448   61500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:59:21.094749   61500 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:59:21.094902   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 00:59:21.095048   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetState
	I0416 00:59:21.096678   61500 fix.go:112] recreateIfNeeded on no-preload-572602: state=Stopped err=<nil>
	I0416 00:59:21.096697   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	W0416 00:59:21.096846   61500 fix.go:138] unexpected machine state, will restart: <nil>
	I0416 00:59:21.098527   61500 out.go:177] * Restarting existing kvm2 VM for "no-preload-572602" ...
	I0416 00:59:18.073453   61267 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.216:22: connect: no route to host
	I0416 00:59:21.075633   61267 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 00:59:21.075671   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetMachineName
	I0416 00:59:21.075991   61267 buildroot.go:166] provisioning hostname "default-k8s-diff-port-653942"
	I0416 00:59:21.076014   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetMachineName
	I0416 00:59:21.076225   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 00:59:21.077923   61267 machine.go:97] duration metric: took 4m34.542024225s to provisionDockerMachine
	I0416 00:59:21.077967   61267 fix.go:56] duration metric: took 4m34.567596715s for fixHost
	I0416 00:59:21.077978   61267 start.go:83] releasing machines lock for "default-k8s-diff-port-653942", held for 4m34.567645643s
	W0416 00:59:21.078001   61267 start.go:713] error starting host: provision: host is not running
	W0416 00:59:21.078088   61267 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0416 00:59:21.078097   61267 start.go:728] Will try again in 5 seconds ...
	I0416 00:59:21.099788   61500 main.go:141] libmachine: (no-preload-572602) Calling .Start
	I0416 00:59:21.099966   61500 main.go:141] libmachine: (no-preload-572602) Ensuring networks are active...
	I0416 00:59:21.100656   61500 main.go:141] libmachine: (no-preload-572602) Ensuring network default is active
	I0416 00:59:21.100937   61500 main.go:141] libmachine: (no-preload-572602) Ensuring network mk-no-preload-572602 is active
	I0416 00:59:21.101282   61500 main.go:141] libmachine: (no-preload-572602) Getting domain xml...
	I0416 00:59:21.101905   61500 main.go:141] libmachine: (no-preload-572602) Creating domain...
	I0416 00:59:22.294019   61500 main.go:141] libmachine: (no-preload-572602) Waiting to get IP...
	I0416 00:59:22.294922   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:22.295294   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:22.295349   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:22.295262   62936 retry.go:31] will retry after 220.952312ms: waiting for machine to come up
	I0416 00:59:22.517753   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:22.518334   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:22.518358   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:22.518287   62936 retry.go:31] will retry after 377.547009ms: waiting for machine to come up
	I0416 00:59:26.081716   61267 start.go:360] acquireMachinesLock for default-k8s-diff-port-653942: {Name:mk92bff49461487f8cebf2747ccf61ccb9c772a2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 00:59:22.897924   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:22.898442   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:22.898465   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:22.898394   62936 retry.go:31] will retry after 450.415086ms: waiting for machine to come up
	I0416 00:59:23.349893   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:23.350383   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:23.350420   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:23.350333   62936 retry.go:31] will retry after 385.340718ms: waiting for machine to come up
	I0416 00:59:23.736854   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:23.737225   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:23.737262   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:23.737205   62936 retry.go:31] will retry after 696.175991ms: waiting for machine to come up
	I0416 00:59:24.435231   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:24.435587   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:24.435616   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:24.435557   62936 retry.go:31] will retry after 644.402152ms: waiting for machine to come up
	I0416 00:59:25.081355   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:25.081660   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:25.081697   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:25.081626   62936 retry.go:31] will retry after 809.585997ms: waiting for machine to come up
	I0416 00:59:25.892402   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:25.892767   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:25.892797   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:25.892722   62936 retry.go:31] will retry after 1.07477705s: waiting for machine to come up
	I0416 00:59:26.969227   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:26.969617   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:26.969646   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:26.969561   62936 retry.go:31] will retry after 1.243937595s: waiting for machine to come up
	I0416 00:59:28.214995   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:28.215412   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:28.215433   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:28.215364   62936 retry.go:31] will retry after 1.775188434s: waiting for machine to come up
	I0416 00:59:29.993420   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:29.993825   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:29.993853   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:29.993779   62936 retry.go:31] will retry after 2.73873778s: waiting for machine to come up
	I0416 00:59:32.735350   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:32.735758   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:32.735809   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:32.735721   62936 retry.go:31] will retry after 2.208871896s: waiting for machine to come up
	I0416 00:59:34.947005   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:34.947400   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:34.947431   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:34.947358   62936 retry.go:31] will retry after 4.484880009s: waiting for machine to come up
	I0416 00:59:40.669954   62139 start.go:364] duration metric: took 3m18.466569456s to acquireMachinesLock for "old-k8s-version-800769"
	I0416 00:59:40.670015   62139 start.go:96] Skipping create...Using existing machine configuration
	I0416 00:59:40.670038   62139 fix.go:54] fixHost starting: 
	I0416 00:59:40.670411   62139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:59:40.670448   62139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:59:40.686269   62139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39043
	I0416 00:59:40.686633   62139 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:59:40.687125   62139 main.go:141] libmachine: Using API Version  1
	I0416 00:59:40.687162   62139 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:59:40.687481   62139 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:59:40.687672   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	I0416 00:59:40.687838   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetState
	I0416 00:59:40.689108   62139 fix.go:112] recreateIfNeeded on old-k8s-version-800769: state=Stopped err=<nil>
	I0416 00:59:40.689132   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	W0416 00:59:40.689286   62139 fix.go:138] unexpected machine state, will restart: <nil>
	I0416 00:59:40.691869   62139 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-800769" ...
	I0416 00:59:40.693292   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .Start
	I0416 00:59:40.693450   62139 main.go:141] libmachine: (old-k8s-version-800769) Ensuring networks are active...
	I0416 00:59:40.694152   62139 main.go:141] libmachine: (old-k8s-version-800769) Ensuring network default is active
	I0416 00:59:40.694457   62139 main.go:141] libmachine: (old-k8s-version-800769) Ensuring network mk-old-k8s-version-800769 is active
	I0416 00:59:40.694883   62139 main.go:141] libmachine: (old-k8s-version-800769) Getting domain xml...
	I0416 00:59:40.695720   62139 main.go:141] libmachine: (old-k8s-version-800769) Creating domain...
	I0416 00:59:41.913001   62139 main.go:141] libmachine: (old-k8s-version-800769) Waiting to get IP...
	I0416 00:59:41.913874   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:41.914260   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:41.914318   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:41.914237   63071 retry.go:31] will retry after 261.032707ms: waiting for machine to come up
	I0416 00:59:39.436244   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.436664   61500 main.go:141] libmachine: (no-preload-572602) Found IP for machine: 192.168.39.121
	I0416 00:59:39.436686   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has current primary IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.436694   61500 main.go:141] libmachine: (no-preload-572602) Reserving static IP address...
	I0416 00:59:39.437114   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "no-preload-572602", mac: "52:54:00:fb:a5:f3", ip: "192.168.39.121"} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:39.437151   61500 main.go:141] libmachine: (no-preload-572602) Reserved static IP address: 192.168.39.121
	I0416 00:59:39.437183   61500 main.go:141] libmachine: (no-preload-572602) DBG | skip adding static IP to network mk-no-preload-572602 - found existing host DHCP lease matching {name: "no-preload-572602", mac: "52:54:00:fb:a5:f3", ip: "192.168.39.121"}
	I0416 00:59:39.437197   61500 main.go:141] libmachine: (no-preload-572602) Waiting for SSH to be available...
	I0416 00:59:39.437215   61500 main.go:141] libmachine: (no-preload-572602) DBG | Getting to WaitForSSH function...
	I0416 00:59:39.439255   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.439613   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:39.439642   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.439723   61500 main.go:141] libmachine: (no-preload-572602) DBG | Using SSH client type: external
	I0416 00:59:39.439756   61500 main.go:141] libmachine: (no-preload-572602) DBG | Using SSH private key: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/no-preload-572602/id_rsa (-rw-------)
	I0416 00:59:39.439799   61500 main.go:141] libmachine: (no-preload-572602) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.121 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18647-7542/.minikube/machines/no-preload-572602/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0416 00:59:39.439822   61500 main.go:141] libmachine: (no-preload-572602) DBG | About to run SSH command:
	I0416 00:59:39.439835   61500 main.go:141] libmachine: (no-preload-572602) DBG | exit 0
	I0416 00:59:39.565190   61500 main.go:141] libmachine: (no-preload-572602) DBG | SSH cmd err, output: <nil>: 
	I0416 00:59:39.565584   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetConfigRaw
	I0416 00:59:39.566223   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetIP
	I0416 00:59:39.568572   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.568869   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:39.568906   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.569083   61500 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602/config.json ...
	I0416 00:59:39.569300   61500 machine.go:94] provisionDockerMachine start ...
	I0416 00:59:39.569318   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 00:59:39.569526   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:59:39.571536   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.571842   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:39.571868   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.572004   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 00:59:39.572189   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:39.572352   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:39.572505   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 00:59:39.572751   61500 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:39.572974   61500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I0416 00:59:39.572991   61500 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 00:59:39.681544   61500 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 00:59:39.681574   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetMachineName
	I0416 00:59:39.681845   61500 buildroot.go:166] provisioning hostname "no-preload-572602"
	I0416 00:59:39.681874   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetMachineName
	I0416 00:59:39.682088   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:59:39.684694   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.685029   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:39.685063   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.685259   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 00:59:39.685453   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:39.685608   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:39.685737   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 00:59:39.685887   61500 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:39.686066   61500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I0416 00:59:39.686090   61500 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-572602 && echo "no-preload-572602" | sudo tee /etc/hostname
	I0416 00:59:39.804124   61500 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-572602
	
	I0416 00:59:39.804149   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:59:39.807081   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.807447   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:39.807480   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.807651   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 00:59:39.807860   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:39.808048   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:39.808202   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 00:59:39.808393   61500 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:39.808618   61500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I0416 00:59:39.808644   61500 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-572602' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-572602/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-572602' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 00:59:39.921781   61500 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 00:59:39.921824   61500 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18647-7542/.minikube CaCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18647-7542/.minikube}
	I0416 00:59:39.921847   61500 buildroot.go:174] setting up certificates
	I0416 00:59:39.921857   61500 provision.go:84] configureAuth start
	I0416 00:59:39.921872   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetMachineName
	I0416 00:59:39.922150   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetIP
	I0416 00:59:39.924726   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.925052   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:39.925081   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.925199   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:59:39.927315   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.927820   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:39.927869   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.927934   61500 provision.go:143] copyHostCerts
	I0416 00:59:39.928005   61500 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem, removing ...
	I0416 00:59:39.928031   61500 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem
	I0416 00:59:39.928122   61500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem (1082 bytes)
	I0416 00:59:39.928231   61500 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem, removing ...
	I0416 00:59:39.928241   61500 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem
	I0416 00:59:39.928284   61500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem (1123 bytes)
	I0416 00:59:39.928370   61500 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem, removing ...
	I0416 00:59:39.928379   61500 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem
	I0416 00:59:39.928428   61500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem (1675 bytes)
	I0416 00:59:39.928498   61500 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem org=jenkins.no-preload-572602 san=[127.0.0.1 192.168.39.121 localhost minikube no-preload-572602]
	I0416 00:59:40.000129   61500 provision.go:177] copyRemoteCerts
	I0416 00:59:40.000200   61500 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 00:59:40.000236   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:59:40.002726   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.003028   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:40.003057   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.003168   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 00:59:40.003351   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:40.003471   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 00:59:40.003577   61500 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/no-preload-572602/id_rsa Username:docker}
	I0416 00:59:40.087468   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0416 00:59:40.115336   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0416 00:59:40.142695   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0416 00:59:40.169631   61500 provision.go:87] duration metric: took 247.759459ms to configureAuth
	I0416 00:59:40.169657   61500 buildroot.go:189] setting minikube options for container-runtime
	I0416 00:59:40.169824   61500 config.go:182] Loaded profile config "no-preload-572602": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0416 00:59:40.169906   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:59:40.172164   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.172503   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:40.172531   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.172689   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 00:59:40.172875   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:40.173033   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:40.173182   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 00:59:40.173311   61500 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:40.173465   61500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I0416 00:59:40.173480   61500 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0416 00:59:40.437143   61500 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0416 00:59:40.437182   61500 machine.go:97] duration metric: took 867.868152ms to provisionDockerMachine
	I0416 00:59:40.437194   61500 start.go:293] postStartSetup for "no-preload-572602" (driver="kvm2")
	I0416 00:59:40.437211   61500 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 00:59:40.437233   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 00:59:40.437536   61500 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 00:59:40.437564   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:59:40.440246   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.440596   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:40.440637   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.440759   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 00:59:40.440981   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:40.441186   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 00:59:40.441319   61500 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/no-preload-572602/id_rsa Username:docker}
	I0416 00:59:40.524157   61500 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 00:59:40.528556   61500 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 00:59:40.528580   61500 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/addons for local assets ...
	I0416 00:59:40.528647   61500 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/files for local assets ...
	I0416 00:59:40.528756   61500 filesync.go:149] local asset: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem -> 148972.pem in /etc/ssl/certs
	I0416 00:59:40.528877   61500 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 00:59:40.538275   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /etc/ssl/certs/148972.pem (1708 bytes)
	I0416 00:59:40.562693   61500 start.go:296] duration metric: took 125.48438ms for postStartSetup
	I0416 00:59:40.562728   61500 fix.go:56] duration metric: took 19.484586221s for fixHost
	I0416 00:59:40.562746   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:59:40.565410   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.565717   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:40.565756   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.565920   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 00:59:40.566103   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:40.566269   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:40.566438   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 00:59:40.566587   61500 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:40.566738   61500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I0416 00:59:40.566749   61500 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 00:59:40.669778   61500 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713229180.641382554
	
	I0416 00:59:40.669802   61500 fix.go:216] guest clock: 1713229180.641382554
	I0416 00:59:40.669811   61500 fix.go:229] Guest: 2024-04-16 00:59:40.641382554 +0000 UTC Remote: 2024-04-16 00:59:40.56273146 +0000 UTC m=+293.069651959 (delta=78.651094ms)
	I0416 00:59:40.669839   61500 fix.go:200] guest clock delta is within tolerance: 78.651094ms
	I0416 00:59:40.669857   61500 start.go:83] releasing machines lock for "no-preload-572602", held for 19.591740017s
	I0416 00:59:40.669883   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 00:59:40.670163   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetIP
	I0416 00:59:40.672800   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.673187   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:40.673234   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.673386   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 00:59:40.673841   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 00:59:40.673993   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 00:59:40.674067   61500 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 00:59:40.674115   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:59:40.674155   61500 ssh_runner.go:195] Run: cat /version.json
	I0416 00:59:40.674174   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:59:40.676617   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.676776   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.677006   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:40.677030   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.677126   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 00:59:40.677277   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:40.677299   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.677336   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:40.677499   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 00:59:40.677511   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 00:59:40.677635   61500 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/no-preload-572602/id_rsa Username:docker}
	I0416 00:59:40.677768   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:40.678072   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 00:59:40.678224   61500 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/no-preload-572602/id_rsa Username:docker}
	I0416 00:59:40.787049   61500 ssh_runner.go:195] Run: systemctl --version
	I0416 00:59:40.793568   61500 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0416 00:59:40.941445   61500 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 00:59:40.949062   61500 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 00:59:40.949177   61500 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 00:59:40.966425   61500 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 00:59:40.966454   61500 start.go:494] detecting cgroup driver to use...
	I0416 00:59:40.966525   61500 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 00:59:40.985126   61500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 00:59:40.999931   61500 docker.go:217] disabling cri-docker service (if available) ...
	I0416 00:59:41.000004   61500 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 00:59:41.015597   61500 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 00:59:41.030610   61500 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 00:59:41.151240   61500 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 00:59:41.312384   61500 docker.go:233] disabling docker service ...
	I0416 00:59:41.312464   61500 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 00:59:41.329263   61500 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 00:59:41.345192   61500 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 00:59:41.463330   61500 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 00:59:41.595259   61500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0416 00:59:41.610495   61500 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 00:59:41.632527   61500 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0416 00:59:41.632580   61500 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:59:41.644625   61500 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 00:59:41.644723   61500 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:59:41.656056   61500 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:59:41.667069   61500 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:59:41.682783   61500 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 00:59:41.694760   61500 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:59:41.712505   61500 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:59:41.737338   61500 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:59:41.747518   61500 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 00:59:41.756586   61500 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0416 00:59:41.756656   61500 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0416 00:59:41.769230   61500 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 00:59:41.778424   61500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 00:59:41.894135   61500 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0416 00:59:42.039732   61500 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0416 00:59:42.039812   61500 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0416 00:59:42.044505   61500 start.go:562] Will wait 60s for crictl version
	I0416 00:59:42.044551   61500 ssh_runner.go:195] Run: which crictl
	I0416 00:59:42.049632   61500 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 00:59:42.106886   61500 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0416 00:59:42.106981   61500 ssh_runner.go:195] Run: crio --version
	I0416 00:59:42.137092   61500 ssh_runner.go:195] Run: crio --version
	I0416 00:59:42.170036   61500 out.go:177] * Preparing Kubernetes v1.30.0-rc.2 on CRI-O 1.29.1 ...
	I0416 00:59:42.171395   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetIP
	I0416 00:59:42.174790   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:42.175217   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:42.175250   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:42.175506   61500 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0416 00:59:42.180987   61500 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 00:59:42.198472   61500 kubeadm.go:877] updating cluster {Name:no-preload-572602 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:no-preload-572602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 00:59:42.198595   61500 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime crio
	I0416 00:59:42.198639   61500 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 00:59:42.236057   61500 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0-rc.2". assuming images are not preloaded.
	I0416 00:59:42.236084   61500 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0-rc.2 registry.k8s.io/kube-controller-manager:v1.30.0-rc.2 registry.k8s.io/kube-scheduler:v1.30.0-rc.2 registry.k8s.io/kube-proxy:v1.30.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0416 00:59:42.236146   61500 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 00:59:42.236166   61500 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0416 00:59:42.236180   61500 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0-rc.2
	I0416 00:59:42.236182   61500 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0416 00:59:42.236212   61500 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0-rc.2
	I0416 00:59:42.236238   61500 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0416 00:59:42.236287   61500 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.2
	I0416 00:59:42.236164   61500 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0-rc.2
	I0416 00:59:42.237740   61500 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0416 00:59:42.237756   61500 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0416 00:59:42.237763   61500 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0-rc.2
	I0416 00:59:42.237779   61500 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0-rc.2
	I0416 00:59:42.237740   61500 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0416 00:59:42.237848   61500 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.2
	I0416 00:59:42.237847   61500 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 00:59:42.238087   61500 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0-rc.2
	I0416 00:59:42.410682   61500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0-rc.2
	I0416 00:59:42.445824   61500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0416 00:59:42.446874   61500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0416 00:59:42.448854   61500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0-rc.2
	I0416 00:59:42.449450   61500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0416 00:59:42.452121   61500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0-rc.2
	I0416 00:59:42.458966   61500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0-rc.2
	I0416 00:59:42.480556   61500 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0-rc.2" does not exist at hash "461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6" in container runtime
	I0416 00:59:42.480608   61500 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0-rc.2
	I0416 00:59:42.480670   61500 ssh_runner.go:195] Run: which crictl
	I0416 00:59:42.176660   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:42.177053   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:42.177084   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:42.177031   63071 retry.go:31] will retry after 268.951362ms: waiting for machine to come up
	I0416 00:59:42.447724   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:42.448132   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:42.448159   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:42.448097   63071 retry.go:31] will retry after 293.793417ms: waiting for machine to come up
	I0416 00:59:42.743375   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:42.743845   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:42.743874   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:42.743801   63071 retry.go:31] will retry after 494.163372ms: waiting for machine to come up
	I0416 00:59:43.239314   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:43.239761   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:43.239790   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:43.239708   63071 retry.go:31] will retry after 698.851999ms: waiting for machine to come up
	I0416 00:59:43.939998   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:43.940577   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:43.940607   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:43.940535   63071 retry.go:31] will retry after 764.693004ms: waiting for machine to come up
	I0416 00:59:44.706335   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:44.706673   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:44.706724   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:44.706626   63071 retry.go:31] will retry after 874.082115ms: waiting for machine to come up
	I0416 00:59:45.581896   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:45.582331   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:45.582361   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:45.582280   63071 retry.go:31] will retry after 966.259345ms: waiting for machine to come up
	I0416 00:59:46.550671   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:46.551111   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:46.551140   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:46.551062   63071 retry.go:31] will retry after 1.191034468s: waiting for machine to come up
	I0416 00:59:42.583284   61500 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0416 00:59:42.583332   61500 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0416 00:59:42.583377   61500 ssh_runner.go:195] Run: which crictl
	I0416 00:59:42.724785   61500 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0-rc.2" does not exist at hash "ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b" in container runtime
	I0416 00:59:42.724827   61500 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.2
	I0416 00:59:42.724878   61500 ssh_runner.go:195] Run: which crictl
	I0416 00:59:42.724899   61500 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0416 00:59:42.724938   61500 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0416 00:59:42.724938   61500 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0-rc.2" does not exist at hash "35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e" in container runtime
	I0416 00:59:42.724964   61500 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0-rc.2
	I0416 00:59:42.724979   61500 ssh_runner.go:195] Run: which crictl
	I0416 00:59:42.724993   61500 ssh_runner.go:195] Run: which crictl
	I0416 00:59:42.725019   61500 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0-rc.2" does not exist at hash "65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1" in container runtime
	I0416 00:59:42.725051   61500 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0-rc.2
	I0416 00:59:42.725063   61500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0-rc.2
	I0416 00:59:42.725088   61500 ssh_runner.go:195] Run: which crictl
	I0416 00:59:42.725102   61500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0416 00:59:42.739346   61500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0416 00:59:42.739764   61500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0-rc.2
	I0416 00:59:42.787888   61500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0-rc.2
	I0416 00:59:42.787977   61500 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.2
	I0416 00:59:42.788024   61500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0-rc.2
	I0416 00:59:42.788084   61500 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.2
	I0416 00:59:42.815167   61500 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0416 00:59:42.815274   61500 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0416 00:59:42.845627   61500 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0416 00:59:42.845741   61500 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0416 00:59:42.848065   61500 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.2
	I0416 00:59:42.848134   61500 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.2
	I0416 00:59:42.880543   61500 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.2
	I0416 00:59:42.880557   61500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.2 (exists)
	I0416 00:59:42.880575   61500 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.2
	I0416 00:59:42.880628   61500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.2
	I0416 00:59:42.880648   61500 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.2
	I0416 00:59:42.907207   61500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0416 00:59:42.907245   61500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0416 00:59:42.907269   61500 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.2
	I0416 00:59:42.907295   61500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.2 (exists)
	I0416 00:59:42.907334   61500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.2 (exists)
	I0416 00:59:42.907350   61500 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0-rc.2
	I0416 00:59:43.138705   61500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 00:59:44.951278   61500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.2: (2.07061835s)
	I0416 00:59:44.951295   61500 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0-rc.2: (2.04392036s)
	I0416 00:59:44.951348   61500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0-rc.2 (exists)
	I0416 00:59:44.951309   61500 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.2 from cache
	I0416 00:59:44.951364   61500 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.812619758s)
	I0416 00:59:44.951410   61500 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0416 00:59:44.951448   61500 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 00:59:44.951374   61500 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0416 00:59:44.951506   61500 ssh_runner.go:195] Run: which crictl
	I0416 00:59:44.951508   61500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0416 00:59:47.744187   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:47.744683   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:47.744712   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:47.744637   63071 retry.go:31] will retry after 2.263605663s: waiting for machine to come up
	I0416 00:59:50.011136   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:50.011605   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:50.011632   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:50.011566   63071 retry.go:31] will retry after 2.648982849s: waiting for machine to come up
	I0416 00:59:48.656623   61500 ssh_runner.go:235] Completed: which crictl: (3.705085257s)
	I0416 00:59:48.656705   61500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 00:59:48.656715   61500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (3.705109475s)
	I0416 00:59:48.656743   61500 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0416 00:59:48.656769   61500 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0416 00:59:48.656798   61500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0416 00:59:50.560030   61500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.903209359s)
	I0416 00:59:50.560071   61500 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0416 00:59:50.560085   61500 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.90335887s)
	I0416 00:59:50.560096   61500 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.2
	I0416 00:59:50.560148   61500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.2
	I0416 00:59:50.560151   61500 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0416 00:59:50.560309   61500 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0416 00:59:52.662443   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:52.662852   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:52.662883   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:52.662815   63071 retry.go:31] will retry after 2.183508059s: waiting for machine to come up
	I0416 00:59:54.849225   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:54.849701   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:54.849734   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:54.849649   63071 retry.go:31] will retry after 3.201585234s: waiting for machine to come up
	I0416 00:59:52.739620   61500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.2: (2.179436189s)
	I0416 00:59:52.739658   61500 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.2 from cache
	I0416 00:59:52.739688   61500 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.2
	I0416 00:59:52.739697   61500 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.179365348s)
	I0416 00:59:52.739724   61500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0416 00:59:52.739747   61500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.2
	I0416 00:59:55.098350   61500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.2: (2.358579586s)
	I0416 00:59:55.098381   61500 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.2 from cache
	I0416 00:59:55.098408   61500 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0-rc.2
	I0416 00:59:55.098454   61500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.2
	I0416 00:59:57.166586   61500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.2: (2.068105529s)
	I0416 00:59:57.166615   61500 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.2 from cache
	I0416 00:59:57.166644   61500 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0416 00:59:57.166697   61500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
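For context: each "Loading image" step above is an ssh_runner Run/Completed pair, i.e. `sudo podman load -i <tarball>` executed on the guest and timed. A minimal sketch of that pattern, assuming a reachable guest with key-based SSH and podman installed; the helper name, host and paths below are illustrative, not minikube's actual ssh_runner API:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

// loadImageOverSSH runs `sudo podman load -i <tarball>` on the guest via the
// local ssh binary and reports how long the load took. Host, key path and
// tarball path are illustrative assumptions.
func loadImageOverSSH(host, keyPath, tarball string) (time.Duration, error) {
	start := time.Now()
	cmd := exec.Command("ssh",
		"-i", keyPath,
		"-o", "StrictHostKeyChecking=no",
		host,
		"sudo podman load -i "+tarball,
	)
	if out, err := cmd.CombinedOutput(); err != nil {
		return 0, fmt.Errorf("podman load failed: %v: %s", err, out)
	}
	return time.Since(start), nil
}

func main() {
	d, err := loadImageOverSSH("docker@192.168.39.121", "id_rsa",
		"/var/lib/minikube/images/coredns_v1.11.1")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("loaded in %s\n", d)
}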
	I0416 00:59:59.394339   62747 start.go:364] duration metric: took 1m16.499681915s to acquireMachinesLock for "embed-certs-617092"
	I0416 00:59:59.394389   62747 start.go:96] Skipping create...Using existing machine configuration
	I0416 00:59:59.394412   62747 fix.go:54] fixHost starting: 
	I0416 00:59:59.394834   62747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:59:59.394896   62747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:59:59.414712   62747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38637
	I0416 00:59:59.415464   62747 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:59:59.416123   62747 main.go:141] libmachine: Using API Version  1
	I0416 00:59:59.416150   62747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:59:59.416436   62747 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:59:59.416623   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 00:59:59.416786   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetState
	I0416 00:59:59.418413   62747 fix.go:112] recreateIfNeeded on embed-certs-617092: state=Stopped err=<nil>
	I0416 00:59:59.418449   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	W0416 00:59:59.418609   62747 fix.go:138] unexpected machine state, will restart: <nil>
	I0416 00:59:59.420560   62747 out.go:177] * Restarting existing kvm2 VM for "embed-certs-617092" ...
	I0416 00:59:58.052613   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.053048   62139 main.go:141] libmachine: (old-k8s-version-800769) Found IP for machine: 192.168.83.98
	I0416 00:59:58.053073   62139 main.go:141] libmachine: (old-k8s-version-800769) Reserving static IP address...
	I0416 00:59:58.053089   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has current primary IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.053517   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "old-k8s-version-800769", mac: "52:54:00:a1:ad:da", ip: "192.168.83.98"} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.053549   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | skip adding static IP to network mk-old-k8s-version-800769 - found existing host DHCP lease matching {name: "old-k8s-version-800769", mac: "52:54:00:a1:ad:da", ip: "192.168.83.98"}
	I0416 00:59:58.053569   62139 main.go:141] libmachine: (old-k8s-version-800769) Reserved static IP address: 192.168.83.98
	I0416 00:59:58.053587   62139 main.go:141] libmachine: (old-k8s-version-800769) Waiting for SSH to be available...
	I0416 00:59:58.053602   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | Getting to WaitForSSH function...
	I0416 00:59:58.055598   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.055907   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.055941   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.056038   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | Using SSH client type: external
	I0416 00:59:58.056088   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | Using SSH private key: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769/id_rsa (-rw-------)
	I0416 00:59:58.056132   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.98 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0416 00:59:58.056149   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | About to run SSH command:
	I0416 00:59:58.056162   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | exit 0
	I0416 00:59:58.185675   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | SSH cmd err, output: <nil>: 
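For context: the "will retry after ...: waiting for machine to come up" lines and the `exit 0` probe above are a WaitForSSH loop: keep trying a no-op command over SSH until the guest answers or a deadline passes. A minimal sketch under those assumptions (host, key path and backoff values are illustrative):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH retries a cheap `exit 0` over SSH until it succeeds or the
// deadline passes, roughly what the WaitForSSH/retry lines above are doing.
func waitForSSH(host, keyPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for wait := time.Second; time.Now().Before(deadline); wait *= 2 {
		cmd := exec.Command("ssh",
			"-i", keyPath,
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			host, "exit 0")
		if err := cmd.Run(); err == nil {
			return nil // guest is answering on port 22
		}
		fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
		time.Sleep(wait)
	}
	return fmt.Errorf("ssh to %s not available after %s", host, timeout)
}

func main() {
	if err := waitForSSH("docker@192.168.83.98", "id_rsa", 2*time.Minute); err != nil {
		panic(err)
	}
}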
	I0416 00:59:58.186055   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetConfigRaw
	I0416 00:59:58.186802   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetIP
	I0416 00:59:58.189772   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.190219   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.190257   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.190448   62139 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/config.json ...
	I0416 00:59:58.190666   62139 machine.go:94] provisionDockerMachine start ...
	I0416 00:59:58.190685   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	I0416 00:59:58.190902   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:58.193570   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.193954   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.193982   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.194139   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:58.194337   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.194492   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.194636   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:58.194786   62139 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:58.195041   62139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.83.98 22 <nil> <nil>}
	I0416 00:59:58.195056   62139 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 00:59:58.321824   62139 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 00:59:58.321857   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetMachineName
	I0416 00:59:58.322146   62139 buildroot.go:166] provisioning hostname "old-k8s-version-800769"
	I0416 00:59:58.322175   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetMachineName
	I0416 00:59:58.322381   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:58.324941   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.325288   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.325316   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.325423   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:58.325613   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.325776   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.325936   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:58.326109   62139 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:58.326322   62139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.83.98 22 <nil> <nil>}
	I0416 00:59:58.326339   62139 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-800769 && echo "old-k8s-version-800769" | sudo tee /etc/hostname
	I0416 00:59:58.455194   62139 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-800769
	
	I0416 00:59:58.455236   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:58.458021   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.458423   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.458458   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.458662   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:58.458848   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.459013   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.459162   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:58.459353   62139 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:58.459507   62139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.83.98 22 <nil> <nil>}
	I0416 00:59:58.459524   62139 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-800769' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-800769/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-800769' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 00:59:58.587318   62139 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 00:59:58.587351   62139 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18647-7542/.minikube CaCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18647-7542/.minikube}
	I0416 00:59:58.587391   62139 buildroot.go:174] setting up certificates
	I0416 00:59:58.587400   62139 provision.go:84] configureAuth start
	I0416 00:59:58.587413   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetMachineName
	I0416 00:59:58.587686   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetIP
	I0416 00:59:58.590415   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.590739   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.590778   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.590880   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:58.593282   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.593728   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.593759   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.593931   62139 provision.go:143] copyHostCerts
	I0416 00:59:58.593988   62139 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem, removing ...
	I0416 00:59:58.594007   62139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem
	I0416 00:59:58.594079   62139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem (1082 bytes)
	I0416 00:59:58.594213   62139 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem, removing ...
	I0416 00:59:58.594222   62139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem
	I0416 00:59:58.594263   62139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem (1123 bytes)
	I0416 00:59:58.594372   62139 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem, removing ...
	I0416 00:59:58.594383   62139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem
	I0416 00:59:58.594408   62139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem (1675 bytes)
	I0416 00:59:58.594470   62139 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-800769 san=[127.0.0.1 192.168.83.98 localhost minikube old-k8s-version-800769]
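For context: provision.go generates a server certificate signed by the minikube CA with the SAN list shown above (127.0.0.1, the guest IP, localhost, minikube, the profile name). A minimal sketch of issuing such a cert with crypto/x509, using a throwaway in-memory CA instead of the real ca.pem/ca-key.pem; error handling is elided for brevity:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA, standing in for minikube's ca.pem/ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert carrying the same SANs the log reports for this profile.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-800769"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.83.98")},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-800769"},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}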
	I0416 00:59:58.692127   62139 provision.go:177] copyRemoteCerts
	I0416 00:59:58.692197   62139 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 00:59:58.692232   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:58.694858   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.695231   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.695278   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.695507   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:58.695693   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.695852   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:58.695994   62139 sshutil.go:53] new ssh client: &{IP:192.168.83.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769/id_rsa Username:docker}
	I0416 00:59:58.783458   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0416 00:59:58.811124   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0416 00:59:58.836495   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0416 00:59:58.862044   62139 provision.go:87] duration metric: took 274.632117ms to configureAuth
	I0416 00:59:58.862068   62139 buildroot.go:189] setting minikube options for container-runtime
	I0416 00:59:58.862278   62139 config.go:182] Loaded profile config "old-k8s-version-800769": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0416 00:59:58.862361   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:58.865352   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.865795   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.865829   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.866043   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:58.866228   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.866435   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.866625   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:58.866805   62139 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:58.867008   62139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.83.98 22 <nil> <nil>}
	I0416 00:59:58.867026   62139 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0416 00:59:59.143874   62139 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0416 00:59:59.143900   62139 machine.go:97] duration metric: took 953.218972ms to provisionDockerMachine
	I0416 00:59:59.143914   62139 start.go:293] postStartSetup for "old-k8s-version-800769" (driver="kvm2")
	I0416 00:59:59.143927   62139 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 00:59:59.143972   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	I0416 00:59:59.144277   62139 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 00:59:59.144302   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:59.147021   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.147355   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:59.147385   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.147649   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:59.147871   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:59.148036   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:59.148174   62139 sshutil.go:53] new ssh client: &{IP:192.168.83.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769/id_rsa Username:docker}
	I0416 00:59:59.236981   62139 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 00:59:59.241388   62139 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 00:59:59.241411   62139 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/addons for local assets ...
	I0416 00:59:59.241469   62139 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/files for local assets ...
	I0416 00:59:59.241534   62139 filesync.go:149] local asset: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem -> 148972.pem in /etc/ssl/certs
	I0416 00:59:59.241619   62139 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 00:59:59.251688   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /etc/ssl/certs/148972.pem (1708 bytes)
	I0416 00:59:59.275189   62139 start.go:296] duration metric: took 131.262042ms for postStartSetup
	I0416 00:59:59.275227   62139 fix.go:56] duration metric: took 18.605201288s for fixHost
	I0416 00:59:59.275250   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:59.277804   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.278153   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:59.278186   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.278341   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:59.278581   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:59.278741   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:59.278908   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:59.279068   62139 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:59.279233   62139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.83.98 22 <nil> <nil>}
	I0416 00:59:59.279243   62139 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0416 00:59:59.394108   62139 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713229199.360202150
	
	I0416 00:59:59.394141   62139 fix.go:216] guest clock: 1713229199.360202150
	I0416 00:59:59.394152   62139 fix.go:229] Guest: 2024-04-16 00:59:59.36020215 +0000 UTC Remote: 2024-04-16 00:59:59.27523174 +0000 UTC m=+217.222314955 (delta=84.97041ms)
	I0416 00:59:59.394211   62139 fix.go:200] guest clock delta is within tolerance: 84.97041ms
	I0416 00:59:59.394218   62139 start.go:83] releasing machines lock for "old-k8s-version-800769", held for 18.724230851s
	I0416 00:59:59.394252   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	I0416 00:59:59.394554   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetIP
	I0416 00:59:59.397241   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.397670   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:59.397703   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.397897   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	I0416 00:59:59.398460   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	I0416 00:59:59.398650   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	I0416 00:59:59.398740   62139 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 00:59:59.398782   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:59.399049   62139 ssh_runner.go:195] Run: cat /version.json
	I0416 00:59:59.399072   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:59.401397   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.401656   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.401802   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:59.401825   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.401964   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:59.402017   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.402089   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:59.402173   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:59.402248   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:59.402320   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:59.402376   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:59.402430   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:59.402577   62139 sshutil.go:53] new ssh client: &{IP:192.168.83.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769/id_rsa Username:docker}
	I0416 00:59:59.402638   62139 sshutil.go:53] new ssh client: &{IP:192.168.83.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769/id_rsa Username:docker}
	I0416 00:59:59.481834   62139 ssh_runner.go:195] Run: systemctl --version
	I0416 00:59:59.516372   62139 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0416 00:59:59.666722   62139 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 00:59:59.674165   62139 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 00:59:59.674226   62139 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 00:59:59.695545   62139 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 00:59:59.695573   62139 start.go:494] detecting cgroup driver to use...
	I0416 00:59:59.695646   62139 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 00:59:59.715091   62139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 00:59:59.732004   62139 docker.go:217] disabling cri-docker service (if available) ...
	I0416 00:59:59.732060   62139 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 00:59:59.753217   62139 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 00:59:59.768513   62139 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 00:59:59.898693   62139 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 01:00:00.066535   62139 docker.go:233] disabling docker service ...
	I0416 01:00:00.066607   62139 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 01:00:00.084512   62139 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 01:00:00.097714   62139 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 01:00:00.232901   62139 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 01:00:00.378379   62139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0416 01:00:00.395191   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 01:00:00.416631   62139 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0416 01:00:00.416695   62139 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:00.428712   62139 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 01:00:00.428774   62139 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:00.442687   62139 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:00.454631   62139 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:00.466151   62139 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 01:00:00.478459   62139 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 01:00:00.489957   62139 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0416 01:00:00.490035   62139 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0416 01:00:00.506087   62139 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 01:00:00.518100   62139 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 01:00:00.676317   62139 ssh_runner.go:195] Run: sudo systemctl restart crio
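For context: the sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup_manager, conmon_cgroup), reloads systemd and restarts CRI-O. A minimal sketch of issuing the same edits over SSH; the runRemote helper, host and key path are illustrative assumptions, not minikube's real runner:

package main

import (
	"fmt"
	"os/exec"
)

// runRemote executes one shell command on the guest via the local ssh binary.
// Hypothetical helper; minikube's real ssh_runner has a different shape.
func runRemote(host, keyPath, command string) error {
	out, err := exec.Command("ssh", "-i", keyPath,
		"-o", "StrictHostKeyChecking=no", host, command).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%q failed: %v: %s", command, err, out)
	}
	return nil
}

func main() {
	host, key := "docker@192.168.83.98", "id_rsa" // assumptions
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	steps := []string{
		// point CRI-O at the pause image the target Kubernetes version expects
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' %s`, conf),
		// force the cgroupfs cgroup manager and a matching conmon cgroup
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
		"sudo systemctl daemon-reload",
		"sudo systemctl restart crio",
	}
	for _, s := range steps {
		if err := runRemote(host, key, s); err != nil {
			panic(err)
		}
	}
}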
	I0416 01:00:00.869766   62139 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0416 01:00:00.869855   62139 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0416 01:00:00.875363   62139 start.go:562] Will wait 60s for crictl version
	I0416 01:00:00.875424   62139 ssh_runner.go:195] Run: which crictl
	I0416 01:00:00.880947   62139 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 01:00:00.924780   62139 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0416 01:00:00.924852   62139 ssh_runner.go:195] Run: crio --version
	I0416 01:00:00.958390   62139 ssh_runner.go:195] Run: crio --version
	I0416 01:00:00.993114   62139 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0416 01:00:00.994513   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetIP
	I0416 01:00:00.997571   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 01:00:00.998032   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 01:00:00.998065   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 01:00:00.998273   62139 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0416 01:00:01.002750   62139 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 01:00:01.015709   62139 kubeadm.go:877] updating cluster {Name:old-k8s-version-800769 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-800769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.98 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 01:00:01.015810   62139 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0416 01:00:01.015853   62139 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 01:00:01.063257   62139 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0416 01:00:01.063331   62139 ssh_runner.go:195] Run: which lz4
	I0416 01:00:01.067973   62139 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0416 01:00:01.072369   62139 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0416 01:00:01.072400   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
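For context: before copying, ssh_runner probes the destination with `stat -c "%s %y"`; here the probe fails, so the ~473 MB preload tarball is transferred (earlier, storage-provisioner_v5 was skipped because the probe succeeded). A simplified sketch of that check-then-copy pattern; the real check also compares size and mtime, and the helper, host and paths below are illustrative assumptions:

package main

import (
	"fmt"
	"os/exec"
)

// ensureRemoteFile copies src to dst on the guest only when the remote stat
// probe fails, mirroring the "existence check ... skipping (exists)" lines above.
func ensureRemoteFile(host, keyPath, src, dst string) error {
	probe := exec.Command("ssh", "-i", keyPath, host,
		fmt.Sprintf(`stat -c "%%s %%y" %s`, dst))
	if probe.Run() == nil {
		fmt.Printf("copy: skipping %s (exists)\n", dst)
		return nil
	}
	scp := exec.Command("scp", "-i", keyPath, src, host+":"+dst)
	if out, err := scp.CombinedOutput(); err != nil {
		return fmt.Errorf("scp failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	_ = ensureRemoteFile("docker@192.168.83.98", "id_rsa",
		"preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4",
		"/preloaded.tar.lz4")
}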
	I0416 00:59:57.817013   61500 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0416 00:59:57.817060   61500 cache_images.go:123] Successfully loaded all cached images
	I0416 00:59:57.817073   61500 cache_images.go:92] duration metric: took 15.580967615s to LoadCachedImages
	I0416 00:59:57.817087   61500 kubeadm.go:928] updating node { 192.168.39.121 8443 v1.30.0-rc.2 crio true true} ...
	I0416 00:59:57.817241   61500 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-572602 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.121
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-rc.2 ClusterName:no-preload-572602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 00:59:57.817324   61500 ssh_runner.go:195] Run: crio config
	I0416 00:59:57.866116   61500 cni.go:84] Creating CNI manager for ""
	I0416 00:59:57.866140   61500 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 00:59:57.866154   61500 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 00:59:57.866189   61500 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.121 APIServerPort:8443 KubernetesVersion:v1.30.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-572602 NodeName:no-preload-572602 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.121"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.121 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 00:59:57.866325   61500 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.121
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-572602"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.121
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.121"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0416 00:59:57.866390   61500 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-rc.2
	I0416 00:59:57.876619   61500 binaries.go:44] Found k8s binaries, skipping transfer
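For context: the kubeadm/kubelet/kube-proxy configuration printed above is rendered per profile and filled with node-specific values before being written to /var/tmp/minikube/kubeadm.yaml.new. A minimal sketch of producing just the KubeletConfiguration stanza with text/template; the template shape and field names are assumptions for illustration and differ from minikube's real templates:

package main

import (
	"os"
	"text/template"
)

// Illustrative only: a cut-down template for the KubeletConfiguration block
// seen in the log, filled in with per-profile values.
const kubeletTmpl = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: {{.CgroupDriver}}
containerRuntimeEndpoint: {{.CRISocket}}
clusterDomain: "{{.DNSDomain}}"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
`

func main() {
	data := struct {
		CgroupDriver, CRISocket, DNSDomain string
	}{"cgroupfs", "unix:///var/run/crio/crio.sock", "cluster.local"}
	t := template.Must(template.New("kubelet").Parse(kubeletTmpl))
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}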
	I0416 00:59:57.876689   61500 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0416 00:59:57.886472   61500 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0416 00:59:57.903172   61500 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0416 00:59:57.919531   61500 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0416 00:59:57.936394   61500 ssh_runner.go:195] Run: grep 192.168.39.121	control-plane.minikube.internal$ /etc/hosts
	I0416 00:59:57.940161   61500 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.121	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 00:59:57.951997   61500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 00:59:58.089553   61500 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 00:59:58.117870   61500 certs.go:68] Setting up /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602 for IP: 192.168.39.121
	I0416 00:59:58.117926   61500 certs.go:194] generating shared ca certs ...
	I0416 00:59:58.117949   61500 certs.go:226] acquiring lock for ca certs: {Name:mkcfa1570e683d94647c63485e1bbb8cf0788316 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 00:59:58.118136   61500 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key
	I0416 00:59:58.118199   61500 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key
	I0416 00:59:58.118216   61500 certs.go:256] generating profile certs ...
	I0416 00:59:58.118351   61500 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602/client.key
	I0416 00:59:58.118446   61500 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602/apiserver.key.a3b1330f
	I0416 00:59:58.118505   61500 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602/proxy-client.key
	I0416 00:59:58.118664   61500 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem (1338 bytes)
	W0416 00:59:58.118708   61500 certs.go:480] ignoring /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897_empty.pem, impossibly tiny 0 bytes
	I0416 00:59:58.118721   61500 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem (1679 bytes)
	I0416 00:59:58.118756   61500 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem (1082 bytes)
	I0416 00:59:58.118786   61500 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem (1123 bytes)
	I0416 00:59:58.118814   61500 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem (1675 bytes)
	I0416 00:59:58.118874   61500 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem (1708 bytes)
	I0416 00:59:58.119738   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 00:59:58.150797   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 00:59:58.181693   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 00:59:58.231332   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0416 00:59:58.276528   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0416 00:59:58.301000   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0416 00:59:58.326090   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 00:59:58.350254   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0416 00:59:58.377597   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem --> /usr/share/ca-certificates/14897.pem (1338 bytes)
	I0416 00:59:58.401548   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /usr/share/ca-certificates/148972.pem (1708 bytes)
	I0416 00:59:58.425237   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 00:59:58.449748   61500 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 00:59:58.468346   61500 ssh_runner.go:195] Run: openssl version
	I0416 00:59:58.474164   61500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14897.pem && ln -fs /usr/share/ca-certificates/14897.pem /etc/ssl/certs/14897.pem"
	I0416 00:59:58.485674   61500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14897.pem
	I0416 00:59:58.490136   61500 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 23:49 /usr/share/ca-certificates/14897.pem
	I0416 00:59:58.490203   61500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14897.pem
	I0416 00:59:58.495781   61500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14897.pem /etc/ssl/certs/51391683.0"
	I0416 00:59:58.507047   61500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148972.pem && ln -fs /usr/share/ca-certificates/148972.pem /etc/ssl/certs/148972.pem"
	I0416 00:59:58.518007   61500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148972.pem
	I0416 00:59:58.522317   61500 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 23:49 /usr/share/ca-certificates/148972.pem
	I0416 00:59:58.522364   61500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148972.pem
	I0416 00:59:58.527809   61500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148972.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 00:59:58.538579   61500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 00:59:58.549188   61500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 00:59:58.553688   61500 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0416 00:59:58.553732   61500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 00:59:58.559175   61500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 00:59:58.570142   61500 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 00:59:58.574657   61500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0416 00:59:58.580560   61500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0416 00:59:58.586319   61500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0416 00:59:58.593938   61500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0416 00:59:58.599808   61500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0416 00:59:58.605583   61500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
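For context: the run of `openssl x509 -noout -in <cert> -checkend 86400` commands above asks whether each certificate expires within the next 24 hours. The same check expressed in Go with crypto/x509 (the path below is an illustrative assumption):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkend reports whether the certificate at path expires within window,
// the question `openssl x509 -noout -checkend 86400` answers in the log.
func checkend(path string, window time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	expiring, err := checkend("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", expiring)
}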
	I0416 00:59:58.611301   61500 kubeadm.go:391] StartCluster: {Name:no-preload-572602 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:no-preload-572602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 00:59:58.611385   61500 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0416 00:59:58.611439   61500 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 00:59:58.655244   61500 cri.go:89] found id: ""
	I0416 00:59:58.655315   61500 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0416 00:59:58.667067   61500 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0416 00:59:58.667082   61500 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0416 00:59:58.667088   61500 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0416 00:59:58.667128   61500 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0416 00:59:58.678615   61500 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0416 00:59:58.680097   61500 kubeconfig.go:125] found "no-preload-572602" server: "https://192.168.39.121:8443"
	I0416 00:59:58.683135   61500 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0416 00:59:58.695291   61500 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.121
	I0416 00:59:58.695323   61500 kubeadm.go:1154] stopping kube-system containers ...
	I0416 00:59:58.695337   61500 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0416 00:59:58.695380   61500 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 00:59:58.731743   61500 cri.go:89] found id: ""
	I0416 00:59:58.731832   61500 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0416 00:59:58.748125   61500 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 00:59:58.757845   61500 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 00:59:58.757865   61500 kubeadm.go:156] found existing configuration files:
	
	I0416 00:59:58.757918   61500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 00:59:58.766993   61500 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 00:59:58.767036   61500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 00:59:58.776831   61500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 00:59:58.786420   61500 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 00:59:58.786467   61500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 00:59:58.796067   61500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 00:59:58.805385   61500 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 00:59:58.805511   61500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 00:59:58.815313   61500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 00:59:58.826551   61500 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 00:59:58.826603   61500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 00:59:58.836652   61500 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 00:59:58.848671   61500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 00:59:58.967511   61500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:00.416009   61500 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.44846758s)
	I0416 01:00:00.416041   61500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:00.657784   61500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:00.741694   61500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
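The five commands above replay kubeadm's init phases one at a time against the regenerated /var/tmp/minikube/kubeadm.yaml during the control-plane restart. A simplified sketch of that sequence, assuming the phase order and binary path shown in the log (error handling trimmed):

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    )

    func main() {
    	// Phases in the order the log runs them during a restart.
    	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
    	for _, phase := range phases {
    		cmd := fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, phase)
    		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
    			log.Fatalf("phase %q failed: %v\n%s", phase, err, out)
    		}
    	}
    }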
	I0416 01:00:00.876550   61500 api_server.go:52] waiting for apiserver process to appear ...
	I0416 01:00:00.876630   61500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:01.377586   61500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:01.877647   61500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:01.950167   61500 api_server.go:72] duration metric: took 1.073614574s to wait for apiserver process to appear ...
	I0416 01:00:01.950201   61500 api_server.go:88] waiting for apiserver healthz status ...
	I0416 01:00:01.950224   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 01:00:01.950854   61500 api_server.go:269] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
	I0416 01:00:02.450437   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 00:59:59.421878   62747 main.go:141] libmachine: (embed-certs-617092) Calling .Start
	I0416 00:59:59.422036   62747 main.go:141] libmachine: (embed-certs-617092) Ensuring networks are active...
	I0416 00:59:59.422646   62747 main.go:141] libmachine: (embed-certs-617092) Ensuring network default is active
	I0416 00:59:59.422931   62747 main.go:141] libmachine: (embed-certs-617092) Ensuring network mk-embed-certs-617092 is active
	I0416 00:59:59.423360   62747 main.go:141] libmachine: (embed-certs-617092) Getting domain xml...
	I0416 00:59:59.424005   62747 main.go:141] libmachine: (embed-certs-617092) Creating domain...
	I0416 01:00:00.682582   62747 main.go:141] libmachine: (embed-certs-617092) Waiting to get IP...
	I0416 01:00:00.683684   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:00.684222   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:00.684277   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:00.684198   63257 retry.go:31] will retry after 196.582767ms: waiting for machine to come up
	I0416 01:00:00.882954   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:00.883544   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:00.883577   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:00.883482   63257 retry.go:31] will retry after 309.274692ms: waiting for machine to come up
	I0416 01:00:01.193848   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:01.194286   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:01.194325   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:01.194234   63257 retry.go:31] will retry after 379.332728ms: waiting for machine to come up
	I0416 01:00:01.574938   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:01.575371   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:01.575400   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:01.575318   63257 retry.go:31] will retry after 445.10423ms: waiting for machine to come up
	I0416 01:00:02.022081   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:02.022612   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:02.022636   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:02.022570   63257 retry.go:31] will retry after 692.025501ms: waiting for machine to come up
	I0416 01:00:02.716548   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:02.717032   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:02.717061   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:02.716992   63257 retry.go:31] will retry after 735.44304ms: waiting for machine to come up
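The DBG lines above show libmachine polling libvirt for the new domain's IP address, sleeping a growing, jittered interval between attempts. A simplified sketch of that kind of wait loop, with a stand-in lookup function in place of the actual libvirt query:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // waitForIP keeps calling lookup until it returns an address or the timeout
    // elapses, sleeping a growing, jittered delay between attempts, similar to
    // the "will retry after ..." intervals in the log.
    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 200 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := lookup(); err == nil && ip != "" {
    			return ip, nil
    		}
    		jitter := time.Duration(rand.Int63n(int64(delay / 2)))
    		time.Sleep(delay + jitter)
    		if delay < 2*time.Second {
    			delay += delay / 2
    		}
    	}
    	return "", errors.New("timed out waiting for machine to come up")
    }

    func main() {
    	// Stand-in lookup that never finds an IP, so this example times out quickly.
    	ip, err := waitForIP(func() (string, error) { return "", errors.New("no lease yet") }, 2*time.Second)
    	fmt.Println(ip, err)
    }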
	I0416 01:00:02.891638   62139 crio.go:462] duration metric: took 1.823700483s to copy over tarball
	I0416 01:00:02.891723   62139 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0416 01:00:06.137253   62139 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.245498092s)
	I0416 01:00:06.137283   62139 crio.go:469] duration metric: took 3.245614896s to extract the tarball
	I0416 01:00:06.137292   62139 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0416 01:00:06.181260   62139 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 01:00:06.224646   62139 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0416 01:00:06.224682   62139 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0416 01:00:06.224762   62139 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 01:00:06.224815   62139 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0416 01:00:06.224851   62139 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0416 01:00:06.224821   62139 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0416 01:00:06.224768   62139 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0416 01:00:06.224797   62139 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0416 01:00:06.225121   62139 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0416 01:00:06.224797   62139 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0416 01:00:06.226485   62139 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0416 01:00:06.226505   62139 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0416 01:00:06.226516   62139 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0416 01:00:06.226580   62139 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0416 01:00:06.226729   62139 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0416 01:00:06.227296   62139 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 01:00:06.227311   62139 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0416 01:00:06.227315   62139 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0416 01:00:06.397101   62139 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0416 01:00:06.431142   62139 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0416 01:00:06.433152   62139 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0416 01:00:06.433876   62139 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0416 01:00:06.434844   62139 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0416 01:00:06.441478   62139 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0416 01:00:06.441524   62139 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0416 01:00:06.441558   62139 ssh_runner.go:195] Run: which crictl
	I0416 01:00:06.450391   62139 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0416 01:00:06.506375   62139 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0416 01:00:06.540080   62139 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0416 01:00:06.540250   62139 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0416 01:00:06.540121   62139 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0416 01:00:06.540299   62139 ssh_runner.go:195] Run: which crictl
	I0416 01:00:06.540305   62139 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0416 01:00:06.540343   62139 ssh_runner.go:195] Run: which crictl
	I0416 01:00:06.613287   62139 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0416 01:00:06.613305   62139 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0416 01:00:06.613334   62139 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0416 01:00:06.613339   62139 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0416 01:00:06.613381   62139 ssh_runner.go:195] Run: which crictl
	I0416 01:00:06.613381   62139 ssh_runner.go:195] Run: which crictl
	I0416 01:00:06.613490   62139 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0416 01:00:06.613522   62139 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0416 01:00:06.613569   62139 ssh_runner.go:195] Run: which crictl
	I0416 01:00:06.613384   62139 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0416 01:00:06.613620   62139 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0416 01:00:06.613657   62139 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0416 01:00:06.613716   62139 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0416 01:00:06.613722   62139 ssh_runner.go:195] Run: which crictl
	I0416 01:00:06.613665   62139 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0416 01:00:06.619153   62139 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0416 01:00:06.638065   62139 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0416 01:00:06.734018   62139 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0416 01:00:06.734134   62139 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0416 01:00:06.749273   62139 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0416 01:00:06.750536   62139 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0416 01:00:06.750576   62139 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0416 01:00:06.750655   62139 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0416 01:00:06.750594   62139 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0416 01:00:06.790321   62139 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0416 01:00:06.803564   62139 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0416 01:00:07.060494   62139 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
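In the 62139 session above, an image "needs transfer" when the runtime's copy (if any) does not match the expected hash; stale copies are removed with crictl so the cached tarball can be loaded instead. A rough sketch of that decision, using the same podman and crictl commands visible in the log (the hash shown is illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // needsTransfer reports whether the runtime's copy of image is missing or
    // differs from the ID expected for the cached image.
    func needsTransfer(image, wantID string) bool {
    	out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
    	if err != nil {
    		return true // not present in the runtime at all
    	}
    	return strings.TrimSpace(string(out)) != wantID
    }

    func main() {
    	image := "registry.k8s.io/kube-proxy:v1.20.0"
    	// Illustrative value; in practice this comes from the cached image metadata.
    	wantID := "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc"
    	if needsTransfer(image, wantID) {
    		// Remove the stale copy so the cached tarball can be loaded in its place.
    		_ = exec.Command("sudo", "/usr/bin/crictl", "rmi", image).Run()
    		fmt.Println("would load", image, "from the local image cache")
    	}
    }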
	I0416 01:00:05.541219   61500 api_server.go:279] https://192.168.39.121:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0416 01:00:05.541261   61500 api_server.go:103] status: https://192.168.39.121:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0416 01:00:05.541279   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 01:00:05.585252   61500 api_server.go:279] https://192.168.39.121:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0416 01:00:05.585284   61500 api_server.go:103] status: https://192.168.39.121:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0416 01:00:05.950871   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 01:00:05.970682   61500 api_server.go:279] https://192.168.39.121:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0416 01:00:05.970725   61500 api_server.go:103] status: https://192.168.39.121:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0416 01:00:06.450780   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 01:00:06.457855   61500 api_server.go:279] https://192.168.39.121:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0416 01:00:06.457888   61500 api_server.go:103] status: https://192.168.39.121:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0416 01:00:06.950519   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 01:00:06.955476   61500 api_server.go:279] https://192.168.39.121:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0416 01:00:06.955505   61500 api_server.go:103] status: https://192.168.39.121:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0416 01:00:07.451155   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 01:00:07.463138   61500 api_server.go:279] https://192.168.39.121:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0416 01:00:07.463172   61500 api_server.go:103] status: https://192.168.39.121:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0416 01:00:03.453566   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:03.454098   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:03.454131   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:03.454033   63257 retry.go:31] will retry after 838.732671ms: waiting for machine to come up
	I0416 01:00:04.294692   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:04.295209   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:04.295237   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:04.295158   63257 retry.go:31] will retry after 1.302969512s: waiting for machine to come up
	I0416 01:00:05.599886   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:05.600406   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:05.600435   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:05.600378   63257 retry.go:31] will retry after 1.199501225s: waiting for machine to come up
	I0416 01:00:06.801741   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:06.802134   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:06.802153   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:06.802107   63257 retry.go:31] will retry after 1.631018672s: waiting for machine to come up
	I0416 01:00:07.951263   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 01:00:07.961911   61500 api_server.go:279] https://192.168.39.121:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0416 01:00:07.961946   61500 api_server.go:103] status: https://192.168.39.121:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0416 01:00:08.450413   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 01:00:08.458651   61500 api_server.go:279] https://192.168.39.121:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0416 01:00:08.458683   61500 api_server.go:103] status: https://192.168.39.121:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0416 01:00:08.950297   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 01:00:08.955847   61500 api_server.go:279] https://192.168.39.121:8443/healthz returned 200:
	ok
	I0416 01:00:08.964393   61500 api_server.go:141] control plane version: v1.30.0-rc.2
	I0416 01:00:08.964422   61500 api_server.go:131] duration metric: took 7.01421218s to wait for apiserver health ...
	I0416 01:00:08.964432   61500 cni.go:84] Creating CNI manager for ""
	I0416 01:00:08.964445   61500 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 01:00:08.966249   61500 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
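The 61500 session above polls https://192.168.39.121:8443/healthz roughly every half second, tolerating 403 and 500 responses until the post-start hooks finish and the endpoint returns 200. A minimal sketch of that kind of poll loop, assuming the endpoint URL from the log; TLS verification is skipped here only because this is an illustration against a bootstrapping, self-signed endpoint:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Illustration only: skip cert verification for the bootstrap endpoint.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	url := "https://192.168.39.121:8443/healthz"
    	for {
    		resp, err := client.Get(url)
    		if err != nil {
    			fmt.Println("healthz not reachable yet:", err)
    		} else {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("apiserver healthy:", string(body))
    				return
    			}
    			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }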
	I0416 01:00:07.207951   62139 cache_images.go:92] duration metric: took 983.249797ms to LoadCachedImages
	W0416 01:00:07.286619   62139 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0416 01:00:07.286654   62139 kubeadm.go:928] updating node { 192.168.83.98 8443 v1.20.0 crio true true} ...
	I0416 01:00:07.286815   62139 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-800769 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.98
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-800769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 01:00:07.286916   62139 ssh_runner.go:195] Run: crio config
	I0416 01:00:07.338016   62139 cni.go:84] Creating CNI manager for ""
	I0416 01:00:07.338038   62139 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 01:00:07.338049   62139 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 01:00:07.338072   62139 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.98 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-800769 NodeName:old-k8s-version-800769 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.98"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.98 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0416 01:00:07.338207   62139 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.98
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-800769"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.98
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.98"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0416 01:00:07.338273   62139 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0416 01:00:07.349347   62139 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 01:00:07.349432   62139 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0416 01:00:07.361389   62139 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0416 01:00:07.379714   62139 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 01:00:07.397953   62139 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0416 01:00:07.416901   62139 ssh_runner.go:195] Run: grep 192.168.83.98	control-plane.minikube.internal$ /etc/hosts
	I0416 01:00:07.420904   62139 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.98	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 01:00:07.436685   62139 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 01:00:07.567945   62139 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 01:00:07.587829   62139 certs.go:68] Setting up /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769 for IP: 192.168.83.98
	I0416 01:00:07.587858   62139 certs.go:194] generating shared ca certs ...
	I0416 01:00:07.587880   62139 certs.go:226] acquiring lock for ca certs: {Name:mkcfa1570e683d94647c63485e1bbb8cf0788316 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:00:07.588087   62139 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key
	I0416 01:00:07.588155   62139 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key
	I0416 01:00:07.588171   62139 certs.go:256] generating profile certs ...
	I0416 01:00:07.606683   62139 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/client.key
	I0416 01:00:07.606823   62139 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/apiserver.key.efc35655
	I0416 01:00:07.606872   62139 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/proxy-client.key
	I0416 01:00:07.607040   62139 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem (1338 bytes)
	W0416 01:00:07.607087   62139 certs.go:480] ignoring /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897_empty.pem, impossibly tiny 0 bytes
	I0416 01:00:07.607114   62139 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem (1679 bytes)
	I0416 01:00:07.607172   62139 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem (1082 bytes)
	I0416 01:00:07.607204   62139 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem (1123 bytes)
	I0416 01:00:07.607234   62139 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem (1675 bytes)
	I0416 01:00:07.607283   62139 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem (1708 bytes)
	I0416 01:00:07.608127   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 01:00:07.658868   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 01:00:07.703378   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 01:00:07.743203   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0416 01:00:07.787335   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0416 01:00:07.823630   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0416 01:00:07.854198   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 01:00:07.881813   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0416 01:00:07.909698   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 01:00:07.935341   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem --> /usr/share/ca-certificates/14897.pem (1338 bytes)
	I0416 01:00:07.963102   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /usr/share/ca-certificates/148972.pem (1708 bytes)
	I0416 01:00:07.989657   62139 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 01:00:08.009203   62139 ssh_runner.go:195] Run: openssl version
	I0416 01:00:08.015677   62139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 01:00:08.027077   62139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:00:08.032096   62139 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:00:08.032179   62139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:00:08.038672   62139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 01:00:08.054256   62139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14897.pem && ln -fs /usr/share/ca-certificates/14897.pem /etc/ssl/certs/14897.pem"
	I0416 01:00:08.065287   62139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14897.pem
	I0416 01:00:08.069846   62139 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 23:49 /usr/share/ca-certificates/14897.pem
	I0416 01:00:08.069907   62139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14897.pem
	I0416 01:00:08.075899   62139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14897.pem /etc/ssl/certs/51391683.0"
	I0416 01:00:08.087272   62139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148972.pem && ln -fs /usr/share/ca-certificates/148972.pem /etc/ssl/certs/148972.pem"
	I0416 01:00:08.098494   62139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148972.pem
	I0416 01:00:08.103168   62139 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 23:49 /usr/share/ca-certificates/148972.pem
	I0416 01:00:08.103246   62139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148972.pem
	I0416 01:00:08.109202   62139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148972.pem /etc/ssl/certs/3ec20f2e.0"
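	The hash-then-symlink pattern above (openssl x509 -hash followed by ln -fs to <hash>.0) is the standard OpenSSL CA-directory layout: the link name is the certificate's subject hash. A minimal sketch for one of the bundles copied in this run:

	    # b5213941.0 / 51391683.0 / 3ec20f2e.0 are derived from each certificate's subject hash.
	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"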
	I0416 01:00:08.120143   62139 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 01:00:08.125027   62139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0416 01:00:08.131716   62139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0416 01:00:08.138024   62139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0416 01:00:08.144291   62139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0416 01:00:08.150741   62139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0416 01:00:08.156931   62139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
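	The -checkend 86400 runs above ask whether each certificate stays valid for at least another 24 hours; a non-zero exit means it expires (or has expired) within that window. A sketch of the same check over the certificates listed in this run:

	    # Exit status 1 from -checkend means "expires within N seconds" (here 86400 = 24h).
	    for crt in apiserver-etcd-client apiserver-kubelet-client etcd/server \
	               etcd/healthcheck-client etcd/peer front-proxy-client; do
	      sudo openssl x509 -noout -in "/var/lib/minikube/certs/${crt}.crt" -checkend 86400 \
	        || echo "${crt}.crt expires within 24h"
	    done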
	I0416 01:00:08.163147   62139 kubeadm.go:391] StartCluster: {Name:old-k8s-version-800769 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-800769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.98 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 01:00:08.163254   62139 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0416 01:00:08.163298   62139 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 01:00:08.201923   62139 cri.go:89] found id: ""
	I0416 01:00:08.202000   62139 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0416 01:00:08.212441   62139 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0416 01:00:08.212462   62139 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0416 01:00:08.212467   62139 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0416 01:00:08.212514   62139 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0416 01:00:08.222702   62139 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0416 01:00:08.223670   62139 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-800769" does not appear in /home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0416 01:00:08.224332   62139 kubeconfig.go:62] /home/jenkins/minikube-integration/18647-7542/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-800769" cluster setting kubeconfig missing "old-k8s-version-800769" context setting]
	I0416 01:00:08.225340   62139 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/kubeconfig: {Name:mkbb3b028de7d57df8335e83f6dfa1b0eacb2fb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:00:08.343775   62139 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0416 01:00:08.355942   62139 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.83.98
	I0416 01:00:08.355986   62139 kubeadm.go:1154] stopping kube-system containers ...
	I0416 01:00:08.356007   62139 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0416 01:00:08.356081   62139 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 01:00:08.398894   62139 cri.go:89] found id: ""
	I0416 01:00:08.398976   62139 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0416 01:00:08.416343   62139 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 01:00:08.426901   62139 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 01:00:08.426926   62139 kubeadm.go:156] found existing configuration files:
	
	I0416 01:00:08.426981   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 01:00:08.437870   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 01:00:08.437942   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 01:00:08.452256   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 01:00:08.466375   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 01:00:08.466447   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 01:00:08.477246   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 01:00:08.487547   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 01:00:08.487615   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 01:00:08.504171   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 01:00:08.515265   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 01:00:08.515332   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 01:00:08.525186   62139 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 01:00:08.535381   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:08.657456   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:09.504421   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:09.781478   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:09.950913   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
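	Because existing configuration files were found, the restart path re-runs individual kubeadm init phases instead of a full kubeadm init. A sketch of the sequence logged above, assuming the v1.20.0 binaries are already staged under /var/lib/minikube/binaries:

	    # Same phase order as the log: certs, kubeconfigs, kubelet, static control-plane pods, etcd.
	    KPATH=/var/lib/minikube/binaries/v1.20.0
	    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	      sudo env PATH="$KPATH:$PATH" kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
	    done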
	I0416 01:00:10.044772   62139 api_server.go:52] waiting for apiserver process to appear ...
	I0416 01:00:10.044871   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:10.545002   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:11.045664   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:11.545083   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:12.045593   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
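	The repeated pgrep runs are a poll loop: after the control-plane phase, minikube waits for a kube-apiserver process to appear, retrying roughly every half second. A minimal sketch of the same wait:

	    # Block until the API server process shows up (add a counter to bound the wait if needed).
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null 2>&1; do
	      sleep 0.5
	    done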
	I0416 01:00:08.967643   61500 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0416 01:00:08.986743   61500 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0416 01:00:09.011229   61500 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 01:00:09.022810   61500 system_pods.go:59] 8 kube-system pods found
	I0416 01:00:09.022858   61500 system_pods.go:61] "coredns-7db6d8ff4d-xxlkb" [b1ec79ef-e16c-4feb-94ec-5dc85645867f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 01:00:09.022869   61500 system_pods.go:61] "etcd-no-preload-572602" [f29f3efe-bee4-4d8c-9d49-68008ad50a9d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0416 01:00:09.022881   61500 system_pods.go:61] "kube-apiserver-no-preload-572602" [dd740f94-bfd5-4043-9522-5b8a932690cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0416 01:00:09.022893   61500 system_pods.go:61] "kube-controller-manager-no-preload-572602" [2778e1a7-a7e3-4ad6-a265-552e78b6b195] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0416 01:00:09.022901   61500 system_pods.go:61] "kube-proxy-v9fmp" [70ab6236-c758-48eb-85a7-8f7721730a20] Running
	I0416 01:00:09.022908   61500 system_pods.go:61] "kube-scheduler-no-preload-572602" [bb8650bb-657e-49f1-9cee-4437879be44d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0416 01:00:09.022919   61500 system_pods.go:61] "metrics-server-569cc877fc-llsfr" [ad421803-6236-44df-a15d-c890a3a10dff] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 01:00:09.022925   61500 system_pods.go:61] "storage-provisioner" [ec2dd6e2-33db-4888-8945-9879821c92fc] Running
	I0416 01:00:09.022934   61500 system_pods.go:74] duration metric: took 11.661356ms to wait for pod list to return data ...
	I0416 01:00:09.022950   61500 node_conditions.go:102] verifying NodePressure condition ...
	I0416 01:00:09.027411   61500 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 01:00:09.027445   61500 node_conditions.go:123] node cpu capacity is 2
	I0416 01:00:09.027459   61500 node_conditions.go:105] duration metric: took 4.503043ms to run NodePressure ...
	I0416 01:00:09.027480   61500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:09.307796   61500 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0416 01:00:09.313534   61500 kubeadm.go:733] kubelet initialised
	I0416 01:00:09.313567   61500 kubeadm.go:734] duration metric: took 5.734401ms waiting for restarted kubelet to initialise ...
	I0416 01:00:09.313580   61500 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:00:09.320900   61500 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-xxlkb" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:09.327569   61500 pod_ready.go:97] node "no-preload-572602" hosting pod "coredns-7db6d8ff4d-xxlkb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-572602" has status "Ready":"False"
	I0416 01:00:09.327606   61500 pod_ready.go:81] duration metric: took 6.67541ms for pod "coredns-7db6d8ff4d-xxlkb" in "kube-system" namespace to be "Ready" ...
	E0416 01:00:09.327621   61500 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-572602" hosting pod "coredns-7db6d8ff4d-xxlkb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-572602" has status "Ready":"False"
	I0416 01:00:09.327633   61500 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:09.333714   61500 pod_ready.go:97] node "no-preload-572602" hosting pod "etcd-no-preload-572602" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-572602" has status "Ready":"False"
	I0416 01:00:09.333746   61500 pod_ready.go:81] duration metric: took 6.094825ms for pod "etcd-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	E0416 01:00:09.333759   61500 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-572602" hosting pod "etcd-no-preload-572602" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-572602" has status "Ready":"False"
	I0416 01:00:09.333768   61500 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:09.338980   61500 pod_ready.go:97] node "no-preload-572602" hosting pod "kube-apiserver-no-preload-572602" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-572602" has status "Ready":"False"
	I0416 01:00:09.339006   61500 pod_ready.go:81] duration metric: took 5.230122ms for pod "kube-apiserver-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	E0416 01:00:09.339017   61500 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-572602" hosting pod "kube-apiserver-no-preload-572602" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-572602" has status "Ready":"False"
	I0416 01:00:09.339033   61500 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:09.415418   61500 pod_ready.go:97] node "no-preload-572602" hosting pod "kube-controller-manager-no-preload-572602" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-572602" has status "Ready":"False"
	I0416 01:00:09.415450   61500 pod_ready.go:81] duration metric: took 76.40508ms for pod "kube-controller-manager-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	E0416 01:00:09.415462   61500 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-572602" hosting pod "kube-controller-manager-no-preload-572602" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-572602" has status "Ready":"False"
	I0416 01:00:09.415470   61500 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-v9fmp" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:09.815907   61500 pod_ready.go:92] pod "kube-proxy-v9fmp" in "kube-system" namespace has status "Ready":"True"
	I0416 01:00:09.815945   61500 pod_ready.go:81] duration metric: took 400.462786ms for pod "kube-proxy-v9fmp" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:09.815959   61500 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:11.824269   61500 pod_ready.go:102] pod "kube-scheduler-no-preload-572602" in "kube-system" namespace has status "Ready":"False"
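	The pod_ready poller above checks the Ready condition on each system-critical pod; kube-scheduler-no-preload-572602 is still reporting Ready=False at this point. A manual equivalent of the same check (sketch; assumes the no-preload-572602 context is present in the active kubeconfig):

	    # One-shot: print the Ready condition status for the pod.
	    kubectl --context no-preload-572602 -n kube-system get pod kube-scheduler-no-preload-572602 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	    # Or block for up to 4 minutes, mirroring the 4m0s budget in the log.
	    kubectl --context no-preload-572602 -n kube-system wait pod kube-scheduler-no-preload-572602 \
	      --for=condition=Ready --timeout=4m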
	I0416 01:00:08.434523   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:08.435039   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:08.435067   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:08.434988   63257 retry.go:31] will retry after 2.819136125s: waiting for machine to come up
	I0416 01:00:11.256238   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:11.256704   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:11.256722   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:11.256664   63257 retry.go:31] will retry after 3.074881299s: waiting for machine to come up
	I0416 01:00:12.545696   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:13.045935   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:13.545810   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:14.045682   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:14.545524   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:15.045110   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:15.545792   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:16.045843   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:16.545684   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:17.045401   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:14.322436   61500 pod_ready.go:102] pod "kube-scheduler-no-preload-572602" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:16.821648   61500 pod_ready.go:102] pod "kube-scheduler-no-preload-572602" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:14.335004   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:14.335391   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:14.335437   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:14.335343   63257 retry.go:31] will retry after 4.248377683s: waiting for machine to come up
	I0416 01:00:20.014452   61267 start.go:364] duration metric: took 53.932663013s to acquireMachinesLock for "default-k8s-diff-port-653942"
	I0416 01:00:20.014507   61267 start.go:96] Skipping create...Using existing machine configuration
	I0416 01:00:20.014515   61267 fix.go:54] fixHost starting: 
	I0416 01:00:20.014929   61267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:00:20.014964   61267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:00:20.033099   61267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42949
	I0416 01:00:20.033554   61267 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:00:20.034077   61267 main.go:141] libmachine: Using API Version  1
	I0416 01:00:20.034104   61267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:00:20.034458   61267 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:00:20.034665   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 01:00:20.034812   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetState
	I0416 01:00:20.036559   61267 fix.go:112] recreateIfNeeded on default-k8s-diff-port-653942: state=Stopped err=<nil>
	I0416 01:00:20.036588   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	W0416 01:00:20.036751   61267 fix.go:138] unexpected machine state, will restart: <nil>
	I0416 01:00:20.038774   61267 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-653942" ...
	I0416 01:00:18.588875   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.589320   62747 main.go:141] libmachine: (embed-certs-617092) Found IP for machine: 192.168.61.225
	I0416 01:00:18.589347   62747 main.go:141] libmachine: (embed-certs-617092) Reserving static IP address...
	I0416 01:00:18.589362   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has current primary IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.589699   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "embed-certs-617092", mac: "52:54:00:86:1b:62", ip: "192.168.61.225"} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:18.589728   62747 main.go:141] libmachine: (embed-certs-617092) Reserved static IP address: 192.168.61.225
	I0416 01:00:18.589752   62747 main.go:141] libmachine: (embed-certs-617092) DBG | skip adding static IP to network mk-embed-certs-617092 - found existing host DHCP lease matching {name: "embed-certs-617092", mac: "52:54:00:86:1b:62", ip: "192.168.61.225"}
	I0416 01:00:18.589771   62747 main.go:141] libmachine: (embed-certs-617092) DBG | Getting to WaitForSSH function...
	I0416 01:00:18.589808   62747 main.go:141] libmachine: (embed-certs-617092) Waiting for SSH to be available...
	I0416 01:00:18.591590   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.591858   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:18.591885   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.591995   62747 main.go:141] libmachine: (embed-certs-617092) DBG | Using SSH client type: external
	I0416 01:00:18.592027   62747 main.go:141] libmachine: (embed-certs-617092) DBG | Using SSH private key: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/embed-certs-617092/id_rsa (-rw-------)
	I0416 01:00:18.592058   62747 main.go:141] libmachine: (embed-certs-617092) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.225 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18647-7542/.minikube/machines/embed-certs-617092/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0416 01:00:18.592072   62747 main.go:141] libmachine: (embed-certs-617092) DBG | About to run SSH command:
	I0416 01:00:18.592084   62747 main.go:141] libmachine: (embed-certs-617092) DBG | exit 0
	I0416 01:00:18.717336   62747 main.go:141] libmachine: (embed-certs-617092) DBG | SSH cmd err, output: <nil>: 
	I0416 01:00:18.717759   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetConfigRaw
	I0416 01:00:18.718347   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetIP
	I0416 01:00:18.720640   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.721040   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:18.721086   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.721300   62747 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092/config.json ...
	I0416 01:00:18.721481   62747 machine.go:94] provisionDockerMachine start ...
	I0416 01:00:18.721501   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 01:00:18.721700   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:00:18.723610   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.723924   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:18.723946   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.724126   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:00:18.724345   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:18.724512   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:18.724616   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:00:18.724737   62747 main.go:141] libmachine: Using SSH client type: native
	I0416 01:00:18.725049   62747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.225 22 <nil> <nil>}
	I0416 01:00:18.725199   62747 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 01:00:18.834014   62747 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 01:00:18.834041   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetMachineName
	I0416 01:00:18.834257   62747 buildroot.go:166] provisioning hostname "embed-certs-617092"
	I0416 01:00:18.834280   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetMachineName
	I0416 01:00:18.834495   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:00:18.836959   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.837282   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:18.837333   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.837417   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:00:18.837588   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:18.837755   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:18.837962   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:00:18.838152   62747 main.go:141] libmachine: Using SSH client type: native
	I0416 01:00:18.838324   62747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.225 22 <nil> <nil>}
	I0416 01:00:18.838342   62747 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-617092 && echo "embed-certs-617092" | sudo tee /etc/hostname
	I0416 01:00:18.959828   62747 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-617092
	
	I0416 01:00:18.959865   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:00:18.962661   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.962997   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:18.963029   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.963174   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:00:18.963351   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:18.963488   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:18.963609   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:00:18.963747   62747 main.go:141] libmachine: Using SSH client type: native
	I0416 01:00:18.963949   62747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.225 22 <nil> <nil>}
	I0416 01:00:18.963967   62747 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-617092' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-617092/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-617092' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 01:00:19.079309   62747 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 01:00:19.079341   62747 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18647-7542/.minikube CaCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18647-7542/.minikube}
	I0416 01:00:19.079400   62747 buildroot.go:174] setting up certificates
	I0416 01:00:19.079409   62747 provision.go:84] configureAuth start
	I0416 01:00:19.079423   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetMachineName
	I0416 01:00:19.079723   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetIP
	I0416 01:00:19.082430   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.082809   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:19.082838   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.082994   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:00:19.085476   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.085802   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:19.085825   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.085952   62747 provision.go:143] copyHostCerts
	I0416 01:00:19.086006   62747 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem, removing ...
	I0416 01:00:19.086022   62747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem
	I0416 01:00:19.086077   62747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem (1123 bytes)
	I0416 01:00:19.086165   62747 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem, removing ...
	I0416 01:00:19.086174   62747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem
	I0416 01:00:19.086193   62747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem (1675 bytes)
	I0416 01:00:19.086244   62747 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem, removing ...
	I0416 01:00:19.086251   62747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem
	I0416 01:00:19.086270   62747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem (1082 bytes)
	I0416 01:00:19.086336   62747 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem org=jenkins.embed-certs-617092 san=[127.0.0.1 192.168.61.225 embed-certs-617092 localhost minikube]
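	provision.go signs a server certificate with minikube's own CA, valid for the SANs listed above (127.0.0.1, 192.168.61.225, embed-certs-617092, localhost, minikube). As a rough stand-in only, a self-signed certificate with the same SAN set can be produced with openssl (this is not minikube's CA-signed code path):

	    # Self-signed sketch; minikube instead signs with .minikube/certs/ca.pem / ca-key.pem.
	    # -addext requires OpenSSL 1.1.1 or newer.
	    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
	      -keyout server-key.pem -out server.pem \
	      -subj "/O=jenkins.embed-certs-617092" \
	      -addext "subjectAltName=IP:127.0.0.1,IP:192.168.61.225,DNS:embed-certs-617092,DNS:localhost,DNS:minikube"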
	I0416 01:00:19.330622   62747 provision.go:177] copyRemoteCerts
	I0416 01:00:19.330687   62747 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 01:00:19.330712   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:00:19.333264   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.333618   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:19.333645   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.333798   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:00:19.333979   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:19.334122   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:00:19.334235   62747 sshutil.go:53] new ssh client: &{IP:192.168.61.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/embed-certs-617092/id_rsa Username:docker}
	I0416 01:00:19.415820   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0416 01:00:19.442985   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0416 01:00:19.468427   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0416 01:00:19.496640   62747 provision.go:87] duration metric: took 417.215523ms to configureAuth
	I0416 01:00:19.496676   62747 buildroot.go:189] setting minikube options for container-runtime
	I0416 01:00:19.496857   62747 config.go:182] Loaded profile config "embed-certs-617092": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 01:00:19.496929   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:00:19.499561   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.499933   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:19.499981   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.500132   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:00:19.500352   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:19.500529   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:19.500671   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:00:19.500823   62747 main.go:141] libmachine: Using SSH client type: native
	I0416 01:00:19.501026   62747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.225 22 <nil> <nil>}
	I0416 01:00:19.501046   62747 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0416 01:00:19.775400   62747 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0416 01:00:19.775434   62747 machine.go:97] duration metric: took 1.053938445s to provisionDockerMachine
	I0416 01:00:19.775448   62747 start.go:293] postStartSetup for "embed-certs-617092" (driver="kvm2")
	I0416 01:00:19.775462   62747 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 01:00:19.775484   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 01:00:19.775853   62747 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 01:00:19.775886   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:00:19.778961   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.779327   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:19.779356   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.779510   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:00:19.779723   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:19.779883   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:00:19.780008   62747 sshutil.go:53] new ssh client: &{IP:192.168.61.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/embed-certs-617092/id_rsa Username:docker}
	I0416 01:00:19.865236   62747 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 01:00:19.869769   62747 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 01:00:19.869800   62747 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/addons for local assets ...
	I0416 01:00:19.869865   62747 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/files for local assets ...
	I0416 01:00:19.870010   62747 filesync.go:149] local asset: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem -> 148972.pem in /etc/ssl/certs
	I0416 01:00:19.870111   62747 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 01:00:19.880477   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /etc/ssl/certs/148972.pem (1708 bytes)
	I0416 01:00:19.905555   62747 start.go:296] duration metric: took 130.091868ms for postStartSetup
	I0416 01:00:19.905603   62747 fix.go:56] duration metric: took 20.511199999s for fixHost
	I0416 01:00:19.905629   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:00:19.908252   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.908593   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:19.908631   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.908770   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:00:19.908972   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:19.909129   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:19.909284   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:00:19.909448   62747 main.go:141] libmachine: Using SSH client type: native
	I0416 01:00:19.909607   62747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.225 22 <nil> <nil>}
	I0416 01:00:19.909622   62747 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 01:00:20.014222   62747 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713229219.981820926
	
	I0416 01:00:20.014251   62747 fix.go:216] guest clock: 1713229219.981820926
	I0416 01:00:20.014262   62747 fix.go:229] Guest: 2024-04-16 01:00:19.981820926 +0000 UTC Remote: 2024-04-16 01:00:19.90560817 +0000 UTC m=+97.152894999 (delta=76.212756ms)
	I0416 01:00:20.014331   62747 fix.go:200] guest clock delta is within tolerance: 76.212756ms
	I0416 01:00:20.014339   62747 start.go:83] releasing machines lock for "embed-certs-617092", held for 20.619971021s
	I0416 01:00:20.014377   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 01:00:20.014676   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetIP
	I0416 01:00:20.017771   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:20.018204   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:20.018236   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:20.018446   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 01:00:20.018991   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 01:00:20.019172   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 01:00:20.019260   62747 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 01:00:20.019299   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:00:20.019439   62747 ssh_runner.go:195] Run: cat /version.json
	I0416 01:00:20.019466   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:00:20.022283   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:20.022554   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:20.022664   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:20.022688   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:20.022897   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:00:20.023088   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:20.023150   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:20.023177   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:20.023281   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:00:20.023431   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:00:20.023431   62747 sshutil.go:53] new ssh client: &{IP:192.168.61.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/embed-certs-617092/id_rsa Username:docker}
	I0416 01:00:20.023791   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:20.023942   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:00:20.024084   62747 sshutil.go:53] new ssh client: &{IP:192.168.61.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/embed-certs-617092/id_rsa Username:docker}
	I0416 01:00:20.138251   62747 ssh_runner.go:195] Run: systemctl --version
	I0416 01:00:20.145100   62747 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0416 01:00:20.299049   62747 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 01:00:20.307080   62747 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 01:00:20.307177   62747 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 01:00:20.326056   62747 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 01:00:20.326085   62747 start.go:494] detecting cgroup driver to use...
	I0416 01:00:20.326166   62747 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 01:00:20.343297   62747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 01:00:20.358136   62747 docker.go:217] disabling cri-docker service (if available) ...
	I0416 01:00:20.358201   62747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 01:00:20.372936   62747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 01:00:20.387473   62747 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 01:00:20.515721   62747 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 01:00:20.680319   62747 docker.go:233] disabling docker service ...
	I0416 01:00:20.680413   62747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 01:00:20.700816   62747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 01:00:20.724097   62747 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 01:00:20.885812   62747 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 01:00:21.037890   62747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0416 01:00:21.055670   62747 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 01:00:21.078466   62747 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0416 01:00:21.078533   62747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:21.090135   62747 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 01:00:21.090200   62747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:21.106122   62747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:21.123844   62747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:21.134923   62747 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 01:00:21.153565   62747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:21.164751   62747 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:21.184880   62747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:21.197711   62747 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 01:00:21.208615   62747 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0416 01:00:21.208669   62747 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0416 01:00:21.223906   62747 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 01:00:21.234873   62747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 01:00:21.405921   62747 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0416 01:00:21.564833   62747 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0416 01:00:21.564918   62747 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0416 01:00:21.570592   62747 start.go:562] Will wait 60s for crictl version
	I0416 01:00:21.570660   62747 ssh_runner.go:195] Run: which crictl
	I0416 01:00:21.575339   62747 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 01:00:21.617252   62747 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0416 01:00:21.617348   62747 ssh_runner.go:195] Run: crio --version
	I0416 01:00:21.648662   62747 ssh_runner.go:195] Run: crio --version
	I0416 01:00:21.683775   62747 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0416 01:00:17.544937   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:18.045282   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:18.545707   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:19.045821   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:19.545868   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:20.045069   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:20.545134   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:21.045607   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:21.545366   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:22.044998   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:20.040137   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .Start
	I0416 01:00:20.040355   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Ensuring networks are active...
	I0416 01:00:20.041103   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Ensuring network default is active
	I0416 01:00:20.041469   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Ensuring network mk-default-k8s-diff-port-653942 is active
	I0416 01:00:20.041869   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Getting domain xml...
	I0416 01:00:20.042474   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Creating domain...
	I0416 01:00:21.359375   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting to get IP...
	I0416 01:00:21.360333   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:21.360736   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:21.360807   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:21.360726   63461 retry.go:31] will retry after 290.970715ms: waiting for machine to come up
	I0416 01:00:21.653420   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:21.653883   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:21.653916   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:21.653841   63461 retry.go:31] will retry after 361.304618ms: waiting for machine to come up
	I0416 01:00:22.016540   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:22.017038   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:22.017071   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:22.016976   63461 retry.go:31] will retry after 411.249327ms: waiting for machine to come up
	I0416 01:00:18.322778   61500 pod_ready.go:92] pod "kube-scheduler-no-preload-572602" in "kube-system" namespace has status "Ready":"True"
	I0416 01:00:18.322799   61500 pod_ready.go:81] duration metric: took 8.506833323s for pod "kube-scheduler-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:18.322808   61500 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:20.328344   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:22.331157   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:21.685033   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetIP
	I0416 01:00:21.688407   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:21.688774   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:21.688809   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:21.689010   62747 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0416 01:00:21.693612   62747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 01:00:21.707524   62747 kubeadm.go:877] updating cluster {Name:embed-certs-617092 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.3 ClusterName:embed-certs-617092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.225 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 01:00:21.707657   62747 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 01:00:21.707699   62747 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 01:00:21.748697   62747 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0416 01:00:21.748785   62747 ssh_runner.go:195] Run: which lz4
	I0416 01:00:21.753521   62747 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0416 01:00:21.758125   62747 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0416 01:00:21.758158   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0416 01:00:22.545403   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:23.045303   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:23.544984   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:24.045882   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:24.545194   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:25.045010   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:25.545278   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:26.045702   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:26.545233   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:27.045814   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:22.429595   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:22.430124   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:22.430159   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:22.430087   63461 retry.go:31] will retry after 495.681984ms: waiting for machine to come up
	I0416 01:00:22.927476   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:22.927932   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:22.927959   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:22.927875   63461 retry.go:31] will retry after 506.264557ms: waiting for machine to come up
	I0416 01:00:23.435290   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:23.435742   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:23.435773   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:23.435689   63461 retry.go:31] will retry after 826.359716ms: waiting for machine to come up
	I0416 01:00:24.263672   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:24.264151   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:24.264183   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:24.264107   63461 retry.go:31] will retry after 873.35176ms: waiting for machine to come up
	I0416 01:00:25.138864   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:25.139318   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:25.139340   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:25.139308   63461 retry.go:31] will retry after 1.129546887s: waiting for machine to come up
	I0416 01:00:26.270364   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:26.270968   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:26.271000   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:26.270902   63461 retry.go:31] will retry after 1.441466368s: waiting for machine to come up
	I0416 01:00:24.830562   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:26.832057   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:23.353811   62747 crio.go:462] duration metric: took 1.600325005s to copy over tarball
	I0416 01:00:23.353885   62747 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0416 01:00:25.815443   62747 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.46152973s)
	I0416 01:00:25.815479   62747 crio.go:469] duration metric: took 2.461639439s to extract the tarball
	I0416 01:00:25.815489   62747 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0416 01:00:25.862653   62747 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 01:00:25.914416   62747 crio.go:514] all images are preloaded for cri-o runtime.
	I0416 01:00:25.914444   62747 cache_images.go:84] Images are preloaded, skipping loading
	I0416 01:00:25.914454   62747 kubeadm.go:928] updating node { 192.168.61.225 8443 v1.29.3 crio true true} ...
	I0416 01:00:25.914586   62747 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-617092 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.225
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:embed-certs-617092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 01:00:25.914680   62747 ssh_runner.go:195] Run: crio config
	I0416 01:00:25.970736   62747 cni.go:84] Creating CNI manager for ""
	I0416 01:00:25.970760   62747 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 01:00:25.970773   62747 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 01:00:25.970796   62747 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.225 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-617092 NodeName:embed-certs-617092 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.225"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.225 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 01:00:25.970949   62747 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.225
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-617092"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.225
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.225"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0416 01:00:25.971022   62747 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0416 01:00:25.985111   62747 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 01:00:25.985198   62747 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0416 01:00:25.996306   62747 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0416 01:00:26.013401   62747 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 01:00:26.030094   62747 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0416 01:00:26.048252   62747 ssh_runner.go:195] Run: grep 192.168.61.225	control-plane.minikube.internal$ /etc/hosts
	I0416 01:00:26.052717   62747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.225	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 01:00:26.069538   62747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 01:00:26.205867   62747 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 01:00:26.224210   62747 certs.go:68] Setting up /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092 for IP: 192.168.61.225
	I0416 01:00:26.224237   62747 certs.go:194] generating shared ca certs ...
	I0416 01:00:26.224259   62747 certs.go:226] acquiring lock for ca certs: {Name:mkcfa1570e683d94647c63485e1bbb8cf0788316 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:00:26.224459   62747 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key
	I0416 01:00:26.224520   62747 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key
	I0416 01:00:26.224532   62747 certs.go:256] generating profile certs ...
	I0416 01:00:26.224646   62747 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092/client.key
	I0416 01:00:26.224723   62747 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092/apiserver.key.383097d4
	I0416 01:00:26.224773   62747 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092/proxy-client.key
	I0416 01:00:26.224932   62747 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem (1338 bytes)
	W0416 01:00:26.224973   62747 certs.go:480] ignoring /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897_empty.pem, impossibly tiny 0 bytes
	I0416 01:00:26.224982   62747 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem (1679 bytes)
	I0416 01:00:26.225014   62747 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem (1082 bytes)
	I0416 01:00:26.225050   62747 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem (1123 bytes)
	I0416 01:00:26.225085   62747 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem (1675 bytes)
	I0416 01:00:26.225126   62747 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem (1708 bytes)
	I0416 01:00:26.225872   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 01:00:26.282272   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 01:00:26.329827   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 01:00:26.366744   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0416 01:00:26.405845   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0416 01:00:26.440535   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0416 01:00:26.465371   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 01:00:26.491633   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0416 01:00:26.518682   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 01:00:26.543992   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem --> /usr/share/ca-certificates/14897.pem (1338 bytes)
	I0416 01:00:26.573728   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /usr/share/ca-certificates/148972.pem (1708 bytes)
	I0416 01:00:26.602308   62747 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 01:00:26.622491   62747 ssh_runner.go:195] Run: openssl version
	I0416 01:00:26.628805   62747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 01:00:26.643163   62747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:00:26.648292   62747 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:00:26.648351   62747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:00:26.654890   62747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 01:00:26.668501   62747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14897.pem && ln -fs /usr/share/ca-certificates/14897.pem /etc/ssl/certs/14897.pem"
	I0416 01:00:26.682038   62747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14897.pem
	I0416 01:00:26.687327   62747 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 23:49 /usr/share/ca-certificates/14897.pem
	I0416 01:00:26.687388   62747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14897.pem
	I0416 01:00:26.693557   62747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14897.pem /etc/ssl/certs/51391683.0"
	I0416 01:00:26.706161   62747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148972.pem && ln -fs /usr/share/ca-certificates/148972.pem /etc/ssl/certs/148972.pem"
	I0416 01:00:26.718432   62747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148972.pem
	I0416 01:00:26.722989   62747 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 23:49 /usr/share/ca-certificates/148972.pem
	I0416 01:00:26.723050   62747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148972.pem
	I0416 01:00:26.729311   62747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148972.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 01:00:26.744138   62747 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 01:00:26.749490   62747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0416 01:00:26.756478   62747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0416 01:00:26.763326   62747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0416 01:00:26.770194   62747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0416 01:00:26.776641   62747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0416 01:00:26.783022   62747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0416 01:00:26.789543   62747 kubeadm.go:391] StartCluster: {Name:embed-certs-617092 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29
.3 ClusterName:embed-certs-617092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.225 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 01:00:26.789654   62747 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0416 01:00:26.789717   62747 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 01:00:26.831148   62747 cri.go:89] found id: ""
	I0416 01:00:26.831219   62747 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0416 01:00:26.844372   62747 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0416 01:00:26.844398   62747 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0416 01:00:26.844403   62747 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0416 01:00:26.844454   62747 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0416 01:00:26.858173   62747 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0416 01:00:26.859210   62747 kubeconfig.go:125] found "embed-certs-617092" server: "https://192.168.61.225:8443"
	I0416 01:00:26.861233   62747 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0416 01:00:26.874068   62747 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.225
	I0416 01:00:26.874105   62747 kubeadm.go:1154] stopping kube-system containers ...
	I0416 01:00:26.874119   62747 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0416 01:00:26.874177   62747 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 01:00:26.926456   62747 cri.go:89] found id: ""
	I0416 01:00:26.926537   62747 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0416 01:00:26.945874   62747 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 01:00:26.960207   62747 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 01:00:26.960229   62747 kubeadm.go:156] found existing configuration files:
	
	I0416 01:00:26.960282   62747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 01:00:26.971895   62747 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 01:00:26.971958   62747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 01:00:26.982956   62747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 01:00:26.993935   62747 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 01:00:26.994000   62747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 01:00:27.005216   62747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 01:00:27.015624   62747 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 01:00:27.015680   62747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 01:00:27.026513   62747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 01:00:27.037062   62747 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 01:00:27.037118   62747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 01:00:27.048173   62747 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 01:00:27.061987   62747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:27.190243   62747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:27.545025   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:28.045752   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:28.545833   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:29.045264   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:29.545316   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:30.045594   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:30.545046   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:31.045139   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:31.545251   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:32.045710   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:27.714372   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:27.714822   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:27.714854   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:27.714767   63461 retry.go:31] will retry after 1.810511131s: waiting for machine to come up
	I0416 01:00:29.527497   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:29.528041   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:29.528072   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:29.527983   63461 retry.go:31] will retry after 2.163921338s: waiting for machine to come up
	I0416 01:00:31.694203   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:31.694741   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:31.694769   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:31.694714   63461 retry.go:31] will retry after 2.245150923s: waiting for machine to come up
	I0416 01:00:29.332159   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:31.332218   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:28.252295   62747 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.062013928s)
	I0416 01:00:28.252331   62747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:28.468110   62747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:28.553370   62747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:28.676185   62747 api_server.go:52] waiting for apiserver process to appear ...
	I0416 01:00:28.676273   62747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:29.176826   62747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:29.676498   62747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:29.702138   62747 api_server.go:72] duration metric: took 1.025950998s to wait for apiserver process to appear ...
	I0416 01:00:29.702170   62747 api_server.go:88] waiting for apiserver healthz status ...
	I0416 01:00:29.702192   62747 api_server.go:253] Checking apiserver healthz at https://192.168.61.225:8443/healthz ...
	I0416 01:00:29.702822   62747 api_server.go:269] stopped: https://192.168.61.225:8443/healthz: Get "https://192.168.61.225:8443/healthz": dial tcp 192.168.61.225:8443: connect: connection refused
	I0416 01:00:30.203298   62747 api_server.go:253] Checking apiserver healthz at https://192.168.61.225:8443/healthz ...
	I0416 01:00:32.951714   62747 api_server.go:279] https://192.168.61.225:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0416 01:00:32.951754   62747 api_server.go:103] status: https://192.168.61.225:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0416 01:00:32.951779   62747 api_server.go:253] Checking apiserver healthz at https://192.168.61.225:8443/healthz ...
	I0416 01:00:33.003631   62747 api_server.go:279] https://192.168.61.225:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0416 01:00:33.003672   62747 api_server.go:103] status: https://192.168.61.225:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0416 01:00:33.202825   62747 api_server.go:253] Checking apiserver healthz at https://192.168.61.225:8443/healthz ...
	I0416 01:00:33.208168   62747 api_server.go:279] https://192.168.61.225:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0416 01:00:33.208201   62747 api_server.go:103] status: https://192.168.61.225:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0416 01:00:33.702532   62747 api_server.go:253] Checking apiserver healthz at https://192.168.61.225:8443/healthz ...
	I0416 01:00:33.712501   62747 api_server.go:279] https://192.168.61.225:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0416 01:00:33.712542   62747 api_server.go:103] status: https://192.168.61.225:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0416 01:00:34.203157   62747 api_server.go:253] Checking apiserver healthz at https://192.168.61.225:8443/healthz ...
	I0416 01:00:34.210567   62747 api_server.go:279] https://192.168.61.225:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0416 01:00:34.210597   62747 api_server.go:103] status: https://192.168.61.225:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0416 01:00:34.702568   62747 api_server.go:253] Checking apiserver healthz at https://192.168.61.225:8443/healthz ...
	I0416 01:00:34.711690   62747 api_server.go:279] https://192.168.61.225:8443/healthz returned 200:
	ok
	I0416 01:00:34.723252   62747 api_server.go:141] control plane version: v1.29.3
	I0416 01:00:34.723279   62747 api_server.go:131] duration metric: took 5.021102658s to wait for apiserver health ...
	I0416 01:00:34.723287   62747 cni.go:84] Creating CNI manager for ""
	I0416 01:00:34.723293   62747 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 01:00:34.724989   62747 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0416 01:00:32.545963   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:33.045020   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:33.545657   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:34.045706   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:34.544972   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:35.045252   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:35.545087   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:36.045080   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:36.545787   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:37.045046   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:33.942412   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:33.942923   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:33.942952   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:33.942870   63461 retry.go:31] will retry after 3.750613392s: waiting for machine to come up
	I0416 01:00:33.829307   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:35.830613   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:34.726400   62747 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0416 01:00:34.746294   62747 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0416 01:00:34.767028   62747 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 01:00:34.778610   62747 system_pods.go:59] 8 kube-system pods found
	I0416 01:00:34.778653   62747 system_pods.go:61] "coredns-76f75df574-dxzhk" [a71b29ec-8602-47d6-825c-a1a54a1758d0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 01:00:34.778664   62747 system_pods.go:61] "etcd-embed-certs-617092" [8966501b-6a06-4e0b-acb6-77df5f53cd3d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0416 01:00:34.778674   62747 system_pods.go:61] "kube-apiserver-embed-certs-617092" [7ad29687-3964-4a5b-8939-bcf3dc71d578] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0416 01:00:34.778685   62747 system_pods.go:61] "kube-controller-manager-embed-certs-617092" [78b21361-f302-43f3-8356-ea15fad4edb3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0416 01:00:34.778695   62747 system_pods.go:61] "kube-proxy-xtdf4" [4e8fe1da-9a02-428e-94f1-595f2e9170e0] Running
	I0416 01:00:34.778703   62747 system_pods.go:61] "kube-scheduler-embed-certs-617092" [c03d87b4-26d3-4bff-8f53-8844260f1ed8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0416 01:00:34.778720   62747 system_pods.go:61] "metrics-server-57f55c9bc5-knnvn" [4607d12d-25db-4637-be17-e2665970c0a4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 01:00:34.778729   62747 system_pods.go:61] "storage-provisioner" [41362b6c-fde7-45fa-b6cf-1d7acef3d4ce] Running
	I0416 01:00:34.778741   62747 system_pods.go:74] duration metric: took 11.690083ms to wait for pod list to return data ...
	I0416 01:00:34.778755   62747 node_conditions.go:102] verifying NodePressure condition ...
	I0416 01:00:34.782283   62747 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 01:00:34.782319   62747 node_conditions.go:123] node cpu capacity is 2
	I0416 01:00:34.782329   62747 node_conditions.go:105] duration metric: took 3.566074ms to run NodePressure ...
	I0416 01:00:34.782344   62747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:35.056194   62747 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0416 01:00:35.068546   62747 kubeadm.go:733] kubelet initialised
	I0416 01:00:35.068571   62747 kubeadm.go:734] duration metric: took 12.345347ms waiting for restarted kubelet to initialise ...
	I0416 01:00:35.068581   62747 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:00:35.075013   62747 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-dxzhk" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:37.081976   62747 pod_ready.go:102] pod "coredns-76f75df574-dxzhk" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:37.697323   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:37.697830   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has current primary IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:37.697857   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Found IP for machine: 192.168.50.216
	I0416 01:00:37.697873   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Reserving static IP address...
	I0416 01:00:37.698323   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Reserved static IP address: 192.168.50.216
	I0416 01:00:37.698345   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for SSH to be available...
	I0416 01:00:37.698372   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-653942", mac: "52:54:00:4b:a2:47", ip: "192.168.50.216"} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:37.698418   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | skip adding static IP to network mk-default-k8s-diff-port-653942 - found existing host DHCP lease matching {name: "default-k8s-diff-port-653942", mac: "52:54:00:4b:a2:47", ip: "192.168.50.216"}
	I0416 01:00:37.698450   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | Getting to WaitForSSH function...
	I0416 01:00:37.700942   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:37.701312   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:37.701346   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:37.701520   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | Using SSH client type: external
	I0416 01:00:37.701567   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | Using SSH private key: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/default-k8s-diff-port-653942/id_rsa (-rw-------)
	I0416 01:00:37.701621   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.216 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18647-7542/.minikube/machines/default-k8s-diff-port-653942/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0416 01:00:37.701676   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | About to run SSH command:
	I0416 01:00:37.701712   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | exit 0
	I0416 01:00:37.829860   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | SSH cmd err, output: <nil>: 
	I0416 01:00:37.830254   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetConfigRaw
	I0416 01:00:37.830931   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetIP
	I0416 01:00:37.833361   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:37.833755   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:37.833788   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:37.834026   61267 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/config.json ...
	I0416 01:00:37.834198   61267 machine.go:94] provisionDockerMachine start ...
	I0416 01:00:37.834214   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 01:00:37.834426   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:00:37.836809   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:37.837221   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:37.837251   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:37.837377   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:00:37.837588   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:37.837737   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:37.837869   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:00:37.838023   61267 main.go:141] libmachine: Using SSH client type: native
	I0416 01:00:37.838208   61267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.216 22 <nil> <nil>}
	I0416 01:00:37.838219   61267 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 01:00:37.950999   61267 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 01:00:37.951031   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetMachineName
	I0416 01:00:37.951271   61267 buildroot.go:166] provisioning hostname "default-k8s-diff-port-653942"
	I0416 01:00:37.951303   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetMachineName
	I0416 01:00:37.951483   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:00:37.954395   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:37.954730   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:37.954755   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:37.954949   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:00:37.955165   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:37.955344   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:37.955549   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:00:37.955756   61267 main.go:141] libmachine: Using SSH client type: native
	I0416 01:00:37.955980   61267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.216 22 <nil> <nil>}
	I0416 01:00:37.956001   61267 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-653942 && echo "default-k8s-diff-port-653942" | sudo tee /etc/hostname
	I0416 01:00:38.085650   61267 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-653942
	
	I0416 01:00:38.085682   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:00:38.088689   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.089031   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:38.089060   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.089297   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:00:38.089474   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:38.089623   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:38.089780   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:00:38.089948   61267 main.go:141] libmachine: Using SSH client type: native
	I0416 01:00:38.090127   61267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.216 22 <nil> <nil>}
	I0416 01:00:38.090146   61267 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-653942' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-653942/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-653942' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 01:00:38.214653   61267 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 01:00:38.214734   61267 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18647-7542/.minikube CaCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18647-7542/.minikube}
	I0416 01:00:38.214760   61267 buildroot.go:174] setting up certificates
	I0416 01:00:38.214773   61267 provision.go:84] configureAuth start
	I0416 01:00:38.214785   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetMachineName
	I0416 01:00:38.215043   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetIP
	I0416 01:00:38.217744   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.218145   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:38.218174   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.218336   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:00:38.220861   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.221187   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:38.221216   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.221343   61267 provision.go:143] copyHostCerts
	I0416 01:00:38.221405   61267 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem, removing ...
	I0416 01:00:38.221426   61267 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem
	I0416 01:00:38.221492   61267 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem (1082 bytes)
	I0416 01:00:38.221638   61267 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem, removing ...
	I0416 01:00:38.221649   61267 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem
	I0416 01:00:38.221685   61267 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem (1123 bytes)
	I0416 01:00:38.221777   61267 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem, removing ...
	I0416 01:00:38.221787   61267 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem
	I0416 01:00:38.221815   61267 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem (1675 bytes)
	I0416 01:00:38.221887   61267 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-653942 san=[127.0.0.1 192.168.50.216 default-k8s-diff-port-653942 localhost minikube]
	I0416 01:00:38.266327   61267 provision.go:177] copyRemoteCerts
	I0416 01:00:38.266390   61267 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 01:00:38.266422   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:00:38.269080   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.269546   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:38.269583   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.269901   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:00:38.270115   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:38.270259   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:00:38.270444   61267 sshutil.go:53] new ssh client: &{IP:192.168.50.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/default-k8s-diff-port-653942/id_rsa Username:docker}
	I0416 01:00:38.352861   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0416 01:00:38.380995   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0416 01:00:38.405746   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0416 01:00:38.431467   61267 provision.go:87] duration metric: took 216.680985ms to configureAuth
	I0416 01:00:38.431502   61267 buildroot.go:189] setting minikube options for container-runtime
	I0416 01:00:38.431674   61267 config.go:182] Loaded profile config "default-k8s-diff-port-653942": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 01:00:38.431740   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:00:38.434444   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.434867   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:38.434909   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.435032   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:00:38.435245   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:38.435380   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:38.435568   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:00:38.435744   61267 main.go:141] libmachine: Using SSH client type: native
	I0416 01:00:38.435948   61267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.216 22 <nil> <nil>}
	I0416 01:00:38.435974   61267 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0416 01:00:38.729392   61267 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0416 01:00:38.729421   61267 machine.go:97] duration metric: took 895.211347ms to provisionDockerMachine
	I0416 01:00:38.729432   61267 start.go:293] postStartSetup for "default-k8s-diff-port-653942" (driver="kvm2")
	I0416 01:00:38.729442   61267 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 01:00:38.729463   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 01:00:38.729802   61267 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 01:00:38.729826   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:00:38.732755   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.733135   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:38.733181   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.733326   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:00:38.733490   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:38.733649   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:00:38.733784   61267 sshutil.go:53] new ssh client: &{IP:192.168.50.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/default-k8s-diff-port-653942/id_rsa Username:docker}
	I0416 01:00:38.819006   61267 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 01:00:38.823781   61267 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 01:00:38.823804   61267 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/addons for local assets ...
	I0416 01:00:38.823870   61267 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/files for local assets ...
	I0416 01:00:38.823967   61267 filesync.go:149] local asset: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem -> 148972.pem in /etc/ssl/certs
	I0416 01:00:38.824077   61267 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 01:00:38.833958   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /etc/ssl/certs/148972.pem (1708 bytes)
	I0416 01:00:38.859934   61267 start.go:296] duration metric: took 130.488205ms for postStartSetup
	I0416 01:00:38.859973   61267 fix.go:56] duration metric: took 18.845458863s for fixHost
	I0416 01:00:38.859992   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:00:38.862557   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.862889   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:38.862927   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.863016   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:00:38.863236   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:38.863426   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:38.863609   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:00:38.863786   61267 main.go:141] libmachine: Using SSH client type: native
	I0416 01:00:38.863951   61267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.216 22 <nil> <nil>}
	I0416 01:00:38.863961   61267 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 01:00:38.970405   61267 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713229238.936521840
	
	I0416 01:00:38.970431   61267 fix.go:216] guest clock: 1713229238.936521840
	I0416 01:00:38.970440   61267 fix.go:229] Guest: 2024-04-16 01:00:38.93652184 +0000 UTC Remote: 2024-04-16 01:00:38.859976379 +0000 UTC m=+356.490123424 (delta=76.545461ms)
	I0416 01:00:38.970489   61267 fix.go:200] guest clock delta is within tolerance: 76.545461ms
	I0416 01:00:38.970496   61267 start.go:83] releasing machines lock for "default-k8s-diff-port-653942", held for 18.956013216s
	I0416 01:00:38.970522   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 01:00:38.970806   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetIP
	I0416 01:00:38.973132   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.973440   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:38.973455   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.973646   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 01:00:38.974142   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 01:00:38.974332   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 01:00:38.974388   61267 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 01:00:38.974432   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:00:38.974532   61267 ssh_runner.go:195] Run: cat /version.json
	I0416 01:00:38.974556   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:00:38.977284   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.977459   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.977624   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:38.977653   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.977746   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:38.977774   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.977800   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:00:38.978002   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:00:38.978017   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:38.978163   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:38.978169   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:00:38.978296   61267 sshutil.go:53] new ssh client: &{IP:192.168.50.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/default-k8s-diff-port-653942/id_rsa Username:docker}
	I0416 01:00:38.978314   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:00:38.978440   61267 sshutil.go:53] new ssh client: &{IP:192.168.50.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/default-k8s-diff-port-653942/id_rsa Username:docker}
	I0416 01:00:39.090827   61267 ssh_runner.go:195] Run: systemctl --version
	I0416 01:00:39.097716   61267 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0416 01:00:39.249324   61267 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 01:00:39.256333   61267 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 01:00:39.256402   61267 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 01:00:39.272367   61267 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 01:00:39.272395   61267 start.go:494] detecting cgroup driver to use...
	I0416 01:00:39.272446   61267 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 01:00:39.291713   61267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 01:00:39.305645   61267 docker.go:217] disabling cri-docker service (if available) ...
	I0416 01:00:39.305708   61267 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 01:00:39.320731   61267 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 01:00:39.336917   61267 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 01:00:39.450840   61267 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 01:00:39.596905   61267 docker.go:233] disabling docker service ...
	I0416 01:00:39.596972   61267 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 01:00:39.612926   61267 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 01:00:39.627583   61267 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 01:00:39.778135   61267 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 01:00:39.900216   61267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0416 01:00:39.914697   61267 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 01:00:39.935875   61267 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0416 01:00:39.935930   61267 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:39.946510   61267 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 01:00:39.946569   61267 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:39.956794   61267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:39.966968   61267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:39.977207   61267 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 01:00:39.988817   61267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:40.001088   61267 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:40.018950   61267 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:40.030395   61267 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 01:00:40.039956   61267 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0416 01:00:40.040013   61267 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0416 01:00:40.053877   61267 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 01:00:40.065292   61267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 01:00:40.221527   61267 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0416 01:00:40.382800   61267 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0416 01:00:40.382880   61267 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0416 01:00:40.387842   61267 start.go:562] Will wait 60s for crictl version
	I0416 01:00:40.387897   61267 ssh_runner.go:195] Run: which crictl
	I0416 01:00:40.393774   61267 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 01:00:40.435784   61267 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0416 01:00:40.435864   61267 ssh_runner.go:195] Run: crio --version
	I0416 01:00:40.468702   61267 ssh_runner.go:195] Run: crio --version
	I0416 01:00:40.501355   61267 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0416 01:00:37.545192   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:38.045346   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:38.545599   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:39.045109   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:39.545360   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:40.045058   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:40.545745   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:41.045943   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:41.545900   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:42.045807   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:40.502716   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetIP
	I0416 01:00:40.505958   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:40.506353   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:40.506384   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:40.506597   61267 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0416 01:00:40.511238   61267 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 01:00:40.525378   61267 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-653942 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-653942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.216 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 01:00:40.525519   61267 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 01:00:40.525586   61267 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 01:00:40.570378   61267 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0416 01:00:40.570451   61267 ssh_runner.go:195] Run: which lz4
	I0416 01:00:40.575413   61267 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0416 01:00:40.580583   61267 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0416 01:00:40.580640   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0416 01:00:42.194745   61267 crio.go:462] duration metric: took 1.619375861s to copy over tarball
	I0416 01:00:42.194821   61267 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0416 01:00:37.830710   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:39.831822   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:42.330821   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:39.086761   62747 pod_ready.go:102] pod "coredns-76f75df574-dxzhk" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:40.082847   62747 pod_ready.go:92] pod "coredns-76f75df574-dxzhk" in "kube-system" namespace has status "Ready":"True"
	I0416 01:00:40.082868   62747 pod_ready.go:81] duration metric: took 5.007825454s for pod "coredns-76f75df574-dxzhk" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:40.082877   62747 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:42.092402   62747 pod_ready.go:92] pod "etcd-embed-certs-617092" in "kube-system" namespace has status "Ready":"True"
	I0416 01:00:42.092425   62747 pod_ready.go:81] duration metric: took 2.009541778s for pod "etcd-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:42.092438   62747 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:42.545278   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:43.045894   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:43.545886   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:44.044964   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:44.544997   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:45.045340   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:45.545257   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:46.045108   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:46.544994   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:47.045987   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:44.671272   61267 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.476407392s)
	I0416 01:00:44.671304   61267 crio.go:469] duration metric: took 2.476532286s to extract the tarball
	I0416 01:00:44.671315   61267 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0416 01:00:44.709451   61267 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 01:00:44.754382   61267 crio.go:514] all images are preloaded for cri-o runtime.
	I0416 01:00:44.754412   61267 cache_images.go:84] Images are preloaded, skipping loading
	I0416 01:00:44.754424   61267 kubeadm.go:928] updating node { 192.168.50.216 8444 v1.29.3 crio true true} ...
	I0416 01:00:44.754543   61267 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-653942 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.216
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-653942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 01:00:44.754613   61267 ssh_runner.go:195] Run: crio config
	I0416 01:00:44.806896   61267 cni.go:84] Creating CNI manager for ""
	I0416 01:00:44.806918   61267 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 01:00:44.806926   61267 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 01:00:44.806957   61267 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.216 APIServerPort:8444 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-653942 NodeName:default-k8s-diff-port-653942 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.216"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.216 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 01:00:44.807089   61267 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.216
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-653942"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.216
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.216"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
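The restart path only reconfigures the node when the rendered kubeadm.yaml differs from the copy already installed; the staged file is written to kubeadm.yaml.new and diffed a few lines further down. A minimal sketch of the same check run by hand, using the profile name and paths shown in this log:

	# inspect the config currently installed on the node
	minikube ssh -p default-k8s-diff-port-653942 -- sudo cat /var/tmp/minikube/kubeadm.yaml
	# compare it with the freshly staged copy; no output means no reconfiguration is needed
	minikube ssh -p default-k8s-diff-port-653942 -- sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
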
	I0416 01:00:44.807144   61267 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0416 01:00:44.821347   61267 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 01:00:44.821425   61267 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0416 01:00:44.835415   61267 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0416 01:00:44.855797   61267 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 01:00:44.873694   61267 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0416 01:00:44.892535   61267 ssh_runner.go:195] Run: grep 192.168.50.216	control-plane.minikube.internal$ /etc/hosts
	I0416 01:00:44.896538   61267 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.216	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 01:00:44.909516   61267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 01:00:45.024588   61267 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 01:00:45.055414   61267 certs.go:68] Setting up /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942 for IP: 192.168.50.216
	I0416 01:00:45.055440   61267 certs.go:194] generating shared ca certs ...
	I0416 01:00:45.055460   61267 certs.go:226] acquiring lock for ca certs: {Name:mkcfa1570e683d94647c63485e1bbb8cf0788316 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:00:45.055622   61267 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key
	I0416 01:00:45.055680   61267 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key
	I0416 01:00:45.055695   61267 certs.go:256] generating profile certs ...
	I0416 01:00:45.055815   61267 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/client.key
	I0416 01:00:45.055905   61267 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/apiserver.key.6620f6bf
	I0416 01:00:45.055975   61267 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/proxy-client.key
	I0416 01:00:45.056139   61267 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem (1338 bytes)
	W0416 01:00:45.056185   61267 certs.go:480] ignoring /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897_empty.pem, impossibly tiny 0 bytes
	I0416 01:00:45.056195   61267 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem (1679 bytes)
	I0416 01:00:45.056234   61267 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem (1082 bytes)
	I0416 01:00:45.056268   61267 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem (1123 bytes)
	I0416 01:00:45.056295   61267 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem (1675 bytes)
	I0416 01:00:45.056355   61267 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem (1708 bytes)
	I0416 01:00:45.057033   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 01:00:45.091704   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 01:00:45.154257   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 01:00:45.181077   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0416 01:00:45.222401   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0416 01:00:45.248568   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0416 01:00:45.277927   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 01:00:45.310417   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0416 01:00:45.341109   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 01:00:45.367056   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem --> /usr/share/ca-certificates/14897.pem (1338 bytes)
	I0416 01:00:45.395117   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /usr/share/ca-certificates/148972.pem (1708 bytes)
	I0416 01:00:45.421921   61267 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 01:00:45.440978   61267 ssh_runner.go:195] Run: openssl version
	I0416 01:00:45.447132   61267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148972.pem && ln -fs /usr/share/ca-certificates/148972.pem /etc/ssl/certs/148972.pem"
	I0416 01:00:45.460008   61267 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148972.pem
	I0416 01:00:45.464820   61267 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 23:49 /usr/share/ca-certificates/148972.pem
	I0416 01:00:45.464884   61267 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148972.pem
	I0416 01:00:45.471232   61267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148972.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 01:00:45.482567   61267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 01:00:45.493541   61267 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:00:45.498792   61267 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:00:45.498849   61267 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:00:45.505511   61267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 01:00:45.517533   61267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14897.pem && ln -fs /usr/share/ca-certificates/14897.pem /etc/ssl/certs/14897.pem"
	I0416 01:00:45.529908   61267 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14897.pem
	I0416 01:00:45.535120   61267 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 23:49 /usr/share/ca-certificates/14897.pem
	I0416 01:00:45.535181   61267 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14897.pem
	I0416 01:00:45.541232   61267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14897.pem /etc/ssl/certs/51391683.0"
	I0416 01:00:45.552946   61267 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 01:00:45.559947   61267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0416 01:00:45.567567   61267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0416 01:00:45.575204   61267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0416 01:00:45.582057   61267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0416 01:00:45.588418   61267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0416 01:00:45.595517   61267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
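Each probe above is the same openssl invocation applied to a different client or serving certificate; a compact form of the check, restricted to the cert names that appear in this log, would be:

	# flag any control-plane certificate that expires within the next 24h (86400s)
	for c in apiserver apiserver-etcd-client apiserver-kubelet-client etcd/server etcd/healthcheck-client etcd/peer front-proxy-client; do
	  sudo openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/$c.crt" \
	    || echo "WARNING: $c.crt expires within 24h"
	done
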
	I0416 01:00:45.602108   61267 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-653942 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-653942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.216 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 01:00:45.602213   61267 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0416 01:00:45.602256   61267 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 01:00:45.639538   61267 cri.go:89] found id: ""
	I0416 01:00:45.639621   61267 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0416 01:00:45.651216   61267 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0416 01:00:45.651245   61267 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0416 01:00:45.651252   61267 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0416 01:00:45.651307   61267 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0416 01:00:45.662522   61267 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0416 01:00:45.663697   61267 kubeconfig.go:125] found "default-k8s-diff-port-653942" server: "https://192.168.50.216:8444"
	I0416 01:00:45.666034   61267 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0416 01:00:45.675864   61267 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.216
	I0416 01:00:45.675900   61267 kubeadm.go:1154] stopping kube-system containers ...
	I0416 01:00:45.675927   61267 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0416 01:00:45.675992   61267 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 01:00:45.718679   61267 cri.go:89] found id: ""
	I0416 01:00:45.718744   61267 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0416 01:00:45.737326   61267 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 01:00:45.748122   61267 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 01:00:45.748146   61267 kubeadm.go:156] found existing configuration files:
	
	I0416 01:00:45.748200   61267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0416 01:00:45.758556   61267 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 01:00:45.758618   61267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 01:00:45.769601   61267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0416 01:00:45.779361   61267 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 01:00:45.779424   61267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 01:00:45.789283   61267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0416 01:00:45.798712   61267 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 01:00:45.798805   61267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 01:00:45.808489   61267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0416 01:00:45.817400   61267 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 01:00:45.817469   61267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
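The four grep/rm pairs above are the stale-kubeconfig cleanup: any kubeconfig that does not point at the expected control-plane endpoint is removed so kubeadm can regenerate it. The same step written as one loop, using only the file names and endpoint from this log:

	# drop kubeconfigs that do not reference the expected control-plane endpoint
	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "https://control-plane.minikube.internal:8444" "/etc/kubernetes/$f.conf" \
	    || sudo rm -f "/etc/kubernetes/$f.conf"
	done
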
	I0416 01:00:45.827902   61267 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 01:00:45.838031   61267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:45.962948   61267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:46.862340   61267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:47.092144   61267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:47.170078   61267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
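Rather than a full kubeadm init, the restart replays individual init phases against the staged config. The sequence above, spelled out as the underlying commands (binary and config paths exactly as logged; the two shell variables are just shorthand introduced here):

	KPATH=/var/lib/minikube/binaries/v1.29.3
	CFG=/var/tmp/minikube/kubeadm.yaml
	sudo env PATH="$KPATH:$PATH" kubeadm init phase certs all         --config "$CFG"
	sudo env PATH="$KPATH:$PATH" kubeadm init phase kubeconfig all    --config "$CFG"
	sudo env PATH="$KPATH:$PATH" kubeadm init phase kubelet-start     --config "$CFG"
	sudo env PATH="$KPATH:$PATH" kubeadm init phase control-plane all --config "$CFG"
	sudo env PATH="$KPATH:$PATH" kubeadm init phase etcd local        --config "$CFG"
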
	I0416 01:00:47.284634   61267 api_server.go:52] waiting for apiserver process to appear ...
	I0416 01:00:47.284719   61267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:44.830534   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:47.474148   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:44.100441   62747 pod_ready.go:102] pod "kube-apiserver-embed-certs-617092" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:47.472666   62747 pod_ready.go:102] pod "kube-apiserver-embed-certs-617092" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:47.599694   62747 pod_ready.go:92] pod "kube-apiserver-embed-certs-617092" in "kube-system" namespace has status "Ready":"True"
	I0416 01:00:47.599722   62747 pod_ready.go:81] duration metric: took 5.507276982s for pod "kube-apiserver-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:47.599734   62747 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:47.604479   62747 pod_ready.go:92] pod "kube-controller-manager-embed-certs-617092" in "kube-system" namespace has status "Ready":"True"
	I0416 01:00:47.604496   62747 pod_ready.go:81] duration metric: took 4.755735ms for pod "kube-controller-manager-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:47.604504   62747 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-xtdf4" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:47.608936   62747 pod_ready.go:92] pod "kube-proxy-xtdf4" in "kube-system" namespace has status "Ready":"True"
	I0416 01:00:47.608951   62747 pod_ready.go:81] duration metric: took 4.441482ms for pod "kube-proxy-xtdf4" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:47.608959   62747 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:47.613108   62747 pod_ready.go:92] pod "kube-scheduler-embed-certs-617092" in "kube-system" namespace has status "Ready":"True"
	I0416 01:00:47.613123   62747 pod_ready.go:81] duration metric: took 4.157722ms for pod "kube-scheduler-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:47.613130   62747 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:47.545567   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:48.045898   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:48.545631   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:49.045678   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:49.545274   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:50.045281   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:50.545926   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:51.045076   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:51.545303   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:52.045271   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:47.785698   61267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:48.284828   61267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:48.315894   61267 api_server.go:72] duration metric: took 1.031258915s to wait for apiserver process to appear ...
	I0416 01:00:48.315925   61267 api_server.go:88] waiting for apiserver healthz status ...
	I0416 01:00:48.315950   61267 api_server.go:253] Checking apiserver healthz at https://192.168.50.216:8444/healthz ...
	I0416 01:00:51.781922   61267 api_server.go:279] https://192.168.50.216:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0416 01:00:51.781957   61267 api_server.go:103] status: https://192.168.50.216:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0416 01:00:51.781976   61267 api_server.go:253] Checking apiserver healthz at https://192.168.50.216:8444/healthz ...
	I0416 01:00:51.830460   61267 api_server.go:279] https://192.168.50.216:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0416 01:00:51.830491   61267 api_server.go:103] status: https://192.168.50.216:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0416 01:00:51.830505   61267 api_server.go:253] Checking apiserver healthz at https://192.168.50.216:8444/healthz ...
	I0416 01:00:51.858205   61267 api_server.go:279] https://192.168.50.216:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0416 01:00:51.858240   61267 api_server.go:103] status: https://192.168.50.216:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0416 01:00:52.316376   61267 api_server.go:253] Checking apiserver healthz at https://192.168.50.216:8444/healthz ...
	I0416 01:00:52.332667   61267 api_server.go:279] https://192.168.50.216:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0416 01:00:52.332700   61267 api_server.go:103] status: https://192.168.50.216:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0416 01:00:49.829236   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:52.329805   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:49.620626   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:51.620730   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:52.816565   61267 api_server.go:253] Checking apiserver healthz at https://192.168.50.216:8444/healthz ...
	I0416 01:00:52.827158   61267 api_server.go:279] https://192.168.50.216:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0416 01:00:52.827191   61267 api_server.go:103] status: https://192.168.50.216:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0416 01:00:53.316864   61267 api_server.go:253] Checking apiserver healthz at https://192.168.50.216:8444/healthz ...
	I0416 01:00:53.321112   61267 api_server.go:279] https://192.168.50.216:8444/healthz returned 200:
	ok
	I0416 01:00:53.329289   61267 api_server.go:141] control plane version: v1.29.3
	I0416 01:00:53.329320   61267 api_server.go:131] duration metric: took 5.013387579s to wait for apiserver health ...
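The early 403s come from anonymous requests being rejected before the RBAC bootstrap roles exist, and the 500s simply list which post-start hooks are still pending; once every hook reports ok the endpoint returns 200. A sketch of probing the same endpoint by hand, assuming admin.conf has been regenerated by the kubeconfig phase above:

	# unauthenticated probe of the same endpoint
	curl -sk https://192.168.50.216:8444/healthz
	# authenticated probe; ?verbose breaks the result down per post-start hook
	sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/etc/kubernetes/admin.conf get --raw='/healthz?verbose'
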
	I0416 01:00:53.329331   61267 cni.go:84] Creating CNI manager for ""
	I0416 01:00:53.329340   61267 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 01:00:53.331125   61267 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0416 01:00:52.545407   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:53.044961   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:53.545290   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:54.044994   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:54.545292   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:55.045285   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:55.545909   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:56.045029   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:56.545343   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:57.044988   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:53.332626   61267 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0416 01:00:53.366364   61267 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0416 01:00:53.401881   61267 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 01:00:53.413478   61267 system_pods.go:59] 8 kube-system pods found
	I0416 01:00:53.413512   61267 system_pods.go:61] "coredns-76f75df574-cvlpq" [c200d470-26dd-40ea-a79b-29d9104122bb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 01:00:53.413527   61267 system_pods.go:61] "etcd-default-k8s-diff-port-653942" [24e85fc2-fb57-4ef6-9817-846207109e61] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0416 01:00:53.413537   61267 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-653942" [bd473e94-72a6-4391-b787-49e16e8a213f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0416 01:00:53.413547   61267 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-653942" [31ed7183-a12b-422c-9e67-bba91147347a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0416 01:00:53.413555   61267 system_pods.go:61] "kube-proxy-6q9k7" [ba6d9cf9-37a5-4e01-9489-ce7395fd2a38] Running
	I0416 01:00:53.413563   61267 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-653942" [4b481275-4ded-4251-963f-910954f10d15] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0416 01:00:53.413579   61267 system_pods.go:61] "metrics-server-57f55c9bc5-9cnv2" [24905ded-5bf8-4b34-8069-2e65c5ad8f8d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 01:00:53.413592   61267 system_pods.go:61] "storage-provisioner" [16ba28d0-2031-4c21-9c22-1b9289517449] Running
	I0416 01:00:53.413601   61267 system_pods.go:74] duration metric: took 11.695334ms to wait for pod list to return data ...
	I0416 01:00:53.413613   61267 node_conditions.go:102] verifying NodePressure condition ...
	I0416 01:00:53.417579   61267 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 01:00:53.417609   61267 node_conditions.go:123] node cpu capacity is 2
	I0416 01:00:53.417623   61267 node_conditions.go:105] duration metric: took 4.002735ms to run NodePressure ...
	I0416 01:00:53.417642   61267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:53.688389   61267 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0416 01:00:53.692755   61267 kubeadm.go:733] kubelet initialised
	I0416 01:00:53.692777   61267 kubeadm.go:734] duration metric: took 4.359298ms waiting for restarted kubelet to initialise ...
	I0416 01:00:53.692784   61267 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:00:53.698521   61267 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-cvlpq" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:53.704496   61267 pod_ready.go:97] node "default-k8s-diff-port-653942" hosting pod "coredns-76f75df574-cvlpq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653942" has status "Ready":"False"
	I0416 01:00:53.704532   61267 pod_ready.go:81] duration metric: took 5.98382ms for pod "coredns-76f75df574-cvlpq" in "kube-system" namespace to be "Ready" ...
	E0416 01:00:53.704543   61267 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-653942" hosting pod "coredns-76f75df574-cvlpq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653942" has status "Ready":"False"
	I0416 01:00:53.704550   61267 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:53.713110   61267 pod_ready.go:97] node "default-k8s-diff-port-653942" hosting pod "etcd-default-k8s-diff-port-653942" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653942" has status "Ready":"False"
	I0416 01:00:53.713144   61267 pod_ready.go:81] duration metric: took 8.58568ms for pod "etcd-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	E0416 01:00:53.713188   61267 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-653942" hosting pod "etcd-default-k8s-diff-port-653942" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653942" has status "Ready":"False"
	I0416 01:00:53.713201   61267 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:53.718190   61267 pod_ready.go:97] node "default-k8s-diff-port-653942" hosting pod "kube-apiserver-default-k8s-diff-port-653942" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653942" has status "Ready":"False"
	I0416 01:00:53.718210   61267 pod_ready.go:81] duration metric: took 4.997527ms for pod "kube-apiserver-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	E0416 01:00:53.718219   61267 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-653942" hosting pod "kube-apiserver-default-k8s-diff-port-653942" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653942" has status "Ready":"False"
	I0416 01:00:53.718224   61267 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:53.805697   61267 pod_ready.go:97] node "default-k8s-diff-port-653942" hosting pod "kube-controller-manager-default-k8s-diff-port-653942" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653942" has status "Ready":"False"
	I0416 01:00:53.805727   61267 pod_ready.go:81] duration metric: took 87.493805ms for pod "kube-controller-manager-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	E0416 01:00:53.805738   61267 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-653942" hosting pod "kube-controller-manager-default-k8s-diff-port-653942" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653942" has status "Ready":"False"
	I0416 01:00:53.805743   61267 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6q9k7" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:54.205884   61267 pod_ready.go:92] pod "kube-proxy-6q9k7" in "kube-system" namespace has status "Ready":"True"
	I0416 01:00:54.205911   61267 pod_ready.go:81] duration metric: took 400.161115ms for pod "kube-proxy-6q9k7" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:54.205921   61267 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:56.213276   61267 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"False"
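The pod_ready polling above is the per-pod Ready-condition wait minikube performs internally; the kubectl equivalent for the same pod, with the label taken from the wait list earlier in this log (the context name is assumed to match the profile name):

	kubectl --context default-k8s-diff-port-653942 -n kube-system wait \
	  --for=condition=Ready pod -l component=kube-scheduler --timeout=4m
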
	I0416 01:00:54.829391   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:57.330218   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:54.119995   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:56.121220   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:57.545333   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:58.045305   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:58.545871   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:59.045432   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:59.545000   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:00.045001   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:00.545855   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:01.045812   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:01.545477   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:02.045635   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:58.215064   61267 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:00.215192   61267 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:59.330599   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:01.831017   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:58.620594   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:01.120516   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:02.545690   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:03.045754   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:03.544965   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:04.045062   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:04.545196   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:05.045986   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:05.545246   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:06.045853   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:06.545863   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:07.045209   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:02.712971   61267 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:04.713437   61267 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:07.212886   61267 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:04.328673   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:06.329726   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:03.124343   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:05.619912   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:07.622044   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:07.544952   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:08.045290   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:08.545296   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:09.045795   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:09.545932   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:10.045124   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:10.045209   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:10.087200   62139 cri.go:89] found id: ""
	I0416 01:01:10.087229   62139 logs.go:276] 0 containers: []
	W0416 01:01:10.087237   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:10.087243   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:10.087300   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:10.126194   62139 cri.go:89] found id: ""
	I0416 01:01:10.126218   62139 logs.go:276] 0 containers: []
	W0416 01:01:10.126225   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:10.126230   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:10.126275   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:10.165238   62139 cri.go:89] found id: ""
	I0416 01:01:10.165271   62139 logs.go:276] 0 containers: []
	W0416 01:01:10.165282   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:10.165290   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:10.165357   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:10.202896   62139 cri.go:89] found id: ""
	I0416 01:01:10.202934   62139 logs.go:276] 0 containers: []
	W0416 01:01:10.202945   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:10.202952   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:10.203015   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:10.243576   62139 cri.go:89] found id: ""
	I0416 01:01:10.243605   62139 logs.go:276] 0 containers: []
	W0416 01:01:10.243613   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:10.243619   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:10.243667   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:10.278637   62139 cri.go:89] found id: ""
	I0416 01:01:10.278661   62139 logs.go:276] 0 containers: []
	W0416 01:01:10.278669   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:10.278674   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:10.278726   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:10.316811   62139 cri.go:89] found id: ""
	I0416 01:01:10.316844   62139 logs.go:276] 0 containers: []
	W0416 01:01:10.316852   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:10.316857   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:10.316914   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:10.359934   62139 cri.go:89] found id: ""
	I0416 01:01:10.359960   62139 logs.go:276] 0 containers: []
	W0416 01:01:10.359967   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:10.359975   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:10.359987   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:10.413082   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:10.413119   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:10.428605   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:10.428632   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:10.552536   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:10.552561   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:10.552578   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:10.615054   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:10.615091   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
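With no kube-system containers running yet, log collection falls back from crictl to the systemd units; the same commands gathered above can be run directly on the node:

	sudo crictl ps -a --quiet --name=kube-apiserver   # empty while the apiserver has not started
	sudo journalctl -u kubelet -n 400                 # kubelet unit logs
	sudo journalctl -u crio -n 400                    # CRI-O unit logs
	sudo crictl ps -a                                 # overall container status
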
	I0416 01:01:08.213557   61267 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"True"
	I0416 01:01:08.213584   61267 pod_ready.go:81] duration metric: took 14.007657025s for pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:01:08.213594   61267 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace to be "Ready" ...
	I0416 01:01:10.224984   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:08.831515   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:11.330529   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:10.122213   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:12.621939   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:13.160749   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:13.178449   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:13.178505   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:13.224192   62139 cri.go:89] found id: ""
	I0416 01:01:13.224215   62139 logs.go:276] 0 containers: []
	W0416 01:01:13.224222   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:13.224228   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:13.224287   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:13.261441   62139 cri.go:89] found id: ""
	I0416 01:01:13.261469   62139 logs.go:276] 0 containers: []
	W0416 01:01:13.261476   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:13.261481   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:13.261545   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:13.296602   62139 cri.go:89] found id: ""
	I0416 01:01:13.296636   62139 logs.go:276] 0 containers: []
	W0416 01:01:13.296647   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:13.296654   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:13.296720   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:13.333944   62139 cri.go:89] found id: ""
	I0416 01:01:13.333968   62139 logs.go:276] 0 containers: []
	W0416 01:01:13.333977   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:13.333984   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:13.334049   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:13.372919   62139 cri.go:89] found id: ""
	I0416 01:01:13.372944   62139 logs.go:276] 0 containers: []
	W0416 01:01:13.372957   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:13.372965   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:13.373022   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:13.413257   62139 cri.go:89] found id: ""
	I0416 01:01:13.413287   62139 logs.go:276] 0 containers: []
	W0416 01:01:13.413299   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:13.413306   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:13.413373   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:13.451705   62139 cri.go:89] found id: ""
	I0416 01:01:13.451737   62139 logs.go:276] 0 containers: []
	W0416 01:01:13.451748   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:13.451755   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:13.451836   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:13.492549   62139 cri.go:89] found id: ""
	I0416 01:01:13.492576   62139 logs.go:276] 0 containers: []
	W0416 01:01:13.492586   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:13.492597   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:13.492613   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:13.547267   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:13.547303   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:13.568975   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:13.569002   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:13.674444   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:13.674469   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:13.674482   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:13.745111   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:13.745145   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:16.286955   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:16.301151   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:16.301257   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:16.337516   62139 cri.go:89] found id: ""
	I0416 01:01:16.337544   62139 logs.go:276] 0 containers: []
	W0416 01:01:16.337554   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:16.337561   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:16.337623   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:16.372674   62139 cri.go:89] found id: ""
	I0416 01:01:16.372702   62139 logs.go:276] 0 containers: []
	W0416 01:01:16.372712   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:16.372720   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:16.372783   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:16.411181   62139 cri.go:89] found id: ""
	I0416 01:01:16.411208   62139 logs.go:276] 0 containers: []
	W0416 01:01:16.411224   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:16.411230   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:16.411283   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:16.449063   62139 cri.go:89] found id: ""
	I0416 01:01:16.449102   62139 logs.go:276] 0 containers: []
	W0416 01:01:16.449109   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:16.449114   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:16.449183   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:16.491877   62139 cri.go:89] found id: ""
	I0416 01:01:16.491909   62139 logs.go:276] 0 containers: []
	W0416 01:01:16.491918   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:16.491924   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:16.491981   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:16.532522   62139 cri.go:89] found id: ""
	I0416 01:01:16.532553   62139 logs.go:276] 0 containers: []
	W0416 01:01:16.532564   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:16.532572   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:16.532633   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:16.572194   62139 cri.go:89] found id: ""
	I0416 01:01:16.572222   62139 logs.go:276] 0 containers: []
	W0416 01:01:16.572233   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:16.572240   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:16.572302   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:16.614671   62139 cri.go:89] found id: ""
	I0416 01:01:16.614697   62139 logs.go:276] 0 containers: []
	W0416 01:01:16.614704   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:16.614712   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:16.614726   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:16.632146   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:16.632179   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:16.707597   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:16.707621   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:16.707633   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:16.783604   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:16.783640   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:16.828937   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:16.828977   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:12.721088   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:15.220256   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:17.222263   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:13.830983   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:16.329120   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:15.119386   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:17.120038   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:19.385008   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:19.400949   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:19.401035   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:19.463792   62139 cri.go:89] found id: ""
	I0416 01:01:19.463825   62139 logs.go:276] 0 containers: []
	W0416 01:01:19.463836   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:19.463843   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:19.463910   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:19.523289   62139 cri.go:89] found id: ""
	I0416 01:01:19.523322   62139 logs.go:276] 0 containers: []
	W0416 01:01:19.523332   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:19.523340   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:19.523392   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:19.558891   62139 cri.go:89] found id: ""
	I0416 01:01:19.558928   62139 logs.go:276] 0 containers: []
	W0416 01:01:19.558939   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:19.558946   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:19.559009   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:19.597876   62139 cri.go:89] found id: ""
	I0416 01:01:19.597905   62139 logs.go:276] 0 containers: []
	W0416 01:01:19.597917   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:19.597925   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:19.597980   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:19.637536   62139 cri.go:89] found id: ""
	I0416 01:01:19.637563   62139 logs.go:276] 0 containers: []
	W0416 01:01:19.637571   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:19.637576   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:19.637623   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:19.674414   62139 cri.go:89] found id: ""
	I0416 01:01:19.674447   62139 logs.go:276] 0 containers: []
	W0416 01:01:19.674458   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:19.674465   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:19.674525   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:19.709717   62139 cri.go:89] found id: ""
	I0416 01:01:19.709751   62139 logs.go:276] 0 containers: []
	W0416 01:01:19.709761   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:19.709769   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:19.709837   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:19.747458   62139 cri.go:89] found id: ""
	I0416 01:01:19.747482   62139 logs.go:276] 0 containers: []
	W0416 01:01:19.747489   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:19.747505   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:19.747523   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:19.834811   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:19.834846   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:19.876398   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:19.876428   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:19.931596   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:19.931632   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:19.947074   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:19.947103   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:20.023434   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:19.720883   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:21.721969   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:18.829276   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:20.829405   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:19.120254   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:21.120520   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:22.524036   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:22.539399   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:22.539488   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:22.574696   62139 cri.go:89] found id: ""
	I0416 01:01:22.574723   62139 logs.go:276] 0 containers: []
	W0416 01:01:22.574733   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:22.574741   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:22.574805   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:22.617474   62139 cri.go:89] found id: ""
	I0416 01:01:22.617503   62139 logs.go:276] 0 containers: []
	W0416 01:01:22.617514   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:22.617521   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:22.617579   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:22.657744   62139 cri.go:89] found id: ""
	I0416 01:01:22.657773   62139 logs.go:276] 0 containers: []
	W0416 01:01:22.657781   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:22.657786   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:22.657842   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:22.695513   62139 cri.go:89] found id: ""
	I0416 01:01:22.695544   62139 logs.go:276] 0 containers: []
	W0416 01:01:22.695552   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:22.695557   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:22.695606   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:22.732943   62139 cri.go:89] found id: ""
	I0416 01:01:22.732973   62139 logs.go:276] 0 containers: []
	W0416 01:01:22.732983   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:22.732990   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:22.733051   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:22.768735   62139 cri.go:89] found id: ""
	I0416 01:01:22.768767   62139 logs.go:276] 0 containers: []
	W0416 01:01:22.768775   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:22.768782   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:22.768842   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:22.804330   62139 cri.go:89] found id: ""
	I0416 01:01:22.804352   62139 logs.go:276] 0 containers: []
	W0416 01:01:22.804361   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:22.804367   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:22.804425   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:22.842165   62139 cri.go:89] found id: ""
	I0416 01:01:22.842192   62139 logs.go:276] 0 containers: []
	W0416 01:01:22.842199   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:22.842207   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:22.842219   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:22.921859   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:22.921880   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:22.921893   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:23.003432   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:23.003468   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:23.045446   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:23.045476   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:23.097327   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:23.097358   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:25.612297   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:25.627489   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:25.627565   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:25.664040   62139 cri.go:89] found id: ""
	I0416 01:01:25.664072   62139 logs.go:276] 0 containers: []
	W0416 01:01:25.664083   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:25.664091   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:25.664149   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:25.701004   62139 cri.go:89] found id: ""
	I0416 01:01:25.701029   62139 logs.go:276] 0 containers: []
	W0416 01:01:25.701036   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:25.701042   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:25.701087   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:25.740108   62139 cri.go:89] found id: ""
	I0416 01:01:25.740136   62139 logs.go:276] 0 containers: []
	W0416 01:01:25.740144   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:25.740150   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:25.740194   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:25.778413   62139 cri.go:89] found id: ""
	I0416 01:01:25.778447   62139 logs.go:276] 0 containers: []
	W0416 01:01:25.778458   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:25.778465   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:25.778530   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:25.815188   62139 cri.go:89] found id: ""
	I0416 01:01:25.815215   62139 logs.go:276] 0 containers: []
	W0416 01:01:25.815223   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:25.815230   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:25.815277   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:25.856370   62139 cri.go:89] found id: ""
	I0416 01:01:25.856402   62139 logs.go:276] 0 containers: []
	W0416 01:01:25.856410   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:25.856416   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:25.856476   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:25.895363   62139 cri.go:89] found id: ""
	I0416 01:01:25.895388   62139 logs.go:276] 0 containers: []
	W0416 01:01:25.895396   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:25.895402   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:25.895455   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:25.931854   62139 cri.go:89] found id: ""
	I0416 01:01:25.931881   62139 logs.go:276] 0 containers: []
	W0416 01:01:25.931889   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:25.931897   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:25.931923   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:26.008395   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:26.008419   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:26.008436   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:26.087946   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:26.087983   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:26.134693   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:26.134725   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:26.189618   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:26.189652   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:24.220798   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:26.221193   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:22.833917   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:25.331147   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:27.331702   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:23.620819   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:25.621119   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:28.705010   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:28.719575   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:28.719644   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:28.759011   62139 cri.go:89] found id: ""
	I0416 01:01:28.759037   62139 logs.go:276] 0 containers: []
	W0416 01:01:28.759044   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:28.759050   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:28.759112   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:28.794640   62139 cri.go:89] found id: ""
	I0416 01:01:28.794675   62139 logs.go:276] 0 containers: []
	W0416 01:01:28.794687   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:28.794695   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:28.794807   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:28.835634   62139 cri.go:89] found id: ""
	I0416 01:01:28.835663   62139 logs.go:276] 0 containers: []
	W0416 01:01:28.835674   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:28.835681   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:28.835747   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:28.875384   62139 cri.go:89] found id: ""
	I0416 01:01:28.875408   62139 logs.go:276] 0 containers: []
	W0416 01:01:28.875426   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:28.875433   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:28.875484   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:28.921202   62139 cri.go:89] found id: ""
	I0416 01:01:28.921234   62139 logs.go:276] 0 containers: []
	W0416 01:01:28.921244   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:28.921252   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:28.921314   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:28.958791   62139 cri.go:89] found id: ""
	I0416 01:01:28.958820   62139 logs.go:276] 0 containers: []
	W0416 01:01:28.958828   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:28.958834   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:28.958923   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:28.996136   62139 cri.go:89] found id: ""
	I0416 01:01:28.996168   62139 logs.go:276] 0 containers: []
	W0416 01:01:28.996179   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:28.996185   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:28.996259   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:29.033912   62139 cri.go:89] found id: ""
	I0416 01:01:29.033939   62139 logs.go:276] 0 containers: []
	W0416 01:01:29.033946   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:29.033954   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:29.033969   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:29.114162   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:29.114209   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:29.153934   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:29.153965   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:29.207548   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:29.207584   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:29.222158   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:29.222184   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:29.297414   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:31.798026   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:31.812740   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:31.812815   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:31.855058   62139 cri.go:89] found id: ""
	I0416 01:01:31.855087   62139 logs.go:276] 0 containers: []
	W0416 01:01:31.855098   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:31.855105   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:31.855172   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:31.897128   62139 cri.go:89] found id: ""
	I0416 01:01:31.897170   62139 logs.go:276] 0 containers: []
	W0416 01:01:31.897192   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:31.897200   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:31.897259   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:31.934497   62139 cri.go:89] found id: ""
	I0416 01:01:31.934520   62139 logs.go:276] 0 containers: []
	W0416 01:01:31.934532   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:31.934541   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:31.934588   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:31.974020   62139 cri.go:89] found id: ""
	I0416 01:01:31.974051   62139 logs.go:276] 0 containers: []
	W0416 01:01:31.974062   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:31.974093   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:31.974163   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:32.015433   62139 cri.go:89] found id: ""
	I0416 01:01:32.015460   62139 logs.go:276] 0 containers: []
	W0416 01:01:32.015471   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:32.015477   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:32.015540   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:32.058286   62139 cri.go:89] found id: ""
	I0416 01:01:32.058336   62139 logs.go:276] 0 containers: []
	W0416 01:01:32.058345   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:32.058351   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:32.058408   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:28.720596   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:30.720732   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:29.828996   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:31.830765   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:28.121038   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:30.619604   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:32.620210   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:32.100331   62139 cri.go:89] found id: ""
	I0416 01:01:32.102041   62139 logs.go:276] 0 containers: []
	W0416 01:01:32.102054   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:32.102061   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:32.102115   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:32.141420   62139 cri.go:89] found id: ""
	I0416 01:01:32.141446   62139 logs.go:276] 0 containers: []
	W0416 01:01:32.141454   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:32.141462   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:32.141473   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:32.195323   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:32.195364   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:32.210180   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:32.210206   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:32.282548   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:32.282570   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:32.282585   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:32.360627   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:32.360663   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:34.901239   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:34.917097   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:34.917205   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:34.959297   62139 cri.go:89] found id: ""
	I0416 01:01:34.959327   62139 logs.go:276] 0 containers: []
	W0416 01:01:34.959337   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:34.959344   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:34.959422   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:35.000927   62139 cri.go:89] found id: ""
	I0416 01:01:35.000974   62139 logs.go:276] 0 containers: []
	W0416 01:01:35.000984   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:35.001000   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:35.001064   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:35.038049   62139 cri.go:89] found id: ""
	I0416 01:01:35.038073   62139 logs.go:276] 0 containers: []
	W0416 01:01:35.038082   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:35.038090   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:35.038143   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:35.075396   62139 cri.go:89] found id: ""
	I0416 01:01:35.075467   62139 logs.go:276] 0 containers: []
	W0416 01:01:35.075481   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:35.075490   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:35.075591   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:35.114297   62139 cri.go:89] found id: ""
	I0416 01:01:35.114325   62139 logs.go:276] 0 containers: []
	W0416 01:01:35.114335   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:35.114343   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:35.114405   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:35.152075   62139 cri.go:89] found id: ""
	I0416 01:01:35.152099   62139 logs.go:276] 0 containers: []
	W0416 01:01:35.152106   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:35.152112   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:35.152161   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:35.187945   62139 cri.go:89] found id: ""
	I0416 01:01:35.187974   62139 logs.go:276] 0 containers: []
	W0416 01:01:35.187984   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:35.187991   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:35.188057   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:35.225225   62139 cri.go:89] found id: ""
	I0416 01:01:35.225253   62139 logs.go:276] 0 containers: []
	W0416 01:01:35.225262   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:35.225272   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:35.225287   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:35.279584   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:35.279628   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:35.293416   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:35.293456   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:35.370122   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:35.370147   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:35.370159   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:35.451482   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:35.451517   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:32.723226   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:35.221390   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:34.329009   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:36.329761   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:34.620492   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:36.620527   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:37.994358   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:38.008209   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:38.008277   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:38.047905   62139 cri.go:89] found id: ""
	I0416 01:01:38.047943   62139 logs.go:276] 0 containers: []
	W0416 01:01:38.047955   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:38.047962   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:38.048016   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:38.085749   62139 cri.go:89] found id: ""
	I0416 01:01:38.085780   62139 logs.go:276] 0 containers: []
	W0416 01:01:38.085790   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:38.085797   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:38.085864   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:38.122396   62139 cri.go:89] found id: ""
	I0416 01:01:38.122419   62139 logs.go:276] 0 containers: []
	W0416 01:01:38.122427   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:38.122432   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:38.122479   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:38.159284   62139 cri.go:89] found id: ""
	I0416 01:01:38.159313   62139 logs.go:276] 0 containers: []
	W0416 01:01:38.159322   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:38.159329   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:38.159390   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:38.193245   62139 cri.go:89] found id: ""
	I0416 01:01:38.193280   62139 logs.go:276] 0 containers: []
	W0416 01:01:38.193291   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:38.193298   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:38.193362   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:38.229147   62139 cri.go:89] found id: ""
	I0416 01:01:38.229179   62139 logs.go:276] 0 containers: []
	W0416 01:01:38.229188   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:38.229194   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:38.229251   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:38.267285   62139 cri.go:89] found id: ""
	I0416 01:01:38.267309   62139 logs.go:276] 0 containers: []
	W0416 01:01:38.267317   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:38.267321   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:38.267389   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:38.305181   62139 cri.go:89] found id: ""
	I0416 01:01:38.305207   62139 logs.go:276] 0 containers: []
	W0416 01:01:38.305215   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:38.305222   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:38.305237   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:38.321714   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:38.321742   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:38.398352   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:38.398372   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:38.398382   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:38.474095   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:38.474129   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:38.520540   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:38.520581   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:41.072083   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:41.086767   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:41.086860   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:41.125119   62139 cri.go:89] found id: ""
	I0416 01:01:41.125149   62139 logs.go:276] 0 containers: []
	W0416 01:01:41.125175   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:41.125182   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:41.125253   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:41.159885   62139 cri.go:89] found id: ""
	I0416 01:01:41.159915   62139 logs.go:276] 0 containers: []
	W0416 01:01:41.159925   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:41.159931   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:41.160012   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:41.196334   62139 cri.go:89] found id: ""
	I0416 01:01:41.196366   62139 logs.go:276] 0 containers: []
	W0416 01:01:41.196377   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:41.196385   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:41.196447   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:41.234254   62139 cri.go:89] found id: ""
	I0416 01:01:41.234282   62139 logs.go:276] 0 containers: []
	W0416 01:01:41.234300   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:41.234319   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:41.234413   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:41.271499   62139 cri.go:89] found id: ""
	I0416 01:01:41.271523   62139 logs.go:276] 0 containers: []
	W0416 01:01:41.271531   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:41.271536   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:41.271604   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:41.311064   62139 cri.go:89] found id: ""
	I0416 01:01:41.311096   62139 logs.go:276] 0 containers: []
	W0416 01:01:41.311107   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:41.311114   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:41.311179   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:41.349012   62139 cri.go:89] found id: ""
	I0416 01:01:41.349043   62139 logs.go:276] 0 containers: []
	W0416 01:01:41.349053   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:41.349060   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:41.349117   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:41.385258   62139 cri.go:89] found id: ""
	I0416 01:01:41.385298   62139 logs.go:276] 0 containers: []
	W0416 01:01:41.385305   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:41.385315   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:41.385330   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:41.470086   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:41.470130   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:41.513835   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:41.513870   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:41.565980   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:41.566013   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:41.582647   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:41.582678   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:41.658928   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:37.724628   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:40.222025   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:38.329899   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:40.330143   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:39.120850   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:41.121383   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:44.159107   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:44.173015   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:44.173088   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:44.214310   62139 cri.go:89] found id: ""
	I0416 01:01:44.214345   62139 logs.go:276] 0 containers: []
	W0416 01:01:44.214363   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:44.214374   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:44.214462   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:44.256476   62139 cri.go:89] found id: ""
	I0416 01:01:44.256503   62139 logs.go:276] 0 containers: []
	W0416 01:01:44.256511   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:44.256516   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:44.256577   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:44.298047   62139 cri.go:89] found id: ""
	I0416 01:01:44.298079   62139 logs.go:276] 0 containers: []
	W0416 01:01:44.298089   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:44.298097   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:44.298158   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:44.339165   62139 cri.go:89] found id: ""
	I0416 01:01:44.339196   62139 logs.go:276] 0 containers: []
	W0416 01:01:44.339206   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:44.339213   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:44.339280   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:44.378078   62139 cri.go:89] found id: ""
	I0416 01:01:44.378108   62139 logs.go:276] 0 containers: []
	W0416 01:01:44.378116   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:44.378122   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:44.378170   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:44.421494   62139 cri.go:89] found id: ""
	I0416 01:01:44.421525   62139 logs.go:276] 0 containers: []
	W0416 01:01:44.421536   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:44.421543   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:44.421609   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:44.459919   62139 cri.go:89] found id: ""
	I0416 01:01:44.459948   62139 logs.go:276] 0 containers: []
	W0416 01:01:44.459958   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:44.459965   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:44.460025   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:44.499448   62139 cri.go:89] found id: ""
	I0416 01:01:44.499479   62139 logs.go:276] 0 containers: []
	W0416 01:01:44.499489   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:44.499500   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:44.499516   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:44.555122   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:44.555159   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:44.572048   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:44.572075   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:44.646252   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:44.646283   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:44.646299   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:44.730593   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:44.730620   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:42.720855   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:44.723141   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:46.723452   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:42.831045   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:45.329039   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:47.331355   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:43.619897   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:45.620068   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:47.620162   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:47.276658   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:47.291354   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:47.291431   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:47.334998   62139 cri.go:89] found id: ""
	I0416 01:01:47.335036   62139 logs.go:276] 0 containers: []
	W0416 01:01:47.335055   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:47.335062   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:47.335121   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:47.376546   62139 cri.go:89] found id: ""
	I0416 01:01:47.376575   62139 logs.go:276] 0 containers: []
	W0416 01:01:47.376582   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:47.376587   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:47.376647   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:47.418609   62139 cri.go:89] found id: ""
	I0416 01:01:47.418642   62139 logs.go:276] 0 containers: []
	W0416 01:01:47.418654   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:47.418661   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:47.418721   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:47.459432   62139 cri.go:89] found id: ""
	I0416 01:01:47.459458   62139 logs.go:276] 0 containers: []
	W0416 01:01:47.459465   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:47.459470   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:47.459518   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:47.497776   62139 cri.go:89] found id: ""
	I0416 01:01:47.497800   62139 logs.go:276] 0 containers: []
	W0416 01:01:47.497808   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:47.497813   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:47.497866   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:47.536803   62139 cri.go:89] found id: ""
	I0416 01:01:47.536835   62139 logs.go:276] 0 containers: []
	W0416 01:01:47.536842   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:47.536849   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:47.536916   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:47.575883   62139 cri.go:89] found id: ""
	I0416 01:01:47.575916   62139 logs.go:276] 0 containers: []
	W0416 01:01:47.575923   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:47.575931   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:47.575976   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:47.627676   62139 cri.go:89] found id: ""
	I0416 01:01:47.627697   62139 logs.go:276] 0 containers: []
	W0416 01:01:47.627703   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:47.627711   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:47.627725   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:47.669714   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:47.669745   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:47.721349   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:47.721389   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:47.735833   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:47.735859   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:47.806890   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:47.806913   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:47.806925   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:50.386960   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:50.400832   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:50.400901   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:50.443042   62139 cri.go:89] found id: ""
	I0416 01:01:50.443076   62139 logs.go:276] 0 containers: []
	W0416 01:01:50.443086   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:50.443094   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:50.443157   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:50.480495   62139 cri.go:89] found id: ""
	I0416 01:01:50.480526   62139 logs.go:276] 0 containers: []
	W0416 01:01:50.480536   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:50.480544   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:50.480602   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:50.516578   62139 cri.go:89] found id: ""
	I0416 01:01:50.516605   62139 logs.go:276] 0 containers: []
	W0416 01:01:50.516613   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:50.516618   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:50.516676   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:50.555302   62139 cri.go:89] found id: ""
	I0416 01:01:50.555330   62139 logs.go:276] 0 containers: []
	W0416 01:01:50.555337   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:50.555344   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:50.555388   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:50.594647   62139 cri.go:89] found id: ""
	I0416 01:01:50.594674   62139 logs.go:276] 0 containers: []
	W0416 01:01:50.594682   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:50.594688   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:50.594737   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:50.633401   62139 cri.go:89] found id: ""
	I0416 01:01:50.633428   62139 logs.go:276] 0 containers: []
	W0416 01:01:50.633436   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:50.633442   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:50.633501   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:50.673714   62139 cri.go:89] found id: ""
	I0416 01:01:50.673744   62139 logs.go:276] 0 containers: []
	W0416 01:01:50.673755   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:50.673763   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:50.673811   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:50.710103   62139 cri.go:89] found id: ""
	I0416 01:01:50.710127   62139 logs.go:276] 0 containers: []
	W0416 01:01:50.710134   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:50.710142   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:50.710153   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:50.765121   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:50.765168   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:50.780407   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:50.780436   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:50.855602   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:50.855635   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:50.855663   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:50.937249   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:50.937283   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:49.220483   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:51.724129   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:49.829742   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:52.330579   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:49.621383   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:52.120841   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:53.481261   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:53.495872   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:53.495931   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:53.532710   62139 cri.go:89] found id: ""
	I0416 01:01:53.532738   62139 logs.go:276] 0 containers: []
	W0416 01:01:53.532748   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:53.532756   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:53.532815   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:53.568734   62139 cri.go:89] found id: ""
	I0416 01:01:53.568763   62139 logs.go:276] 0 containers: []
	W0416 01:01:53.568770   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:53.568776   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:53.568841   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:53.608937   62139 cri.go:89] found id: ""
	I0416 01:01:53.608965   62139 logs.go:276] 0 containers: []
	W0416 01:01:53.608976   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:53.608984   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:53.609042   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:53.646538   62139 cri.go:89] found id: ""
	I0416 01:01:53.646573   62139 logs.go:276] 0 containers: []
	W0416 01:01:53.646585   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:53.646592   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:53.646657   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:53.687761   62139 cri.go:89] found id: ""
	I0416 01:01:53.687792   62139 logs.go:276] 0 containers: []
	W0416 01:01:53.687801   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:53.687809   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:53.687872   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:53.726126   62139 cri.go:89] found id: ""
	I0416 01:01:53.726161   62139 logs.go:276] 0 containers: []
	W0416 01:01:53.726169   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:53.726174   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:53.726224   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:53.762583   62139 cri.go:89] found id: ""
	I0416 01:01:53.762609   62139 logs.go:276] 0 containers: []
	W0416 01:01:53.762618   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:53.762625   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:53.762695   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:53.803685   62139 cri.go:89] found id: ""
	I0416 01:01:53.803715   62139 logs.go:276] 0 containers: []
	W0416 01:01:53.803726   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:53.803737   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:53.803751   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:53.862215   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:53.862255   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:53.877713   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:53.877743   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:53.953394   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:53.953422   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:53.953438   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:54.044657   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:54.044698   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:56.602100   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:56.616548   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:56.616632   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:56.653765   62139 cri.go:89] found id: ""
	I0416 01:01:56.653794   62139 logs.go:276] 0 containers: []
	W0416 01:01:56.653810   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:56.653817   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:56.653879   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:56.691394   62139 cri.go:89] found id: ""
	I0416 01:01:56.691416   62139 logs.go:276] 0 containers: []
	W0416 01:01:56.691422   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:56.691428   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:56.691475   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:56.728995   62139 cri.go:89] found id: ""
	I0416 01:01:56.729017   62139 logs.go:276] 0 containers: []
	W0416 01:01:56.729024   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:56.729029   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:56.729078   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:56.769119   62139 cri.go:89] found id: ""
	I0416 01:01:56.769184   62139 logs.go:276] 0 containers: []
	W0416 01:01:56.769196   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:56.769204   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:56.769270   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:56.810562   62139 cri.go:89] found id: ""
	I0416 01:01:56.810589   62139 logs.go:276] 0 containers: []
	W0416 01:01:56.810597   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:56.810608   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:56.810669   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:56.849367   62139 cri.go:89] found id: ""
	I0416 01:01:56.849392   62139 logs.go:276] 0 containers: []
	W0416 01:01:56.849399   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:56.849405   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:56.849464   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:56.887330   62139 cri.go:89] found id: ""
	I0416 01:01:56.887359   62139 logs.go:276] 0 containers: []
	W0416 01:01:56.887370   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:56.887378   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:56.887461   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:56.926636   62139 cri.go:89] found id: ""
	I0416 01:01:56.926664   62139 logs.go:276] 0 containers: []
	W0416 01:01:56.926672   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:56.926682   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:56.926697   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:56.981836   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:56.981875   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:56.996385   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:56.996411   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:57.071026   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:57.071054   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:57.071070   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:54.219668   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:56.221212   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:54.829549   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:56.831452   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:54.619864   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:56.620968   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:57.155430   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:57.155466   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:59.701547   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:59.714465   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:59.714526   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:59.759791   62139 cri.go:89] found id: ""
	I0416 01:01:59.759830   62139 logs.go:276] 0 containers: []
	W0416 01:01:59.759841   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:59.759849   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:59.759914   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:59.813303   62139 cri.go:89] found id: ""
	I0416 01:01:59.813334   62139 logs.go:276] 0 containers: []
	W0416 01:01:59.813343   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:59.813353   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:59.813406   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:59.872291   62139 cri.go:89] found id: ""
	I0416 01:01:59.872328   62139 logs.go:276] 0 containers: []
	W0416 01:01:59.872338   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:59.872347   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:59.872423   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:59.910397   62139 cri.go:89] found id: ""
	I0416 01:01:59.910425   62139 logs.go:276] 0 containers: []
	W0416 01:01:59.910437   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:59.910444   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:59.910512   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:59.953656   62139 cri.go:89] found id: ""
	I0416 01:01:59.953685   62139 logs.go:276] 0 containers: []
	W0416 01:01:59.953695   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:59.953703   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:59.953779   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:59.993193   62139 cri.go:89] found id: ""
	I0416 01:01:59.993220   62139 logs.go:276] 0 containers: []
	W0416 01:01:59.993229   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:59.993239   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:59.993298   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:00.030205   62139 cri.go:89] found id: ""
	I0416 01:02:00.030229   62139 logs.go:276] 0 containers: []
	W0416 01:02:00.030237   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:00.030242   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:00.030302   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:00.068160   62139 cri.go:89] found id: ""
	I0416 01:02:00.068189   62139 logs.go:276] 0 containers: []
	W0416 01:02:00.068199   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:00.068211   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:00.068226   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:00.149383   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:00.149416   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:00.188000   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:00.188025   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:00.240522   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:00.240550   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:00.254189   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:00.254215   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:00.331483   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:58.721272   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:01.220698   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:59.329440   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:01.830408   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:59.122269   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:01.619839   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:02.832656   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:02.846826   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:02.846907   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:02.883397   62139 cri.go:89] found id: ""
	I0416 01:02:02.883428   62139 logs.go:276] 0 containers: []
	W0416 01:02:02.883439   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:02.883446   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:02.883499   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:02.923686   62139 cri.go:89] found id: ""
	I0416 01:02:02.923708   62139 logs.go:276] 0 containers: []
	W0416 01:02:02.923715   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:02.923719   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:02.923770   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:02.964155   62139 cri.go:89] found id: ""
	I0416 01:02:02.964180   62139 logs.go:276] 0 containers: []
	W0416 01:02:02.964188   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:02.964193   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:02.964247   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:03.005357   62139 cri.go:89] found id: ""
	I0416 01:02:03.005386   62139 logs.go:276] 0 containers: []
	W0416 01:02:03.005396   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:03.005403   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:03.005464   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:03.047221   62139 cri.go:89] found id: ""
	I0416 01:02:03.047246   62139 logs.go:276] 0 containers: []
	W0416 01:02:03.047257   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:03.047264   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:03.047326   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:03.088737   62139 cri.go:89] found id: ""
	I0416 01:02:03.088767   62139 logs.go:276] 0 containers: []
	W0416 01:02:03.088776   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:03.088784   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:03.088846   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:03.129756   62139 cri.go:89] found id: ""
	I0416 01:02:03.129778   62139 logs.go:276] 0 containers: []
	W0416 01:02:03.129785   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:03.129790   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:03.129837   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:03.169422   62139 cri.go:89] found id: ""
	I0416 01:02:03.169447   62139 logs.go:276] 0 containers: []
	W0416 01:02:03.169459   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:03.169468   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:03.169478   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:03.246485   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:03.246503   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:03.246514   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:03.326498   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:03.326533   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:03.372788   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:03.372817   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:03.428561   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:03.428603   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:05.944274   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:05.957744   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:05.957813   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:05.993348   62139 cri.go:89] found id: ""
	I0416 01:02:05.993400   62139 logs.go:276] 0 containers: []
	W0416 01:02:05.993411   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:05.993430   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:05.993497   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:06.034811   62139 cri.go:89] found id: ""
	I0416 01:02:06.034848   62139 logs.go:276] 0 containers: []
	W0416 01:02:06.034859   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:06.034866   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:06.034953   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:06.079047   62139 cri.go:89] found id: ""
	I0416 01:02:06.079070   62139 logs.go:276] 0 containers: []
	W0416 01:02:06.079078   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:06.079082   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:06.079127   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:06.122494   62139 cri.go:89] found id: ""
	I0416 01:02:06.122513   62139 logs.go:276] 0 containers: []
	W0416 01:02:06.122520   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:06.122525   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:06.122589   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:06.163436   62139 cri.go:89] found id: ""
	I0416 01:02:06.163461   62139 logs.go:276] 0 containers: []
	W0416 01:02:06.163468   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:06.163473   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:06.163534   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:06.205036   62139 cri.go:89] found id: ""
	I0416 01:02:06.205064   62139 logs.go:276] 0 containers: []
	W0416 01:02:06.205072   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:06.205077   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:06.205134   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:06.242056   62139 cri.go:89] found id: ""
	I0416 01:02:06.242084   62139 logs.go:276] 0 containers: []
	W0416 01:02:06.242094   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:06.242107   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:06.242166   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:06.278604   62139 cri.go:89] found id: ""
	I0416 01:02:06.278636   62139 logs.go:276] 0 containers: []
	W0416 01:02:06.278646   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:06.278656   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:06.278671   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:06.334631   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:06.334658   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:06.348199   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:06.348227   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:06.424774   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:06.424793   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:06.424804   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:06.503509   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:06.503542   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:03.221238   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:05.721006   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:04.329267   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:06.329476   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:03.620957   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:06.121348   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:09.046665   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:09.061072   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:09.061173   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:09.097482   62139 cri.go:89] found id: ""
	I0416 01:02:09.097514   62139 logs.go:276] 0 containers: []
	W0416 01:02:09.097524   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:09.097543   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:09.097613   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:09.135124   62139 cri.go:89] found id: ""
	I0416 01:02:09.135157   62139 logs.go:276] 0 containers: []
	W0416 01:02:09.135168   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:09.135175   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:09.135236   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:09.173887   62139 cri.go:89] found id: ""
	I0416 01:02:09.173912   62139 logs.go:276] 0 containers: []
	W0416 01:02:09.173920   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:09.173925   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:09.173983   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:09.209658   62139 cri.go:89] found id: ""
	I0416 01:02:09.209683   62139 logs.go:276] 0 containers: []
	W0416 01:02:09.209691   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:09.209702   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:09.209763   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:09.249149   62139 cri.go:89] found id: ""
	I0416 01:02:09.249200   62139 logs.go:276] 0 containers: []
	W0416 01:02:09.249209   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:09.249214   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:09.249292   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:09.291447   62139 cri.go:89] found id: ""
	I0416 01:02:09.291477   62139 logs.go:276] 0 containers: []
	W0416 01:02:09.291487   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:09.291494   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:09.291553   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:09.329248   62139 cri.go:89] found id: ""
	I0416 01:02:09.329271   62139 logs.go:276] 0 containers: []
	W0416 01:02:09.329281   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:09.329288   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:09.329345   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:09.365585   62139 cri.go:89] found id: ""
	I0416 01:02:09.365613   62139 logs.go:276] 0 containers: []
	W0416 01:02:09.365622   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:09.365632   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:09.365645   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:09.418998   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:09.419031   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:09.433531   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:09.433558   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:09.508543   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:09.508573   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:09.508588   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:09.593889   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:09.593930   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:08.220704   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:10.221232   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:12.224680   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:08.330281   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:10.828856   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:08.619632   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:10.619780   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:12.621319   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:12.139020   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:12.154268   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:12.154349   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:12.192717   62139 cri.go:89] found id: ""
	I0416 01:02:12.192746   62139 logs.go:276] 0 containers: []
	W0416 01:02:12.192758   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:12.192765   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:12.192832   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:12.230633   62139 cri.go:89] found id: ""
	I0416 01:02:12.230662   62139 logs.go:276] 0 containers: []
	W0416 01:02:12.230674   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:12.230681   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:12.230729   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:12.271108   62139 cri.go:89] found id: ""
	I0416 01:02:12.271150   62139 logs.go:276] 0 containers: []
	W0416 01:02:12.271161   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:12.271168   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:12.271233   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:12.310161   62139 cri.go:89] found id: ""
	I0416 01:02:12.310186   62139 logs.go:276] 0 containers: []
	W0416 01:02:12.310194   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:12.310201   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:12.310272   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:12.349638   62139 cri.go:89] found id: ""
	I0416 01:02:12.349668   62139 logs.go:276] 0 containers: []
	W0416 01:02:12.349678   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:12.349686   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:12.349766   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:12.391565   62139 cri.go:89] found id: ""
	I0416 01:02:12.391597   62139 logs.go:276] 0 containers: []
	W0416 01:02:12.391607   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:12.391620   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:12.391681   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:12.429142   62139 cri.go:89] found id: ""
	I0416 01:02:12.429186   62139 logs.go:276] 0 containers: []
	W0416 01:02:12.429195   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:12.429200   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:12.429249   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:12.466209   62139 cri.go:89] found id: ""
	I0416 01:02:12.466238   62139 logs.go:276] 0 containers: []
	W0416 01:02:12.466249   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:12.466260   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:12.466277   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:12.551333   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:12.551355   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:12.551367   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:12.634465   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:12.634496   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:12.675198   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:12.675231   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:12.728933   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:12.728962   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:15.243521   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:15.258589   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:15.258657   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:15.301901   62139 cri.go:89] found id: ""
	I0416 01:02:15.301931   62139 logs.go:276] 0 containers: []
	W0416 01:02:15.301943   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:15.301951   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:15.302006   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:15.345932   62139 cri.go:89] found id: ""
	I0416 01:02:15.346011   62139 logs.go:276] 0 containers: []
	W0416 01:02:15.346032   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:15.346043   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:15.346113   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:15.387957   62139 cri.go:89] found id: ""
	I0416 01:02:15.387983   62139 logs.go:276] 0 containers: []
	W0416 01:02:15.387991   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:15.387996   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:15.388044   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:15.424887   62139 cri.go:89] found id: ""
	I0416 01:02:15.424916   62139 logs.go:276] 0 containers: []
	W0416 01:02:15.424927   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:15.424934   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:15.424996   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:15.460088   62139 cri.go:89] found id: ""
	I0416 01:02:15.460113   62139 logs.go:276] 0 containers: []
	W0416 01:02:15.460120   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:15.460125   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:15.460172   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:15.495567   62139 cri.go:89] found id: ""
	I0416 01:02:15.495597   62139 logs.go:276] 0 containers: []
	W0416 01:02:15.495607   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:15.495615   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:15.495692   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:15.533901   62139 cri.go:89] found id: ""
	I0416 01:02:15.533931   62139 logs.go:276] 0 containers: []
	W0416 01:02:15.533940   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:15.533946   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:15.533996   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:15.576665   62139 cri.go:89] found id: ""
	I0416 01:02:15.576692   62139 logs.go:276] 0 containers: []
	W0416 01:02:15.576702   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:15.576712   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:15.576728   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:15.626933   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:15.626961   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:15.681627   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:15.681656   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:15.695572   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:15.695608   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:15.768910   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:15.768934   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:15.768945   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:14.720472   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:16.722418   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:12.830086   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:14.830540   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:17.329838   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:15.120394   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:17.120523   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:18.349776   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:18.363499   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:18.363568   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:18.404210   62139 cri.go:89] found id: ""
	I0416 01:02:18.404234   62139 logs.go:276] 0 containers: []
	W0416 01:02:18.404241   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:18.404246   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:18.404304   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:18.444610   62139 cri.go:89] found id: ""
	I0416 01:02:18.444641   62139 logs.go:276] 0 containers: []
	W0416 01:02:18.444651   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:18.444658   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:18.444722   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:18.483134   62139 cri.go:89] found id: ""
	I0416 01:02:18.483160   62139 logs.go:276] 0 containers: []
	W0416 01:02:18.483168   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:18.483173   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:18.483220   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:18.522120   62139 cri.go:89] found id: ""
	I0416 01:02:18.522144   62139 logs.go:276] 0 containers: []
	W0416 01:02:18.522156   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:18.522161   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:18.522205   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:18.566293   62139 cri.go:89] found id: ""
	I0416 01:02:18.566319   62139 logs.go:276] 0 containers: []
	W0416 01:02:18.566327   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:18.566332   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:18.566391   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:18.604000   62139 cri.go:89] found id: ""
	I0416 01:02:18.604028   62139 logs.go:276] 0 containers: []
	W0416 01:02:18.604036   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:18.604042   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:18.604089   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:18.641967   62139 cri.go:89] found id: ""
	I0416 01:02:18.641999   62139 logs.go:276] 0 containers: []
	W0416 01:02:18.642009   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:18.642016   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:18.642080   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:18.683494   62139 cri.go:89] found id: ""
	I0416 01:02:18.683533   62139 logs.go:276] 0 containers: []
	W0416 01:02:18.683544   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:18.683555   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:18.683570   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:18.761674   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:18.761699   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:18.761714   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:18.849959   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:18.849995   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:18.895534   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:18.895570   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:18.949287   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:18.949320   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:21.464393   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:21.479019   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:21.479087   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:21.516262   62139 cri.go:89] found id: ""
	I0416 01:02:21.516303   62139 logs.go:276] 0 containers: []
	W0416 01:02:21.516313   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:21.516323   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:21.516385   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:21.554279   62139 cri.go:89] found id: ""
	I0416 01:02:21.554315   62139 logs.go:276] 0 containers: []
	W0416 01:02:21.554327   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:21.554334   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:21.554393   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:21.590889   62139 cri.go:89] found id: ""
	I0416 01:02:21.590918   62139 logs.go:276] 0 containers: []
	W0416 01:02:21.590928   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:21.590935   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:21.590996   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:21.629925   62139 cri.go:89] found id: ""
	I0416 01:02:21.629955   62139 logs.go:276] 0 containers: []
	W0416 01:02:21.629965   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:21.629972   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:21.630032   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:21.667947   62139 cri.go:89] found id: ""
	I0416 01:02:21.667975   62139 logs.go:276] 0 containers: []
	W0416 01:02:21.667983   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:21.667988   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:21.668045   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:21.706275   62139 cri.go:89] found id: ""
	I0416 01:02:21.706308   62139 logs.go:276] 0 containers: []
	W0416 01:02:21.706318   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:21.706326   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:21.706392   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:21.748077   62139 cri.go:89] found id: ""
	I0416 01:02:21.748106   62139 logs.go:276] 0 containers: []
	W0416 01:02:21.748117   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:21.748123   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:21.748170   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:21.785441   62139 cri.go:89] found id: ""
	I0416 01:02:21.785467   62139 logs.go:276] 0 containers: []
	W0416 01:02:21.785477   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:21.785488   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:21.785510   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:21.824702   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:21.824735   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:21.882780   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:21.882810   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:21.897211   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:21.897236   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:21.971882   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:21.971903   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:21.971915   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:19.220913   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:21.721219   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:19.330086   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:21.836759   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:19.620521   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:21.621229   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:24.550749   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:24.564951   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:24.565024   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:24.605025   62139 cri.go:89] found id: ""
	I0416 01:02:24.605055   62139 logs.go:276] 0 containers: []
	W0416 01:02:24.605063   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:24.605068   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:24.605142   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:24.640727   62139 cri.go:89] found id: ""
	I0416 01:02:24.640757   62139 logs.go:276] 0 containers: []
	W0416 01:02:24.640764   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:24.640769   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:24.640822   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:24.678031   62139 cri.go:89] found id: ""
	I0416 01:02:24.678060   62139 logs.go:276] 0 containers: []
	W0416 01:02:24.678068   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:24.678074   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:24.678125   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:24.714854   62139 cri.go:89] found id: ""
	I0416 01:02:24.714896   62139 logs.go:276] 0 containers: []
	W0416 01:02:24.714907   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:24.714914   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:24.714981   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:24.752129   62139 cri.go:89] found id: ""
	I0416 01:02:24.752158   62139 logs.go:276] 0 containers: []
	W0416 01:02:24.752168   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:24.752177   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:24.752243   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:24.788507   62139 cri.go:89] found id: ""
	I0416 01:02:24.788541   62139 logs.go:276] 0 containers: []
	W0416 01:02:24.788551   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:24.788557   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:24.788617   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:24.828379   62139 cri.go:89] found id: ""
	I0416 01:02:24.828409   62139 logs.go:276] 0 containers: []
	W0416 01:02:24.828419   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:24.828427   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:24.828486   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:24.865676   62139 cri.go:89] found id: ""
	I0416 01:02:24.865707   62139 logs.go:276] 0 containers: []
	W0416 01:02:24.865717   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:24.865725   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:24.865736   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:24.941057   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:24.941079   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:24.941091   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:25.025937   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:25.025979   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:25.065828   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:25.065871   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:25.128004   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:25.128039   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:24.221435   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:26.720181   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:24.329677   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:26.329901   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:24.119781   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:26.120316   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:27.643201   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:27.658601   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:27.658660   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:27.700627   62139 cri.go:89] found id: ""
	I0416 01:02:27.700650   62139 logs.go:276] 0 containers: []
	W0416 01:02:27.700657   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:27.700662   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:27.700718   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:27.734929   62139 cri.go:89] found id: ""
	I0416 01:02:27.734957   62139 logs.go:276] 0 containers: []
	W0416 01:02:27.734966   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:27.734975   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:27.735046   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:27.772412   62139 cri.go:89] found id: ""
	I0416 01:02:27.772440   62139 logs.go:276] 0 containers: []
	W0416 01:02:27.772448   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:27.772454   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:27.772514   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:27.809436   62139 cri.go:89] found id: ""
	I0416 01:02:27.809459   62139 logs.go:276] 0 containers: []
	W0416 01:02:27.809466   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:27.809471   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:27.809518   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:27.845717   62139 cri.go:89] found id: ""
	I0416 01:02:27.845746   62139 logs.go:276] 0 containers: []
	W0416 01:02:27.845756   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:27.845764   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:27.845825   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:27.887224   62139 cri.go:89] found id: ""
	I0416 01:02:27.887250   62139 logs.go:276] 0 containers: []
	W0416 01:02:27.887260   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:27.887267   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:27.887334   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:27.920945   62139 cri.go:89] found id: ""
	I0416 01:02:27.920974   62139 logs.go:276] 0 containers: []
	W0416 01:02:27.920984   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:27.920992   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:27.921066   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:27.960933   62139 cri.go:89] found id: ""
	I0416 01:02:27.960959   62139 logs.go:276] 0 containers: []
	W0416 01:02:27.960966   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:27.960974   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:27.960985   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:28.013003   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:28.013033   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:28.026599   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:28.026626   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:28.117200   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:28.117226   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:28.117240   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:28.198003   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:28.198036   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:30.741379   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:30.757102   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:30.757199   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:30.798038   62139 cri.go:89] found id: ""
	I0416 01:02:30.798068   62139 logs.go:276] 0 containers: []
	W0416 01:02:30.798075   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:30.798080   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:30.798137   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:30.844840   62139 cri.go:89] found id: ""
	I0416 01:02:30.844862   62139 logs.go:276] 0 containers: []
	W0416 01:02:30.844871   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:30.844877   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:30.844944   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:30.883816   62139 cri.go:89] found id: ""
	I0416 01:02:30.883841   62139 logs.go:276] 0 containers: []
	W0416 01:02:30.883849   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:30.883855   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:30.883903   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:30.919353   62139 cri.go:89] found id: ""
	I0416 01:02:30.919380   62139 logs.go:276] 0 containers: []
	W0416 01:02:30.919389   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:30.919396   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:30.919457   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:30.957036   62139 cri.go:89] found id: ""
	I0416 01:02:30.957061   62139 logs.go:276] 0 containers: []
	W0416 01:02:30.957069   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:30.957084   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:30.957143   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:30.993179   62139 cri.go:89] found id: ""
	I0416 01:02:30.993211   62139 logs.go:276] 0 containers: []
	W0416 01:02:30.993220   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:30.993228   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:30.993315   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:31.032634   62139 cri.go:89] found id: ""
	I0416 01:02:31.032661   62139 logs.go:276] 0 containers: []
	W0416 01:02:31.032670   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:31.032684   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:31.032753   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:31.069345   62139 cri.go:89] found id: ""
	I0416 01:02:31.069373   62139 logs.go:276] 0 containers: []
	W0416 01:02:31.069382   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:31.069392   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:31.069408   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:31.123989   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:31.124017   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:31.140998   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:31.141032   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:31.217496   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:31.218063   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:31.218098   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:31.296811   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:31.296858   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:28.720502   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:30.720709   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:28.329978   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:30.829406   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:28.121200   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:30.620659   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:33.842516   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:33.872440   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:33.872518   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:33.909287   62139 cri.go:89] found id: ""
	I0416 01:02:33.909314   62139 logs.go:276] 0 containers: []
	W0416 01:02:33.909324   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:33.909329   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:33.909388   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:33.947531   62139 cri.go:89] found id: ""
	I0416 01:02:33.947566   62139 logs.go:276] 0 containers: []
	W0416 01:02:33.947576   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:33.947584   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:33.947642   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:33.990084   62139 cri.go:89] found id: ""
	I0416 01:02:33.990118   62139 logs.go:276] 0 containers: []
	W0416 01:02:33.990129   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:33.990136   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:33.990200   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:34.024121   62139 cri.go:89] found id: ""
	I0416 01:02:34.024151   62139 logs.go:276] 0 containers: []
	W0416 01:02:34.024159   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:34.024165   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:34.024218   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:34.061075   62139 cri.go:89] found id: ""
	I0416 01:02:34.061104   62139 logs.go:276] 0 containers: []
	W0416 01:02:34.061111   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:34.061116   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:34.061179   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:34.097887   62139 cri.go:89] found id: ""
	I0416 01:02:34.097928   62139 logs.go:276] 0 containers: []
	W0416 01:02:34.097938   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:34.097946   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:34.098007   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:34.135541   62139 cri.go:89] found id: ""
	I0416 01:02:34.135567   62139 logs.go:276] 0 containers: []
	W0416 01:02:34.135577   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:34.135585   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:34.135637   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:34.170884   62139 cri.go:89] found id: ""
	I0416 01:02:34.170910   62139 logs.go:276] 0 containers: []
	W0416 01:02:34.170920   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:34.170931   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:34.170946   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:34.223465   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:34.223494   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:34.238898   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:34.238929   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:34.316916   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:34.316946   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:34.316962   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:34.401564   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:34.401600   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:36.945789   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:36.959707   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:36.959774   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:36.994463   62139 cri.go:89] found id: ""
	I0416 01:02:36.994497   62139 logs.go:276] 0 containers: []
	W0416 01:02:36.994508   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:36.994515   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:36.994579   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:37.028847   62139 cri.go:89] found id: ""
	I0416 01:02:37.028877   62139 logs.go:276] 0 containers: []
	W0416 01:02:37.028887   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:37.028893   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:37.028954   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:37.061841   62139 cri.go:89] found id: ""
	I0416 01:02:37.061872   62139 logs.go:276] 0 containers: []
	W0416 01:02:37.061882   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:37.061889   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:37.061954   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:37.098460   62139 cri.go:89] found id: ""
	I0416 01:02:37.098485   62139 logs.go:276] 0 containers: []
	W0416 01:02:37.098495   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:37.098502   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:37.098569   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:33.220794   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:35.221650   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:37.222563   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:32.829517   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:34.829762   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:36.831773   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:33.121842   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:35.620647   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:37.620795   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:37.133016   62139 cri.go:89] found id: ""
	I0416 01:02:37.133044   62139 logs.go:276] 0 containers: []
	W0416 01:02:37.133053   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:37.133059   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:37.133122   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:37.170252   62139 cri.go:89] found id: ""
	I0416 01:02:37.170276   62139 logs.go:276] 0 containers: []
	W0416 01:02:37.170286   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:37.170293   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:37.170354   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:37.206114   62139 cri.go:89] found id: ""
	I0416 01:02:37.206141   62139 logs.go:276] 0 containers: []
	W0416 01:02:37.206148   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:37.206153   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:37.206208   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:37.241353   62139 cri.go:89] found id: ""
	I0416 01:02:37.241383   62139 logs.go:276] 0 containers: []
	W0416 01:02:37.241395   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:37.241405   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:37.241429   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:37.293452   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:37.293483   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:37.309885   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:37.309926   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:37.385455   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:37.385481   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:37.385496   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:37.463064   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:37.463101   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:40.008717   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:40.022249   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:40.022327   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:40.064444   62139 cri.go:89] found id: ""
	I0416 01:02:40.064479   62139 logs.go:276] 0 containers: []
	W0416 01:02:40.064490   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:40.064497   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:40.064545   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:40.100326   62139 cri.go:89] found id: ""
	I0416 01:02:40.100353   62139 logs.go:276] 0 containers: []
	W0416 01:02:40.100361   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:40.100366   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:40.100413   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:40.138818   62139 cri.go:89] found id: ""
	I0416 01:02:40.138857   62139 logs.go:276] 0 containers: []
	W0416 01:02:40.138869   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:40.138878   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:40.138928   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:40.184203   62139 cri.go:89] found id: ""
	I0416 01:02:40.184234   62139 logs.go:276] 0 containers: []
	W0416 01:02:40.184244   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:40.184252   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:40.184311   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:40.221968   62139 cri.go:89] found id: ""
	I0416 01:02:40.221991   62139 logs.go:276] 0 containers: []
	W0416 01:02:40.221998   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:40.222007   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:40.222088   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:40.265621   62139 cri.go:89] found id: ""
	I0416 01:02:40.265643   62139 logs.go:276] 0 containers: []
	W0416 01:02:40.265650   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:40.265657   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:40.265723   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:40.314121   62139 cri.go:89] found id: ""
	I0416 01:02:40.314152   62139 logs.go:276] 0 containers: []
	W0416 01:02:40.314163   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:40.314170   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:40.314229   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:40.359788   62139 cri.go:89] found id: ""
	I0416 01:02:40.359825   62139 logs.go:276] 0 containers: []
	W0416 01:02:40.359836   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:40.359849   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:40.359863   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:40.431678   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:40.431718   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:40.449847   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:40.449877   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:40.524271   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:40.524297   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:40.524309   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:40.601398   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:40.601433   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:39.720606   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:41.721437   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:39.330974   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:41.830050   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:40.120785   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:42.123996   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:43.145431   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:43.160269   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:43.160338   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:43.196603   62139 cri.go:89] found id: ""
	I0416 01:02:43.196637   62139 logs.go:276] 0 containers: []
	W0416 01:02:43.196648   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:43.196655   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:43.196716   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:43.235863   62139 cri.go:89] found id: ""
	I0416 01:02:43.235893   62139 logs.go:276] 0 containers: []
	W0416 01:02:43.235905   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:43.235911   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:43.235971   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:43.271408   62139 cri.go:89] found id: ""
	I0416 01:02:43.271437   62139 logs.go:276] 0 containers: []
	W0416 01:02:43.271444   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:43.271450   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:43.271512   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:43.310931   62139 cri.go:89] found id: ""
	I0416 01:02:43.310958   62139 logs.go:276] 0 containers: []
	W0416 01:02:43.310965   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:43.310971   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:43.311032   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:43.347472   62139 cri.go:89] found id: ""
	I0416 01:02:43.347502   62139 logs.go:276] 0 containers: []
	W0416 01:02:43.347512   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:43.347520   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:43.347581   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:43.387326   62139 cri.go:89] found id: ""
	I0416 01:02:43.387361   62139 logs.go:276] 0 containers: []
	W0416 01:02:43.387372   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:43.387429   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:43.387506   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:43.425099   62139 cri.go:89] found id: ""
	I0416 01:02:43.425122   62139 logs.go:276] 0 containers: []
	W0416 01:02:43.425130   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:43.425141   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:43.425208   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:43.461364   62139 cri.go:89] found id: ""
	I0416 01:02:43.461397   62139 logs.go:276] 0 containers: []
	W0416 01:02:43.461408   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:43.461419   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:43.461434   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:43.514520   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:43.514556   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:43.528740   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:43.528777   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:43.599010   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:43.599035   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:43.599051   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:43.682913   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:43.682959   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
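The block above is one iteration of the retry loop run by process 62139: it probes the CRI runtime for each control-plane container and, finding none, gathers kubelet, dmesg, describe-nodes, CRI-O and container-status diagnostics before trying again. A minimal bash sketch of that probe-and-gather cycle, assembled only from the commands visible in the log and meant to be run on the minikube node (e.g. via minikube ssh), looks roughly like this; it is an illustration, not minikube's own implementation:

	#!/usr/bin/env bash
	# Illustrative approximation of the probe-and-gather cycle above (not minikube code).
	# Assumes crictl, journalctl and the bundled kubectl are present on the node,
	# as the log lines themselves show.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  [ -z "$ids" ] && echo "No container was found matching \"$name\""
	done
	# Diagnostics collected when nothing is running yet:
	sudo journalctl -u kubelet -n 400
	sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig   # refused while the apiserver is down
	sudo journalctl -u crio -n 400
	sudo crictl ps -a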
	I0416 01:02:46.231398   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:46.260247   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:46.260338   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:46.304498   62139 cri.go:89] found id: ""
	I0416 01:02:46.304521   62139 logs.go:276] 0 containers: []
	W0416 01:02:46.304528   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:46.304534   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:46.304600   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:46.364055   62139 cri.go:89] found id: ""
	I0416 01:02:46.364081   62139 logs.go:276] 0 containers: []
	W0416 01:02:46.364090   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:46.364098   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:46.364167   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:46.412395   62139 cri.go:89] found id: ""
	I0416 01:02:46.412437   62139 logs.go:276] 0 containers: []
	W0416 01:02:46.412475   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:46.412510   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:46.412584   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:46.453669   62139 cri.go:89] found id: ""
	I0416 01:02:46.453698   62139 logs.go:276] 0 containers: []
	W0416 01:02:46.453709   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:46.453716   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:46.453766   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:46.490667   62139 cri.go:89] found id: ""
	I0416 01:02:46.490699   62139 logs.go:276] 0 containers: []
	W0416 01:02:46.490709   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:46.490715   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:46.490766   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:46.529405   62139 cri.go:89] found id: ""
	I0416 01:02:46.529443   62139 logs.go:276] 0 containers: []
	W0416 01:02:46.529460   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:46.529467   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:46.529527   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:46.565359   62139 cri.go:89] found id: ""
	I0416 01:02:46.565384   62139 logs.go:276] 0 containers: []
	W0416 01:02:46.565391   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:46.565396   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:46.565451   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:46.609381   62139 cri.go:89] found id: ""
	I0416 01:02:46.609406   62139 logs.go:276] 0 containers: []
	W0416 01:02:46.609413   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:46.609421   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:46.609432   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:46.663080   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:46.663112   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:46.677303   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:46.677338   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:46.750134   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:46.750163   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:46.750175   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:46.829395   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:46.829434   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:43.721477   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:46.220462   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:43.831829   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:46.329333   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:44.619712   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:46.621271   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
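The interleaved pod_ready lines come from three other test processes (61267, 61500 and 62747) that are each waiting for a metrics-server pod in the kube-system namespace to report Ready. A hedged, manual equivalent of that check, using only standard kubectl jsonpath (this is not the test's own code), would be:

	# Print the Ready condition of one of the metrics-server pods named in the log;
	# it stays "False" until the pod's containers pass their readiness probes.
	kubectl --namespace kube-system get pod metrics-server-57f55c9bc5-9cnv2 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

Run it against the kubeconfig or context of the cluster in question.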
	I0416 01:02:49.374356   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:49.390674   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:49.390753   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:49.427968   62139 cri.go:89] found id: ""
	I0416 01:02:49.427993   62139 logs.go:276] 0 containers: []
	W0416 01:02:49.428000   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:49.428005   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:49.428058   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:49.461821   62139 cri.go:89] found id: ""
	I0416 01:02:49.461850   62139 logs.go:276] 0 containers: []
	W0416 01:02:49.461857   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:49.461863   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:49.461918   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:49.496305   62139 cri.go:89] found id: ""
	I0416 01:02:49.496356   62139 logs.go:276] 0 containers: []
	W0416 01:02:49.496364   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:49.496369   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:49.496429   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:49.536096   62139 cri.go:89] found id: ""
	I0416 01:02:49.536122   62139 logs.go:276] 0 containers: []
	W0416 01:02:49.536129   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:49.536134   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:49.536194   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:49.572078   62139 cri.go:89] found id: ""
	I0416 01:02:49.572106   62139 logs.go:276] 0 containers: []
	W0416 01:02:49.572115   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:49.572122   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:49.572181   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:49.607803   62139 cri.go:89] found id: ""
	I0416 01:02:49.607835   62139 logs.go:276] 0 containers: []
	W0416 01:02:49.607847   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:49.607861   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:49.607915   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:49.651245   62139 cri.go:89] found id: ""
	I0416 01:02:49.651272   62139 logs.go:276] 0 containers: []
	W0416 01:02:49.651280   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:49.651285   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:49.651332   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:49.693587   62139 cri.go:89] found id: ""
	I0416 01:02:49.693612   62139 logs.go:276] 0 containers: []
	W0416 01:02:49.693622   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:49.693632   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:49.693646   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:49.750003   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:49.750032   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:49.764447   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:49.764472   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:49.844739   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:49.844764   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:49.844780   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:49.924260   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:49.924294   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:48.220753   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:50.220986   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:48.330946   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:50.829409   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:49.120516   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:51.619516   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:52.467399   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:52.481656   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:52.481729   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:52.518506   62139 cri.go:89] found id: ""
	I0416 01:02:52.518531   62139 logs.go:276] 0 containers: []
	W0416 01:02:52.518537   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:52.518544   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:52.518599   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:52.554799   62139 cri.go:89] found id: ""
	I0416 01:02:52.554820   62139 logs.go:276] 0 containers: []
	W0416 01:02:52.554827   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:52.554832   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:52.554888   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:52.597236   62139 cri.go:89] found id: ""
	I0416 01:02:52.597265   62139 logs.go:276] 0 containers: []
	W0416 01:02:52.597272   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:52.597278   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:52.597335   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:52.635544   62139 cri.go:89] found id: ""
	I0416 01:02:52.635567   62139 logs.go:276] 0 containers: []
	W0416 01:02:52.635578   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:52.635585   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:52.635639   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:52.672715   62139 cri.go:89] found id: ""
	I0416 01:02:52.672739   62139 logs.go:276] 0 containers: []
	W0416 01:02:52.672746   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:52.672751   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:52.672808   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:52.711600   62139 cri.go:89] found id: ""
	I0416 01:02:52.711631   62139 logs.go:276] 0 containers: []
	W0416 01:02:52.711640   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:52.711648   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:52.711718   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:52.750372   62139 cri.go:89] found id: ""
	I0416 01:02:52.750405   62139 logs.go:276] 0 containers: []
	W0416 01:02:52.750416   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:52.750423   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:52.750486   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:52.786651   62139 cri.go:89] found id: ""
	I0416 01:02:52.786678   62139 logs.go:276] 0 containers: []
	W0416 01:02:52.786688   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:52.786698   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:52.786712   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:52.840262   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:52.840296   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:52.854734   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:52.854762   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:52.931182   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:52.931211   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:52.931226   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:53.007023   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:53.007061   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:55.548305   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:55.562483   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:55.562562   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:55.599480   62139 cri.go:89] found id: ""
	I0416 01:02:55.599504   62139 logs.go:276] 0 containers: []
	W0416 01:02:55.599511   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:55.599517   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:55.599573   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:55.636832   62139 cri.go:89] found id: ""
	I0416 01:02:55.636862   62139 logs.go:276] 0 containers: []
	W0416 01:02:55.636873   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:55.636879   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:55.636940   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:55.676211   62139 cri.go:89] found id: ""
	I0416 01:02:55.676240   62139 logs.go:276] 0 containers: []
	W0416 01:02:55.676250   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:55.676256   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:55.676318   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:55.713498   62139 cri.go:89] found id: ""
	I0416 01:02:55.713527   62139 logs.go:276] 0 containers: []
	W0416 01:02:55.713537   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:55.713544   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:55.713604   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:55.754239   62139 cri.go:89] found id: ""
	I0416 01:02:55.754276   62139 logs.go:276] 0 containers: []
	W0416 01:02:55.754284   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:55.754301   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:55.754355   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:55.792073   62139 cri.go:89] found id: ""
	I0416 01:02:55.792106   62139 logs.go:276] 0 containers: []
	W0416 01:02:55.792117   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:55.792125   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:55.792191   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:55.829635   62139 cri.go:89] found id: ""
	I0416 01:02:55.829665   62139 logs.go:276] 0 containers: []
	W0416 01:02:55.829676   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:55.829683   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:55.829742   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:55.876417   62139 cri.go:89] found id: ""
	I0416 01:02:55.876443   62139 logs.go:276] 0 containers: []
	W0416 01:02:55.876450   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:55.876458   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:55.876471   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:55.926670   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:55.926707   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:55.941660   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:55.941696   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:56.018776   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:56.018806   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:56.018820   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:56.097335   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:56.097378   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:52.720703   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:55.221614   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:52.830970   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:55.329886   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:53.620969   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:56.122135   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:58.642188   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:58.655537   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:58.655605   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:58.692091   62139 cri.go:89] found id: ""
	I0416 01:02:58.692116   62139 logs.go:276] 0 containers: []
	W0416 01:02:58.692124   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:58.692129   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:58.692191   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:58.729434   62139 cri.go:89] found id: ""
	I0416 01:02:58.729461   62139 logs.go:276] 0 containers: []
	W0416 01:02:58.729472   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:58.729491   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:58.729568   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:58.765879   62139 cri.go:89] found id: ""
	I0416 01:02:58.765907   62139 logs.go:276] 0 containers: []
	W0416 01:02:58.765916   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:58.765924   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:58.765987   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:58.802285   62139 cri.go:89] found id: ""
	I0416 01:02:58.802323   62139 logs.go:276] 0 containers: []
	W0416 01:02:58.802334   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:58.802342   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:58.802399   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:58.841357   62139 cri.go:89] found id: ""
	I0416 01:02:58.841385   62139 logs.go:276] 0 containers: []
	W0416 01:02:58.841396   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:58.841403   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:58.841464   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:58.876982   62139 cri.go:89] found id: ""
	I0416 01:02:58.877022   62139 logs.go:276] 0 containers: []
	W0416 01:02:58.877032   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:58.877040   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:58.877108   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:58.915563   62139 cri.go:89] found id: ""
	I0416 01:02:58.915596   62139 logs.go:276] 0 containers: []
	W0416 01:02:58.915607   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:58.915614   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:58.915683   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:58.951268   62139 cri.go:89] found id: ""
	I0416 01:02:58.951303   62139 logs.go:276] 0 containers: []
	W0416 01:02:58.951313   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:58.951324   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:58.951341   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:59.004673   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:59.004710   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:59.019393   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:59.019423   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:59.091587   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:59.091612   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:59.091632   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:59.169623   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:59.169655   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:01.710597   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:01.724394   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:01.724463   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:01.761577   62139 cri.go:89] found id: ""
	I0416 01:03:01.761605   62139 logs.go:276] 0 containers: []
	W0416 01:03:01.761616   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:01.761624   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:01.761684   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:01.797467   62139 cri.go:89] found id: ""
	I0416 01:03:01.797498   62139 logs.go:276] 0 containers: []
	W0416 01:03:01.797508   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:01.797515   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:01.797582   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:01.839910   62139 cri.go:89] found id: ""
	I0416 01:03:01.839940   62139 logs.go:276] 0 containers: []
	W0416 01:03:01.839950   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:01.839958   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:01.840019   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:01.879572   62139 cri.go:89] found id: ""
	I0416 01:03:01.879599   62139 logs.go:276] 0 containers: []
	W0416 01:03:01.879611   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:01.879617   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:01.879664   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:01.920190   62139 cri.go:89] found id: ""
	I0416 01:03:01.920222   62139 logs.go:276] 0 containers: []
	W0416 01:03:01.920234   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:01.920242   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:01.920300   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:01.957389   62139 cri.go:89] found id: ""
	I0416 01:03:01.957418   62139 logs.go:276] 0 containers: []
	W0416 01:03:01.957428   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:01.957436   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:01.957507   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:01.998730   62139 cri.go:89] found id: ""
	I0416 01:03:01.998754   62139 logs.go:276] 0 containers: []
	W0416 01:03:01.998762   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:01.998767   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:01.998812   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:02.036062   62139 cri.go:89] found id: ""
	I0416 01:03:02.036094   62139 logs.go:276] 0 containers: []
	W0416 01:03:02.036103   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:02.036112   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:02.036125   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:02.089109   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:02.089149   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:57.720792   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:00.219899   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:02.220048   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:57.832016   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:00.328867   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:02.330238   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:58.620416   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:01.121496   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:02.103312   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:02.103342   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:02.174034   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:02.174056   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:02.174069   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:02.249526   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:02.249555   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:04.795314   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:04.808294   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:04.808367   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:04.848795   62139 cri.go:89] found id: ""
	I0416 01:03:04.848825   62139 logs.go:276] 0 containers: []
	W0416 01:03:04.848849   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:04.848857   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:04.848928   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:04.886442   62139 cri.go:89] found id: ""
	I0416 01:03:04.886477   62139 logs.go:276] 0 containers: []
	W0416 01:03:04.886488   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:04.886502   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:04.886572   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:04.929183   62139 cri.go:89] found id: ""
	I0416 01:03:04.929215   62139 logs.go:276] 0 containers: []
	W0416 01:03:04.929226   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:04.929234   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:04.929297   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:04.965134   62139 cri.go:89] found id: ""
	I0416 01:03:04.965172   62139 logs.go:276] 0 containers: []
	W0416 01:03:04.965184   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:04.965191   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:04.965247   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:05.001346   62139 cri.go:89] found id: ""
	I0416 01:03:05.001373   62139 logs.go:276] 0 containers: []
	W0416 01:03:05.001381   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:05.001387   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:05.001434   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:05.039181   62139 cri.go:89] found id: ""
	I0416 01:03:05.039210   62139 logs.go:276] 0 containers: []
	W0416 01:03:05.039219   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:05.039224   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:05.039289   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:05.073451   62139 cri.go:89] found id: ""
	I0416 01:03:05.073479   62139 logs.go:276] 0 containers: []
	W0416 01:03:05.073487   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:05.073494   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:05.073555   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:05.108466   62139 cri.go:89] found id: ""
	I0416 01:03:05.108495   62139 logs.go:276] 0 containers: []
	W0416 01:03:05.108510   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:05.108520   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:05.108537   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:05.162725   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:05.162765   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:05.178152   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:05.178183   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:05.255122   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:05.255147   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:05.255161   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:05.331274   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:05.331309   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:04.220320   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:06.220475   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:04.331381   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:06.830143   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:03.620275   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:06.121293   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:07.882980   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:07.896311   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:07.896372   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:07.934632   62139 cri.go:89] found id: ""
	I0416 01:03:07.934661   62139 logs.go:276] 0 containers: []
	W0416 01:03:07.934671   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:07.934677   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:07.934745   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:07.971463   62139 cri.go:89] found id: ""
	I0416 01:03:07.971495   62139 logs.go:276] 0 containers: []
	W0416 01:03:07.971511   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:07.971518   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:07.971581   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:08.006808   62139 cri.go:89] found id: ""
	I0416 01:03:08.006839   62139 logs.go:276] 0 containers: []
	W0416 01:03:08.006847   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:08.006852   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:08.006912   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:08.043051   62139 cri.go:89] found id: ""
	I0416 01:03:08.043082   62139 logs.go:276] 0 containers: []
	W0416 01:03:08.043089   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:08.043095   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:08.043155   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:08.078602   62139 cri.go:89] found id: ""
	I0416 01:03:08.078638   62139 logs.go:276] 0 containers: []
	W0416 01:03:08.078647   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:08.078655   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:08.078724   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:08.115264   62139 cri.go:89] found id: ""
	I0416 01:03:08.115293   62139 logs.go:276] 0 containers: []
	W0416 01:03:08.115303   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:08.115311   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:08.115378   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:08.152782   62139 cri.go:89] found id: ""
	I0416 01:03:08.152814   62139 logs.go:276] 0 containers: []
	W0416 01:03:08.152821   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:08.152826   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:08.152875   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:08.193484   62139 cri.go:89] found id: ""
	I0416 01:03:08.193506   62139 logs.go:276] 0 containers: []
	W0416 01:03:08.193513   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:08.193522   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:08.193532   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:08.248796   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:08.248831   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:08.266054   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:08.266083   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:08.343470   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:08.343501   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:08.343515   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:08.430335   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:08.430383   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:10.972540   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:10.986911   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:10.986984   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:11.024905   62139 cri.go:89] found id: ""
	I0416 01:03:11.024939   62139 logs.go:276] 0 containers: []
	W0416 01:03:11.024951   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:11.024958   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:11.025011   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:11.058629   62139 cri.go:89] found id: ""
	I0416 01:03:11.058654   62139 logs.go:276] 0 containers: []
	W0416 01:03:11.058662   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:11.058667   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:11.058721   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:11.093277   62139 cri.go:89] found id: ""
	I0416 01:03:11.093308   62139 logs.go:276] 0 containers: []
	W0416 01:03:11.093317   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:11.093325   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:11.093386   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:11.131883   62139 cri.go:89] found id: ""
	I0416 01:03:11.131912   62139 logs.go:276] 0 containers: []
	W0416 01:03:11.131924   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:11.131934   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:11.132004   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:11.175142   62139 cri.go:89] found id: ""
	I0416 01:03:11.175169   62139 logs.go:276] 0 containers: []
	W0416 01:03:11.175179   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:11.175186   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:11.175236   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:11.209985   62139 cri.go:89] found id: ""
	I0416 01:03:11.210020   62139 logs.go:276] 0 containers: []
	W0416 01:03:11.210031   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:11.210039   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:11.210110   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:11.246086   62139 cri.go:89] found id: ""
	I0416 01:03:11.246119   62139 logs.go:276] 0 containers: []
	W0416 01:03:11.246129   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:11.246137   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:11.246199   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:11.286979   62139 cri.go:89] found id: ""
	I0416 01:03:11.287007   62139 logs.go:276] 0 containers: []
	W0416 01:03:11.287019   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:11.287037   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:11.287051   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:11.364522   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:11.364557   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:11.410343   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:11.410375   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:11.459671   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:11.459703   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:11.476163   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:11.476193   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:11.549544   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
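The recurring "connection to the server localhost:8443 was refused" is consistent with the crictl probes above: no kube-apiserver container exists, so nothing is listening on the apiserver port. Two quick checks from the node would confirm this; they assume curl and ss are available in the guest image, which the log itself does not show:

	# Illustrative only: verify that nothing listens on 8443 and that the health
	# endpoint is unreachable while kube-apiserver is down.
	sudo ss -ltn 'sport = :8443'             # empty output while the apiserver is down
	curl -sk https://localhost:8443/healthz  # fails with "connection refused"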
	I0416 01:03:08.220881   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:10.720607   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:09.329882   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:11.330570   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:08.620817   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:11.120789   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:14.050433   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:14.065375   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:14.065431   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:14.105548   62139 cri.go:89] found id: ""
	I0416 01:03:14.105571   62139 logs.go:276] 0 containers: []
	W0416 01:03:14.105579   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:14.105583   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:14.105644   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:14.146891   62139 cri.go:89] found id: ""
	I0416 01:03:14.146915   62139 logs.go:276] 0 containers: []
	W0416 01:03:14.146922   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:14.146927   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:14.146972   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:14.183905   62139 cri.go:89] found id: ""
	I0416 01:03:14.183937   62139 logs.go:276] 0 containers: []
	W0416 01:03:14.183948   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:14.183954   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:14.184002   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:14.219878   62139 cri.go:89] found id: ""
	I0416 01:03:14.219905   62139 logs.go:276] 0 containers: []
	W0416 01:03:14.219915   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:14.219922   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:14.219978   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:14.256284   62139 cri.go:89] found id: ""
	I0416 01:03:14.256310   62139 logs.go:276] 0 containers: []
	W0416 01:03:14.256317   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:14.256323   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:14.256381   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:14.295932   62139 cri.go:89] found id: ""
	I0416 01:03:14.295958   62139 logs.go:276] 0 containers: []
	W0416 01:03:14.295966   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:14.295971   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:14.296025   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:14.333202   62139 cri.go:89] found id: ""
	I0416 01:03:14.333226   62139 logs.go:276] 0 containers: []
	W0416 01:03:14.333235   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:14.333242   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:14.333302   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:14.370034   62139 cri.go:89] found id: ""
	I0416 01:03:14.370059   62139 logs.go:276] 0 containers: []
	W0416 01:03:14.370066   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:14.370074   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:14.370092   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:14.424626   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:14.424669   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:14.441842   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:14.441872   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:14.515899   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:14.515926   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:14.515944   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:14.599956   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:14.599991   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:12.720896   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:15.220260   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:13.829944   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:16.328971   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:13.621084   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:16.120767   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:17.157610   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:17.171737   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:17.171800   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:17.214327   62139 cri.go:89] found id: ""
	I0416 01:03:17.214354   62139 logs.go:276] 0 containers: []
	W0416 01:03:17.214364   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:17.214371   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:17.214433   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:17.255896   62139 cri.go:89] found id: ""
	I0416 01:03:17.255924   62139 logs.go:276] 0 containers: []
	W0416 01:03:17.255939   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:17.255946   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:17.256005   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:17.298470   62139 cri.go:89] found id: ""
	I0416 01:03:17.298498   62139 logs.go:276] 0 containers: []
	W0416 01:03:17.298512   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:17.298520   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:17.298580   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:17.338810   62139 cri.go:89] found id: ""
	I0416 01:03:17.338834   62139 logs.go:276] 0 containers: []
	W0416 01:03:17.338842   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:17.338847   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:17.338899   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:17.375980   62139 cri.go:89] found id: ""
	I0416 01:03:17.376012   62139 logs.go:276] 0 containers: []
	W0416 01:03:17.376019   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:17.376024   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:17.376076   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:17.411374   62139 cri.go:89] found id: ""
	I0416 01:03:17.411400   62139 logs.go:276] 0 containers: []
	W0416 01:03:17.411408   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:17.411413   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:17.411463   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:17.452916   62139 cri.go:89] found id: ""
	I0416 01:03:17.452951   62139 logs.go:276] 0 containers: []
	W0416 01:03:17.452962   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:17.452969   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:17.453037   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:17.492459   62139 cri.go:89] found id: ""
	I0416 01:03:17.492489   62139 logs.go:276] 0 containers: []
	W0416 01:03:17.492500   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:17.492512   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:17.492527   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:17.541780   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:17.541814   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:17.558831   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:17.558867   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:17.635332   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:17.635351   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:17.635362   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:17.715778   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:17.715809   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:20.260621   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:20.274721   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:20.274791   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:20.311965   62139 cri.go:89] found id: ""
	I0416 01:03:20.311991   62139 logs.go:276] 0 containers: []
	W0416 01:03:20.312002   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:20.312009   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:20.312069   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:20.350316   62139 cri.go:89] found id: ""
	I0416 01:03:20.350346   62139 logs.go:276] 0 containers: []
	W0416 01:03:20.350356   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:20.350363   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:20.350414   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:20.404666   62139 cri.go:89] found id: ""
	I0416 01:03:20.404692   62139 logs.go:276] 0 containers: []
	W0416 01:03:20.404700   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:20.404705   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:20.404753   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:20.441223   62139 cri.go:89] found id: ""
	I0416 01:03:20.441254   62139 logs.go:276] 0 containers: []
	W0416 01:03:20.441267   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:20.441275   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:20.441340   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:20.480535   62139 cri.go:89] found id: ""
	I0416 01:03:20.480596   62139 logs.go:276] 0 containers: []
	W0416 01:03:20.480606   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:20.480613   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:20.480680   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:20.517520   62139 cri.go:89] found id: ""
	I0416 01:03:20.517543   62139 logs.go:276] 0 containers: []
	W0416 01:03:20.517550   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:20.517556   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:20.517614   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:20.556067   62139 cri.go:89] found id: ""
	I0416 01:03:20.556097   62139 logs.go:276] 0 containers: []
	W0416 01:03:20.556107   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:20.556114   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:20.556177   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:20.594901   62139 cri.go:89] found id: ""
	I0416 01:03:20.594932   62139 logs.go:276] 0 containers: []
	W0416 01:03:20.594939   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:20.594947   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:20.594958   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:20.673759   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:20.673795   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:20.721407   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:20.721443   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:20.772957   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:20.772989   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:20.787902   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:20.787932   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:20.863445   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:17.721415   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:20.221042   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:18.329421   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:20.329949   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:22.330009   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:18.122678   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:20.621127   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:22.621692   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:23.363637   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:23.377916   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:23.377991   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:23.415642   62139 cri.go:89] found id: ""
	I0416 01:03:23.415671   62139 logs.go:276] 0 containers: []
	W0416 01:03:23.415679   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:23.415685   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:23.415732   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:23.452788   62139 cri.go:89] found id: ""
	I0416 01:03:23.452812   62139 logs.go:276] 0 containers: []
	W0416 01:03:23.452819   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:23.452829   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:23.452878   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:23.488758   62139 cri.go:89] found id: ""
	I0416 01:03:23.488785   62139 logs.go:276] 0 containers: []
	W0416 01:03:23.488794   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:23.488801   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:23.488862   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:23.526542   62139 cri.go:89] found id: ""
	I0416 01:03:23.526574   62139 logs.go:276] 0 containers: []
	W0416 01:03:23.526584   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:23.526592   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:23.526661   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:23.562481   62139 cri.go:89] found id: ""
	I0416 01:03:23.562505   62139 logs.go:276] 0 containers: []
	W0416 01:03:23.562512   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:23.562518   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:23.562579   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:23.599119   62139 cri.go:89] found id: ""
	I0416 01:03:23.599145   62139 logs.go:276] 0 containers: []
	W0416 01:03:23.599155   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:23.599162   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:23.599241   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:23.642445   62139 cri.go:89] found id: ""
	I0416 01:03:23.642474   62139 logs.go:276] 0 containers: []
	W0416 01:03:23.642485   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:23.642492   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:23.642557   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:23.678091   62139 cri.go:89] found id: ""
	I0416 01:03:23.678113   62139 logs.go:276] 0 containers: []
	W0416 01:03:23.678121   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:23.678129   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:23.678140   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:23.731668   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:23.731703   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:23.746413   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:23.746444   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:23.821885   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:23.821908   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:23.821923   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:23.901836   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:23.901872   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:26.444935   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:26.459240   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:26.459308   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:26.499208   62139 cri.go:89] found id: ""
	I0416 01:03:26.499237   62139 logs.go:276] 0 containers: []
	W0416 01:03:26.499249   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:26.499256   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:26.499318   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:26.536220   62139 cri.go:89] found id: ""
	I0416 01:03:26.536258   62139 logs.go:276] 0 containers: []
	W0416 01:03:26.536270   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:26.536277   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:26.536342   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:26.576217   62139 cri.go:89] found id: ""
	I0416 01:03:26.576241   62139 logs.go:276] 0 containers: []
	W0416 01:03:26.576249   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:26.576254   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:26.576314   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:26.612343   62139 cri.go:89] found id: ""
	I0416 01:03:26.612369   62139 logs.go:276] 0 containers: []
	W0416 01:03:26.612378   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:26.612385   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:26.612448   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:26.651323   62139 cri.go:89] found id: ""
	I0416 01:03:26.651353   62139 logs.go:276] 0 containers: []
	W0416 01:03:26.651365   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:26.651384   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:26.651453   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:26.688844   62139 cri.go:89] found id: ""
	I0416 01:03:26.688874   62139 logs.go:276] 0 containers: []
	W0416 01:03:26.688885   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:26.688891   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:26.688969   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:26.724362   62139 cri.go:89] found id: ""
	I0416 01:03:26.724387   62139 logs.go:276] 0 containers: []
	W0416 01:03:26.724395   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:26.724401   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:26.724455   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:26.767766   62139 cri.go:89] found id: ""
	I0416 01:03:26.767795   62139 logs.go:276] 0 containers: []
	W0416 01:03:26.767806   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:26.767816   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:26.767837   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:26.788269   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:26.788297   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:26.884802   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:26.884822   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:26.884834   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:26.964007   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:26.964044   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:27.003719   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:27.003745   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:22.720420   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:24.720865   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:26.721369   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:24.828766   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:26.830222   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:25.119674   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:27.620689   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:29.563218   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:29.579014   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:29.579078   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:29.620739   62139 cri.go:89] found id: ""
	I0416 01:03:29.620769   62139 logs.go:276] 0 containers: []
	W0416 01:03:29.620780   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:29.620787   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:29.620850   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:29.658165   62139 cri.go:89] found id: ""
	I0416 01:03:29.658192   62139 logs.go:276] 0 containers: []
	W0416 01:03:29.658199   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:29.658205   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:29.658252   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:29.693893   62139 cri.go:89] found id: ""
	I0416 01:03:29.693921   62139 logs.go:276] 0 containers: []
	W0416 01:03:29.693929   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:29.693935   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:29.693985   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:29.737808   62139 cri.go:89] found id: ""
	I0416 01:03:29.737836   62139 logs.go:276] 0 containers: []
	W0416 01:03:29.737846   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:29.737851   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:29.737910   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:29.777382   62139 cri.go:89] found id: ""
	I0416 01:03:29.777408   62139 logs.go:276] 0 containers: []
	W0416 01:03:29.777416   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:29.777422   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:29.777473   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:29.815633   62139 cri.go:89] found id: ""
	I0416 01:03:29.815659   62139 logs.go:276] 0 containers: []
	W0416 01:03:29.815668   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:29.815682   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:29.815743   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:29.858790   62139 cri.go:89] found id: ""
	I0416 01:03:29.858820   62139 logs.go:276] 0 containers: []
	W0416 01:03:29.858831   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:29.858839   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:29.858899   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:29.897085   62139 cri.go:89] found id: ""
	I0416 01:03:29.897120   62139 logs.go:276] 0 containers: []
	W0416 01:03:29.897131   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:29.897142   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:29.897169   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:29.951231   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:29.951266   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:29.965539   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:29.965565   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:30.045138   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:30.045170   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:30.045186   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:30.120575   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:30.120606   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:29.220073   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:31.221145   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:29.328625   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:31.329903   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:29.621401   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:32.120604   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:32.662210   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:32.675833   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:32.675903   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:32.712104   62139 cri.go:89] found id: ""
	I0416 01:03:32.712129   62139 logs.go:276] 0 containers: []
	W0416 01:03:32.712136   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:32.712141   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:32.712198   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:32.749617   62139 cri.go:89] found id: ""
	I0416 01:03:32.749644   62139 logs.go:276] 0 containers: []
	W0416 01:03:32.749652   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:32.749658   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:32.749723   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:32.785069   62139 cri.go:89] found id: ""
	I0416 01:03:32.785100   62139 logs.go:276] 0 containers: []
	W0416 01:03:32.785110   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:32.785116   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:32.785191   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:32.825871   62139 cri.go:89] found id: ""
	I0416 01:03:32.825912   62139 logs.go:276] 0 containers: []
	W0416 01:03:32.825922   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:32.825928   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:32.826008   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:32.868294   62139 cri.go:89] found id: ""
	I0416 01:03:32.868321   62139 logs.go:276] 0 containers: []
	W0416 01:03:32.868328   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:32.868334   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:32.868401   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:32.907764   62139 cri.go:89] found id: ""
	I0416 01:03:32.907789   62139 logs.go:276] 0 containers: []
	W0416 01:03:32.907796   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:32.907802   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:32.907870   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:32.946112   62139 cri.go:89] found id: ""
	I0416 01:03:32.946137   62139 logs.go:276] 0 containers: []
	W0416 01:03:32.946144   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:32.946155   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:32.946215   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:32.985343   62139 cri.go:89] found id: ""
	I0416 01:03:32.985374   62139 logs.go:276] 0 containers: []
	W0416 01:03:32.985385   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:32.985395   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:32.985415   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:33.063117   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:33.063154   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:33.113739   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:33.113773   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:33.163466   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:33.163508   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:33.178368   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:33.178397   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:33.259509   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:35.760004   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:35.774161   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:35.774237   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:35.812551   62139 cri.go:89] found id: ""
	I0416 01:03:35.812580   62139 logs.go:276] 0 containers: []
	W0416 01:03:35.812589   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:35.812594   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:35.812642   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:35.853134   62139 cri.go:89] found id: ""
	I0416 01:03:35.853177   62139 logs.go:276] 0 containers: []
	W0416 01:03:35.853187   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:35.853195   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:35.853255   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:35.894210   62139 cri.go:89] found id: ""
	I0416 01:03:35.894246   62139 logs.go:276] 0 containers: []
	W0416 01:03:35.894254   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:35.894259   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:35.894330   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:35.928986   62139 cri.go:89] found id: ""
	I0416 01:03:35.929010   62139 logs.go:276] 0 containers: []
	W0416 01:03:35.929019   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:35.929027   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:35.929090   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:35.970688   62139 cri.go:89] found id: ""
	I0416 01:03:35.970712   62139 logs.go:276] 0 containers: []
	W0416 01:03:35.970719   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:35.970725   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:35.970783   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:36.005744   62139 cri.go:89] found id: ""
	I0416 01:03:36.005771   62139 logs.go:276] 0 containers: []
	W0416 01:03:36.005778   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:36.005783   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:36.005829   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:36.044932   62139 cri.go:89] found id: ""
	I0416 01:03:36.044966   62139 logs.go:276] 0 containers: []
	W0416 01:03:36.044977   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:36.044984   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:36.045051   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:36.080488   62139 cri.go:89] found id: ""
	I0416 01:03:36.080516   62139 logs.go:276] 0 containers: []
	W0416 01:03:36.080527   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:36.080538   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:36.080552   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:36.132956   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:36.133000   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:36.147070   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:36.147097   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:36.226640   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:36.226670   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:36.226684   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:36.307205   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:36.307249   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:33.221952   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:35.720745   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:33.828768   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:35.830452   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:34.120695   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:36.619511   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:38.849685   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:38.863817   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:38.863897   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:38.902418   62139 cri.go:89] found id: ""
	I0416 01:03:38.902445   62139 logs.go:276] 0 containers: []
	W0416 01:03:38.902455   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:38.902462   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:38.902533   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:38.937811   62139 cri.go:89] found id: ""
	I0416 01:03:38.937838   62139 logs.go:276] 0 containers: []
	W0416 01:03:38.937845   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:38.937850   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:38.937900   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:38.972380   62139 cri.go:89] found id: ""
	I0416 01:03:38.972403   62139 logs.go:276] 0 containers: []
	W0416 01:03:38.972411   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:38.972416   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:38.972466   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:39.007572   62139 cri.go:89] found id: ""
	I0416 01:03:39.007595   62139 logs.go:276] 0 containers: []
	W0416 01:03:39.007603   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:39.007608   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:39.007651   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:39.049355   62139 cri.go:89] found id: ""
	I0416 01:03:39.049382   62139 logs.go:276] 0 containers: []
	W0416 01:03:39.049391   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:39.049398   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:39.049459   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:39.084535   62139 cri.go:89] found id: ""
	I0416 01:03:39.084565   62139 logs.go:276] 0 containers: []
	W0416 01:03:39.084574   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:39.084581   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:39.084645   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:39.125027   62139 cri.go:89] found id: ""
	I0416 01:03:39.125055   62139 logs.go:276] 0 containers: []
	W0416 01:03:39.125073   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:39.125080   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:39.125136   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:39.164506   62139 cri.go:89] found id: ""
	I0416 01:03:39.164537   62139 logs.go:276] 0 containers: []
	W0416 01:03:39.164547   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:39.164557   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:39.164573   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:39.203447   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:39.203483   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:39.259087   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:39.259122   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:39.273611   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:39.273637   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:39.352372   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:39.352392   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:39.352407   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:41.938575   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:41.952937   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:41.953019   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:41.990771   62139 cri.go:89] found id: ""
	I0416 01:03:41.990802   62139 logs.go:276] 0 containers: []
	W0416 01:03:41.990811   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:41.990819   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:41.990881   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:42.027338   62139 cri.go:89] found id: ""
	I0416 01:03:42.027367   62139 logs.go:276] 0 containers: []
	W0416 01:03:42.027374   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:42.027379   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:42.027431   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:42.068348   62139 cri.go:89] found id: ""
	I0416 01:03:42.068377   62139 logs.go:276] 0 containers: []
	W0416 01:03:42.068387   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:42.068394   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:42.068457   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:38.220198   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:40.220481   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:42.221383   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:38.330729   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:40.831615   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:38.620021   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:40.620641   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:42.620702   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:42.108157   62139 cri.go:89] found id: ""
	I0416 01:03:42.108181   62139 logs.go:276] 0 containers: []
	W0416 01:03:42.108187   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:42.108193   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:42.108244   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:42.149749   62139 cri.go:89] found id: ""
	I0416 01:03:42.149770   62139 logs.go:276] 0 containers: []
	W0416 01:03:42.149777   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:42.149784   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:42.149848   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:42.185322   62139 cri.go:89] found id: ""
	I0416 01:03:42.185349   62139 logs.go:276] 0 containers: []
	W0416 01:03:42.185360   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:42.185368   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:42.185435   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:42.224334   62139 cri.go:89] found id: ""
	I0416 01:03:42.224359   62139 logs.go:276] 0 containers: []
	W0416 01:03:42.224370   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:42.224376   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:42.224435   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:42.263466   62139 cri.go:89] found id: ""
	I0416 01:03:42.263494   62139 logs.go:276] 0 containers: []
	W0416 01:03:42.263502   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:42.263509   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:42.263522   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:42.315106   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:42.315139   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:42.329394   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:42.329425   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:42.405267   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:42.405305   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:42.405321   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:42.486126   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:42.486168   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:45.027718   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:45.042387   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:45.042453   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:45.080790   62139 cri.go:89] found id: ""
	I0416 01:03:45.080814   62139 logs.go:276] 0 containers: []
	W0416 01:03:45.080823   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:45.080829   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:45.080875   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:45.121278   62139 cri.go:89] found id: ""
	I0416 01:03:45.121306   62139 logs.go:276] 0 containers: []
	W0416 01:03:45.121317   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:45.121324   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:45.121383   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:45.158076   62139 cri.go:89] found id: ""
	I0416 01:03:45.158099   62139 logs.go:276] 0 containers: []
	W0416 01:03:45.158107   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:45.158116   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:45.158162   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:45.195577   62139 cri.go:89] found id: ""
	I0416 01:03:45.195608   62139 logs.go:276] 0 containers: []
	W0416 01:03:45.195619   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:45.195627   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:45.195685   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:45.239230   62139 cri.go:89] found id: ""
	I0416 01:03:45.239257   62139 logs.go:276] 0 containers: []
	W0416 01:03:45.239267   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:45.239275   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:45.239326   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:45.279193   62139 cri.go:89] found id: ""
	I0416 01:03:45.279220   62139 logs.go:276] 0 containers: []
	W0416 01:03:45.279227   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:45.279232   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:45.279280   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:45.314876   62139 cri.go:89] found id: ""
	I0416 01:03:45.314908   62139 logs.go:276] 0 containers: []
	W0416 01:03:45.314916   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:45.314922   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:45.314970   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:45.351699   62139 cri.go:89] found id: ""
	I0416 01:03:45.351723   62139 logs.go:276] 0 containers: []
	W0416 01:03:45.351730   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:45.351738   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:45.351750   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:45.392681   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:45.392708   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:45.446564   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:45.446605   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:45.460541   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:45.460564   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:45.535287   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:45.535319   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:45.535334   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:44.720088   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:46.721511   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:43.329413   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:45.330644   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:45.123357   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:47.621806   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:48.117476   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:48.133341   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:48.133402   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:48.171230   62139 cri.go:89] found id: ""
	I0416 01:03:48.171263   62139 logs.go:276] 0 containers: []
	W0416 01:03:48.171273   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:48.171280   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:48.171337   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:48.206188   62139 cri.go:89] found id: ""
	I0416 01:03:48.206218   62139 logs.go:276] 0 containers: []
	W0416 01:03:48.206229   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:48.206236   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:48.206294   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:48.242349   62139 cri.go:89] found id: ""
	I0416 01:03:48.242377   62139 logs.go:276] 0 containers: []
	W0416 01:03:48.242384   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:48.242389   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:48.242437   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:48.278324   62139 cri.go:89] found id: ""
	I0416 01:03:48.278347   62139 logs.go:276] 0 containers: []
	W0416 01:03:48.278355   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:48.278360   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:48.278406   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:48.315727   62139 cri.go:89] found id: ""
	I0416 01:03:48.315753   62139 logs.go:276] 0 containers: []
	W0416 01:03:48.315763   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:48.315770   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:48.315828   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:48.354146   62139 cri.go:89] found id: ""
	I0416 01:03:48.354169   62139 logs.go:276] 0 containers: []
	W0416 01:03:48.354176   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:48.354182   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:48.354242   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:48.393951   62139 cri.go:89] found id: ""
	I0416 01:03:48.393989   62139 logs.go:276] 0 containers: []
	W0416 01:03:48.394000   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:48.394007   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:48.394081   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:48.431849   62139 cri.go:89] found id: ""
	I0416 01:03:48.431887   62139 logs.go:276] 0 containers: []
	W0416 01:03:48.431895   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:48.431903   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:48.431917   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:48.446210   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:48.446242   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:48.517459   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:48.517485   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:48.517500   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:48.596320   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:48.596356   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:48.639700   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:48.639733   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:51.197396   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:51.211803   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:51.211889   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:51.250768   62139 cri.go:89] found id: ""
	I0416 01:03:51.250793   62139 logs.go:276] 0 containers: []
	W0416 01:03:51.250802   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:51.250810   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:51.250872   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:51.291389   62139 cri.go:89] found id: ""
	I0416 01:03:51.291415   62139 logs.go:276] 0 containers: []
	W0416 01:03:51.291421   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:51.291429   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:51.291478   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:51.332466   62139 cri.go:89] found id: ""
	I0416 01:03:51.332490   62139 logs.go:276] 0 containers: []
	W0416 01:03:51.332499   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:51.332504   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:51.332549   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:51.367731   62139 cri.go:89] found id: ""
	I0416 01:03:51.367759   62139 logs.go:276] 0 containers: []
	W0416 01:03:51.367767   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:51.367773   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:51.367829   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:51.400567   62139 cri.go:89] found id: ""
	I0416 01:03:51.400599   62139 logs.go:276] 0 containers: []
	W0416 01:03:51.400609   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:51.400616   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:51.400679   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:51.433561   62139 cri.go:89] found id: ""
	I0416 01:03:51.433590   62139 logs.go:276] 0 containers: []
	W0416 01:03:51.433598   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:51.433608   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:51.433666   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:51.469136   62139 cri.go:89] found id: ""
	I0416 01:03:51.469179   62139 logs.go:276] 0 containers: []
	W0416 01:03:51.469189   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:51.469196   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:51.469255   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:51.504410   62139 cri.go:89] found id: ""
	I0416 01:03:51.504442   62139 logs.go:276] 0 containers: []
	W0416 01:03:51.504452   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:51.504462   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:51.504480   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:51.557420   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:51.557449   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:51.571481   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:51.571506   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:51.648722   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:51.648744   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:51.648755   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:51.728945   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:51.728978   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:49.221614   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:51.721798   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:47.829985   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:50.329419   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:52.329909   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:49.622776   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:52.120080   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:54.272503   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:54.286573   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:54.286646   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:54.321084   62139 cri.go:89] found id: ""
	I0416 01:03:54.321115   62139 logs.go:276] 0 containers: []
	W0416 01:03:54.321125   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:54.321133   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:54.321208   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:54.366333   62139 cri.go:89] found id: ""
	I0416 01:03:54.366364   62139 logs.go:276] 0 containers: []
	W0416 01:03:54.366374   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:54.366380   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:54.366437   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:54.406267   62139 cri.go:89] found id: ""
	I0416 01:03:54.406317   62139 logs.go:276] 0 containers: []
	W0416 01:03:54.406328   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:54.406336   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:54.406405   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:54.446853   62139 cri.go:89] found id: ""
	I0416 01:03:54.446883   62139 logs.go:276] 0 containers: []
	W0416 01:03:54.446894   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:54.446901   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:54.446956   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:54.487658   62139 cri.go:89] found id: ""
	I0416 01:03:54.487683   62139 logs.go:276] 0 containers: []
	W0416 01:03:54.487690   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:54.487696   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:54.487753   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:54.530189   62139 cri.go:89] found id: ""
	I0416 01:03:54.530216   62139 logs.go:276] 0 containers: []
	W0416 01:03:54.530226   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:54.530232   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:54.530289   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:54.571317   62139 cri.go:89] found id: ""
	I0416 01:03:54.571341   62139 logs.go:276] 0 containers: []
	W0416 01:03:54.571349   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:54.571354   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:54.571416   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:54.612432   62139 cri.go:89] found id: ""
	I0416 01:03:54.612458   62139 logs.go:276] 0 containers: []
	W0416 01:03:54.612467   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:54.612478   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:54.612493   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:54.666599   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:54.666629   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:54.680880   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:54.680915   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:54.757365   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:54.757386   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:54.757398   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:54.834436   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:54.834468   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:54.219690   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:56.220753   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:54.332950   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:56.830167   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:54.621002   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:56.622452   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:57.405516   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:57.420694   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:57.420773   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:57.460338   62139 cri.go:89] found id: ""
	I0416 01:03:57.460367   62139 logs.go:276] 0 containers: []
	W0416 01:03:57.460374   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:57.460381   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:57.460442   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:57.498121   62139 cri.go:89] found id: ""
	I0416 01:03:57.498150   62139 logs.go:276] 0 containers: []
	W0416 01:03:57.498160   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:57.498167   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:57.498228   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:57.536959   62139 cri.go:89] found id: ""
	I0416 01:03:57.536989   62139 logs.go:276] 0 containers: []
	W0416 01:03:57.537005   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:57.537014   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:57.537077   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:57.575633   62139 cri.go:89] found id: ""
	I0416 01:03:57.575662   62139 logs.go:276] 0 containers: []
	W0416 01:03:57.575673   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:57.575680   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:57.575743   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:57.614459   62139 cri.go:89] found id: ""
	I0416 01:03:57.614491   62139 logs.go:276] 0 containers: []
	W0416 01:03:57.614501   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:57.614509   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:57.614568   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:57.657078   62139 cri.go:89] found id: ""
	I0416 01:03:57.657109   62139 logs.go:276] 0 containers: []
	W0416 01:03:57.657120   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:57.657127   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:57.657204   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:57.693882   62139 cri.go:89] found id: ""
	I0416 01:03:57.693904   62139 logs.go:276] 0 containers: []
	W0416 01:03:57.693911   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:57.693922   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:57.693969   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:57.731283   62139 cri.go:89] found id: ""
	I0416 01:03:57.731312   62139 logs.go:276] 0 containers: []
	W0416 01:03:57.731320   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:57.731327   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:57.731338   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:57.782618   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:57.782656   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:57.796763   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:57.796794   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:57.869629   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:57.869652   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:57.869665   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:57.948859   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:57.948892   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:04:00.487682   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:04:00.501095   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:04:00.501182   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:04:00.537902   62139 cri.go:89] found id: ""
	I0416 01:04:00.537931   62139 logs.go:276] 0 containers: []
	W0416 01:04:00.537939   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:04:00.537945   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:04:00.537994   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:04:00.574164   62139 cri.go:89] found id: ""
	I0416 01:04:00.574203   62139 logs.go:276] 0 containers: []
	W0416 01:04:00.574214   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:04:00.574222   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:04:00.574287   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:04:00.629592   62139 cri.go:89] found id: ""
	I0416 01:04:00.629615   62139 logs.go:276] 0 containers: []
	W0416 01:04:00.629622   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:04:00.629627   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:04:00.629679   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:04:00.672102   62139 cri.go:89] found id: ""
	I0416 01:04:00.672127   62139 logs.go:276] 0 containers: []
	W0416 01:04:00.672134   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:04:00.672141   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:04:00.672201   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:04:00.715040   62139 cri.go:89] found id: ""
	I0416 01:04:00.715064   62139 logs.go:276] 0 containers: []
	W0416 01:04:00.715072   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:04:00.715078   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:04:00.715139   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:04:00.751113   62139 cri.go:89] found id: ""
	I0416 01:04:00.751137   62139 logs.go:276] 0 containers: []
	W0416 01:04:00.751146   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:04:00.751152   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:04:00.751204   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:04:00.787613   62139 cri.go:89] found id: ""
	I0416 01:04:00.787644   62139 logs.go:276] 0 containers: []
	W0416 01:04:00.787653   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:04:00.787660   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:04:00.787721   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:04:00.824244   62139 cri.go:89] found id: ""
	I0416 01:04:00.824271   62139 logs.go:276] 0 containers: []
	W0416 01:04:00.824280   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:04:00.824291   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:04:00.824304   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:04:00.899977   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:04:00.900014   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:04:00.900029   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:04:00.982317   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:04:00.982350   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:04:01.026354   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:04:01.026393   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:04:01.080393   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:04:01.080441   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:58.720894   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:00.720961   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:59.329460   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:01.330171   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:59.119259   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:01.619026   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:03.595966   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:04:03.609190   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:04:03.609253   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:04:03.647151   62139 cri.go:89] found id: ""
	I0416 01:04:03.647183   62139 logs.go:276] 0 containers: []
	W0416 01:04:03.647197   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:04:03.647203   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:04:03.647250   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:04:03.685211   62139 cri.go:89] found id: ""
	I0416 01:04:03.685239   62139 logs.go:276] 0 containers: []
	W0416 01:04:03.685248   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:04:03.685254   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:04:03.685303   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:04:03.720928   62139 cri.go:89] found id: ""
	I0416 01:04:03.720949   62139 logs.go:276] 0 containers: []
	W0416 01:04:03.720956   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:04:03.720961   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:04:03.721035   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:04:03.759179   62139 cri.go:89] found id: ""
	I0416 01:04:03.759210   62139 logs.go:276] 0 containers: []
	W0416 01:04:03.759220   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:04:03.759228   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:04:03.759290   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:04:03.795670   62139 cri.go:89] found id: ""
	I0416 01:04:03.795700   62139 logs.go:276] 0 containers: []
	W0416 01:04:03.795710   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:04:03.795717   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:04:03.795785   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:04:03.832944   62139 cri.go:89] found id: ""
	I0416 01:04:03.832971   62139 logs.go:276] 0 containers: []
	W0416 01:04:03.832980   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:04:03.832988   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:04:03.833053   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:04:03.869211   62139 cri.go:89] found id: ""
	I0416 01:04:03.869238   62139 logs.go:276] 0 containers: []
	W0416 01:04:03.869248   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:04:03.869256   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:04:03.869317   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:04:03.905859   62139 cri.go:89] found id: ""
	I0416 01:04:03.905888   62139 logs.go:276] 0 containers: []
	W0416 01:04:03.905896   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:04:03.905904   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:04:03.905915   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:04:03.957057   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:04:03.957088   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:04:03.972309   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:04:03.972344   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:04:04.049927   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:04:04.049950   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:04:04.049965   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:04:04.136395   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:04:04.136435   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:04:06.676667   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:04:06.690062   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:04:06.690125   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:04:06.733734   62139 cri.go:89] found id: ""
	I0416 01:04:06.733758   62139 logs.go:276] 0 containers: []
	W0416 01:04:06.733773   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:04:06.733782   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:04:06.733835   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:04:06.773112   62139 cri.go:89] found id: ""
	I0416 01:04:06.773140   62139 logs.go:276] 0 containers: []
	W0416 01:04:06.773147   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:04:06.773152   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:04:06.773231   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:04:06.812786   62139 cri.go:89] found id: ""
	I0416 01:04:06.812809   62139 logs.go:276] 0 containers: []
	W0416 01:04:06.812817   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:04:06.812822   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:04:06.812870   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:04:06.853995   62139 cri.go:89] found id: ""
	I0416 01:04:06.854022   62139 logs.go:276] 0 containers: []
	W0416 01:04:06.854029   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:04:06.854034   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:04:06.854088   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:04:06.893809   62139 cri.go:89] found id: ""
	I0416 01:04:06.893841   62139 logs.go:276] 0 containers: []
	W0416 01:04:06.893848   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:04:06.893853   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:04:06.893909   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:04:06.929389   62139 cri.go:89] found id: ""
	I0416 01:04:06.929419   62139 logs.go:276] 0 containers: []
	W0416 01:04:06.929430   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:04:06.929437   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:04:06.929518   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:04:06.968278   62139 cri.go:89] found id: ""
	I0416 01:04:06.968303   62139 logs.go:276] 0 containers: []
	W0416 01:04:06.968311   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:04:06.968316   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:04:06.968364   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:04:07.018932   62139 cri.go:89] found id: ""
	I0416 01:04:07.018965   62139 logs.go:276] 0 containers: []
	W0416 01:04:07.018976   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:04:07.018989   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:04:07.019003   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:04:07.083611   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:04:07.083645   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:04:03.220314   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:05.720941   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:03.830050   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:06.329416   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:03.619482   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:05.620393   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:07.110126   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:04:07.110152   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:04:07.186262   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:04:07.186290   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:04:07.186305   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:04:07.263139   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:04:07.263170   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:04:09.807489   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:04:09.822045   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:04:09.822110   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:04:09.867444   62139 cri.go:89] found id: ""
	I0416 01:04:09.867469   62139 logs.go:276] 0 containers: []
	W0416 01:04:09.867480   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:04:09.867487   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:04:09.867538   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:04:09.904280   62139 cri.go:89] found id: ""
	I0416 01:04:09.904312   62139 logs.go:276] 0 containers: []
	W0416 01:04:09.904323   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:04:09.904330   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:04:09.904389   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:04:09.941066   62139 cri.go:89] found id: ""
	I0416 01:04:09.941091   62139 logs.go:276] 0 containers: []
	W0416 01:04:09.941099   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:04:09.941107   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:04:09.941189   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:04:09.975739   62139 cri.go:89] found id: ""
	I0416 01:04:09.975767   62139 logs.go:276] 0 containers: []
	W0416 01:04:09.975777   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:04:09.975785   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:04:09.975844   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:04:10.011414   62139 cri.go:89] found id: ""
	I0416 01:04:10.011444   62139 logs.go:276] 0 containers: []
	W0416 01:04:10.011454   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:04:10.011461   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:04:10.011528   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:04:10.045670   62139 cri.go:89] found id: ""
	I0416 01:04:10.045695   62139 logs.go:276] 0 containers: []
	W0416 01:04:10.045704   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:04:10.045711   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:04:10.045777   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:04:10.082320   62139 cri.go:89] found id: ""
	I0416 01:04:10.082352   62139 logs.go:276] 0 containers: []
	W0416 01:04:10.082361   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:04:10.082368   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:04:10.082428   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:04:10.120453   62139 cri.go:89] found id: ""
	I0416 01:04:10.120482   62139 logs.go:276] 0 containers: []
	W0416 01:04:10.120492   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:04:10.120501   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:04:10.120515   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:04:10.200213   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:04:10.200251   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:04:10.251709   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:04:10.251742   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:04:10.307348   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:04:10.307382   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:04:10.321293   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:04:10.321319   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:04:10.401361   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:04:08.220488   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:10.221408   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:08.331985   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:10.829244   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:08.119800   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:10.121093   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:12.126420   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:12.901763   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:04:12.916308   62139 kubeadm.go:591] duration metric: took 4m4.703830076s to restartPrimaryControlPlane
	W0416 01:04:12.916384   62139 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0416 01:04:12.916416   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0416 01:04:12.720462   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:14.721516   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:17.220364   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:12.830409   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:15.330184   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:14.620714   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:16.622203   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:17.897436   62139 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.980993606s)
	I0416 01:04:17.897592   62139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 01:04:17.914655   62139 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 01:04:17.927482   62139 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 01:04:17.940210   62139 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 01:04:17.940233   62139 kubeadm.go:156] found existing configuration files:
	
	I0416 01:04:17.940274   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 01:04:17.951037   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 01:04:17.951106   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 01:04:17.962341   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 01:04:17.972436   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 01:04:17.972500   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 01:04:17.983198   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 01:04:17.992856   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 01:04:17.992912   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 01:04:18.003122   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 01:04:18.014064   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 01:04:18.014117   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 01:04:18.024854   62139 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 01:04:18.101381   62139 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0416 01:04:18.101436   62139 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 01:04:18.246529   62139 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 01:04:18.246687   62139 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 01:04:18.246802   62139 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 01:04:18.456847   62139 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 01:04:18.458980   62139 out.go:204]   - Generating certificates and keys ...
	I0416 01:04:18.459096   62139 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 01:04:18.459190   62139 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 01:04:18.459294   62139 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0416 01:04:18.459381   62139 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0416 01:04:18.459473   62139 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0416 01:04:18.459548   62139 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0416 01:04:18.459631   62139 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0416 01:04:18.459721   62139 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0416 01:04:18.459822   62139 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0416 01:04:18.460281   62139 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0416 01:04:18.460387   62139 kubeadm.go:309] [certs] Using the existing "sa" key
	I0416 01:04:18.460475   62139 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 01:04:18.564910   62139 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 01:04:18.806406   62139 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 01:04:18.890124   62139 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 01:04:19.046415   62139 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 01:04:19.063159   62139 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 01:04:19.063301   62139 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 01:04:19.063415   62139 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 01:04:19.229066   62139 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 01:04:19.231110   62139 out.go:204]   - Booting up control plane ...
	I0416 01:04:19.231246   62139 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 01:04:19.248833   62139 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 01:04:19.250340   62139 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 01:04:19.251664   62139 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 01:04:19.254678   62139 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 01:04:19.221976   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:21.720239   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:17.830011   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:18.323271   61500 pod_ready.go:81] duration metric: took 4m0.000449424s for pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace to be "Ready" ...
	E0416 01:04:18.323300   61500 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace to be "Ready" (will not retry!)
	I0416 01:04:18.323318   61500 pod_ready.go:38] duration metric: took 4m9.009725319s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:04:18.323357   61500 kubeadm.go:591] duration metric: took 4m19.656264138s to restartPrimaryControlPlane
	W0416 01:04:18.323420   61500 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0416 01:04:18.323449   61500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
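The timeout above means the metrics-server pod never reached Ready within its 4m0s budget, so minikube abandons restarting the existing control plane and falls back to "kubeadm reset" followed by a fresh "kubeadm init". When investigating such a timeout by hand, a typical first step is to look at the pod and its events, for example (assuming the addon's usual k8s-app=metrics-server label):

    kubectl -n kube-system get pods -l k8s-app=metrics-server -o wide
    kubectl -n kube-system describe pods -l k8s-app=metrics-server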
	I0416 01:04:19.122802   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:21.621389   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:24.227649   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:26.720896   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:24.119577   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:26.620166   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:29.219937   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:31.220697   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:28.622399   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:31.119279   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:33.221240   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:35.221536   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:33.124909   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:35.620718   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:37.720528   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:40.220531   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:38.120415   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:40.121126   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:42.620161   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:42.719946   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:44.720203   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:47.219782   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:44.620806   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:47.119479   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:47.613243   62747 pod_ready.go:81] duration metric: took 4m0.000098534s for pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace to be "Ready" ...
	E0416 01:04:47.613279   62747 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace to be "Ready" (will not retry!)
	I0416 01:04:47.613297   62747 pod_ready.go:38] duration metric: took 4m12.544704519s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:04:47.613327   62747 kubeadm.go:591] duration metric: took 4m20.76891948s to restartPrimaryControlPlane
	W0416 01:04:47.613387   62747 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0416 01:04:47.613410   62747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0416 01:04:50.224993   61500 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.901526458s)
	I0416 01:04:50.225057   61500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 01:04:50.241083   61500 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 01:04:50.252468   61500 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 01:04:50.263721   61500 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 01:04:50.263744   61500 kubeadm.go:156] found existing configuration files:
	
	I0416 01:04:50.263786   61500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 01:04:50.274550   61500 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 01:04:50.274620   61500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 01:04:50.285019   61500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 01:04:50.295079   61500 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 01:04:50.295151   61500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 01:04:50.306424   61500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 01:04:50.317221   61500 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 01:04:50.317286   61500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 01:04:50.327783   61500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 01:04:50.338144   61500 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 01:04:50.338213   61500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 01:04:50.349262   61500 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 01:04:50.410467   61500 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0-rc.2
	I0416 01:04:50.410597   61500 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 01:04:50.565288   61500 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 01:04:50.565442   61500 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 01:04:50.565580   61500 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 01:04:50.783173   61500 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 01:04:50.785219   61500 out.go:204]   - Generating certificates and keys ...
	I0416 01:04:50.785339   61500 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 01:04:50.785427   61500 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 01:04:50.785526   61500 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0416 01:04:50.785620   61500 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0416 01:04:50.785745   61500 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0416 01:04:50.785847   61500 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0416 01:04:50.785951   61500 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0416 01:04:50.786037   61500 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0416 01:04:50.786156   61500 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0416 01:04:50.786279   61500 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0416 01:04:50.786341   61500 kubeadm.go:309] [certs] Using the existing "sa" key
	I0416 01:04:50.786425   61500 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 01:04:50.868738   61500 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 01:04:51.024628   61500 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0416 01:04:51.304801   61500 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 01:04:51.485803   61500 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 01:04:51.614330   61500 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 01:04:51.615043   61500 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 01:04:51.617465   61500 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 01:04:49.720594   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:51.721464   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:51.619398   61500 out.go:204]   - Booting up control plane ...
	I0416 01:04:51.619519   61500 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 01:04:51.619637   61500 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 01:04:51.619717   61500 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 01:04:51.640756   61500 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 01:04:51.643264   61500 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 01:04:51.643617   61500 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 01:04:51.796506   61500 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0416 01:04:51.796640   61500 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0416 01:04:54.220965   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:56.222571   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:52.798698   61500 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002359416s
	I0416 01:04:52.798798   61500 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0416 01:04:57.802689   61500 kubeadm.go:309] [api-check] The API server is healthy after 5.003967397s
	I0416 01:04:57.816580   61500 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0416 01:04:57.840465   61500 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0416 01:04:57.879611   61500 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0416 01:04:57.879906   61500 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-572602 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0416 01:04:57.895211   61500 kubeadm.go:309] [bootstrap-token] Using token: w1qt2t.vu77oqcsegb1grvk
	I0416 01:04:57.896829   61500 out.go:204]   - Configuring RBAC rules ...
	I0416 01:04:57.896958   61500 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0416 01:04:57.905289   61500 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0416 01:04:57.916967   61500 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0416 01:04:57.922660   61500 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0416 01:04:57.926143   61500 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0416 01:04:57.935222   61500 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0416 01:04:58.215180   61500 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0416 01:04:58.656120   61500 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0416 01:04:59.209811   61500 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0416 01:04:59.211274   61500 kubeadm.go:309] 
	I0416 01:04:59.211354   61500 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0416 01:04:59.211390   61500 kubeadm.go:309] 
	I0416 01:04:59.211489   61500 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0416 01:04:59.211512   61500 kubeadm.go:309] 
	I0416 01:04:59.211556   61500 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0416 01:04:59.211626   61500 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0416 01:04:59.211695   61500 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0416 01:04:59.211707   61500 kubeadm.go:309] 
	I0416 01:04:59.211779   61500 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0416 01:04:59.211789   61500 kubeadm.go:309] 
	I0416 01:04:59.211853   61500 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0416 01:04:59.211921   61500 kubeadm.go:309] 
	I0416 01:04:59.212030   61500 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0416 01:04:59.212165   61500 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0416 01:04:59.212269   61500 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0416 01:04:59.212280   61500 kubeadm.go:309] 
	I0416 01:04:59.212407   61500 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0416 01:04:59.212516   61500 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0416 01:04:59.212525   61500 kubeadm.go:309] 
	I0416 01:04:59.212656   61500 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token w1qt2t.vu77oqcsegb1grvk \
	I0416 01:04:59.212835   61500 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b25013486716312e69a135916990864bd549ec06035b8322dd39250241b50fde \
	I0416 01:04:59.212880   61500 kubeadm.go:309] 	--control-plane 
	I0416 01:04:59.212894   61500 kubeadm.go:309] 
	I0416 01:04:59.212996   61500 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0416 01:04:59.213007   61500 kubeadm.go:309] 
	I0416 01:04:59.213111   61500 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token w1qt2t.vu77oqcsegb1grvk \
	I0416 01:04:59.213278   61500 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b25013486716312e69a135916990864bd549ec06035b8322dd39250241b50fde 
	I0416 01:04:59.213435   61500 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
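The join command printed above embeds a bootstrap token and the SHA-256 hash of the cluster CA public key. If that hash ever needs to be verified or recomputed, the standard recipe from the kubeadm documentation is the following; note the CA path is an assumption here, since this cluster keeps its certificates under /var/lib/minikube/certs rather than the default /etc/kubernetes/pki:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'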
	I0416 01:04:59.213460   61500 cni.go:84] Creating CNI manager for ""
	I0416 01:04:59.213477   61500 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 01:04:59.215397   61500 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0416 01:04:59.255478   62139 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0416 01:04:59.256524   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:04:59.256807   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
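The kubelet-check failures above come from kubeadm polling the kubelet's local healthz endpoint on port 10248. To reproduce the probe and see why the kubelet is not up, something along these lines is typical on the node:

    curl -sSL http://localhost:10248/healthz
    sudo systemctl status kubelet
    sudo journalctl -u kubelet --no-pager | tail -n 50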
	I0416 01:04:58.720339   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:05:01.220968   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:59.216764   61500 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0416 01:04:59.230134   61500 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
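Bridge CNI setup here is essentially a file drop: minikube copies a conflist into /etc/cni/net.d so the CRI-O runtime picks it up. To confirm what was written, for example:

    ls /etc/cni/net.d/
    sudo cat /etc/cni/net.d/1-k8s.conflist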
	I0416 01:04:59.250739   61500 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0416 01:04:59.250773   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:04:59.250775   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-572602 minikube.k8s.io/updated_at=2024_04_16T01_04_59_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=e98a202b00bb48daab8bbee2f0c7bcf1556f8388 minikube.k8s.io/name=no-preload-572602 minikube.k8s.io/primary=true
	I0416 01:04:59.462907   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:04:59.462915   61500 ops.go:34] apiserver oom_adj: -16
	I0416 01:04:59.962977   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:00.463142   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:00.963871   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:01.463866   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:01.963356   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:02.463729   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:04.257472   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:05:04.257756   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:05:03.720762   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:05:05.721421   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:05:02.963816   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:03.463370   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:03.963655   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:04.463681   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:04.963387   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:05.462926   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:05.963659   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:06.463091   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:06.963504   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:07.463783   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:07.963037   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:08.463212   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:08.963443   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:09.463179   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:09.963188   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:10.463264   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:10.963863   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:11.463051   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:11.591367   61500 kubeadm.go:1107] duration metric: took 12.340665724s to wait for elevateKubeSystemPrivileges
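The repeated "kubectl get sa default" runs above are a poll loop: minikube waits for the default service account to exist before the kube-system:default binding created at 01:04:59 (the minikube-rbac clusterrolebinding) is meaningful. A minimal sketch of that wait using the bundled kubectl; the ~500ms interval matches the timestamps above, though the real implementation retries in Go rather than shell:

    until sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done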
	W0416 01:05:11.591410   61500 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0416 01:05:11.591425   61500 kubeadm.go:393] duration metric: took 5m12.980123227s to StartCluster
	I0416 01:05:11.591451   61500 settings.go:142] acquiring lock: {Name:mk6e42a297b4f7bfb79727f203ae36d752cbb6a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:05:11.591559   61500 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0416 01:05:11.593498   61500 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/kubeconfig: {Name:mkbb3b028de7d57df8335e83f6dfa1b0eacb2fb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:05:11.593838   61500 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.121 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0416 01:05:11.595572   61500 out.go:177] * Verifying Kubernetes components...
	I0416 01:05:11.593961   61500 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0416 01:05:11.594060   61500 config.go:182] Loaded profile config "no-preload-572602": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0416 01:05:11.597038   61500 addons.go:69] Setting default-storageclass=true in profile "no-preload-572602"
	I0416 01:05:11.597047   61500 addons.go:69] Setting metrics-server=true in profile "no-preload-572602"
	I0416 01:05:11.597077   61500 addons.go:234] Setting addon metrics-server=true in "no-preload-572602"
	I0416 01:05:11.597081   61500 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-572602"
	W0416 01:05:11.597084   61500 addons.go:243] addon metrics-server should already be in state true
	I0416 01:05:11.597168   61500 host.go:66] Checking if "no-preload-572602" exists ...
	I0416 01:05:11.597042   61500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 01:05:11.597038   61500 addons.go:69] Setting storage-provisioner=true in profile "no-preload-572602"
	I0416 01:05:11.597274   61500 addons.go:234] Setting addon storage-provisioner=true in "no-preload-572602"
	W0416 01:05:11.597281   61500 addons.go:243] addon storage-provisioner should already be in state true
	I0416 01:05:11.597300   61500 host.go:66] Checking if "no-preload-572602" exists ...
	I0416 01:05:11.597516   61500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:11.597563   61500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:11.597590   61500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:11.597621   61500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:11.597621   61500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:11.597684   61500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:11.617344   61500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46065
	I0416 01:05:11.617833   61500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46345
	I0416 01:05:11.617853   61500 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:11.618040   61500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32847
	I0416 01:05:11.618170   61500 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:11.618385   61500 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:11.618539   61500 main.go:141] libmachine: Using API Version  1
	I0416 01:05:11.618564   61500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:11.618682   61500 main.go:141] libmachine: Using API Version  1
	I0416 01:05:11.618708   61500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:11.618786   61500 main.go:141] libmachine: Using API Version  1
	I0416 01:05:11.618806   61500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:11.619020   61500 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:11.619035   61500 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:11.619145   61500 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:11.619371   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetState
	I0416 01:05:11.619629   61500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:11.619663   61500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:11.619683   61500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:11.619715   61500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:11.622758   61500 addons.go:234] Setting addon default-storageclass=true in "no-preload-572602"
	W0416 01:05:11.622784   61500 addons.go:243] addon default-storageclass should already be in state true
	I0416 01:05:11.622814   61500 host.go:66] Checking if "no-preload-572602" exists ...
	I0416 01:05:11.623148   61500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:11.623182   61500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:11.640851   61500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44015
	I0416 01:05:11.641427   61500 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:11.642008   61500 main.go:141] libmachine: Using API Version  1
	I0416 01:05:11.642028   61500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:11.642429   61500 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:11.642635   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetState
	I0416 01:05:11.643204   61500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41753
	I0416 01:05:11.643239   61500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38953
	I0416 01:05:11.643578   61500 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:11.643673   61500 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:11.644133   61500 main.go:141] libmachine: Using API Version  1
	I0416 01:05:11.644150   61500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:11.644398   61500 main.go:141] libmachine: Using API Version  1
	I0416 01:05:11.644409   61500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:11.644508   61500 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:11.644786   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetState
	I0416 01:05:11.644823   61500 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:11.645630   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 01:05:11.645797   61500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:11.645824   61500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:11.648522   61500 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0416 01:05:11.646649   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 01:05:11.650173   61500 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0416 01:05:11.650185   61500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0416 01:05:11.650206   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 01:05:11.652524   61500 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 01:05:07.721798   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:05:08.214615   61267 pod_ready.go:81] duration metric: took 4m0.001005317s for pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace to be "Ready" ...
	E0416 01:05:08.214650   61267 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace to be "Ready" (will not retry!)
	I0416 01:05:08.214688   61267 pod_ready.go:38] duration metric: took 4m14.521894608s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:05:08.214750   61267 kubeadm.go:591] duration metric: took 4m22.563492336s to restartPrimaryControlPlane
	W0416 01:05:08.214821   61267 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0416 01:05:08.214857   61267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0416 01:05:11.654173   61500 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 01:05:11.654189   61500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0416 01:05:11.654207   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 01:05:11.654021   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 01:05:11.654488   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 01:05:11.654524   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 01:05:11.654823   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 01:05:11.655016   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 01:05:11.655159   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 01:05:11.655331   61500 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/no-preload-572602/id_rsa Username:docker}
	I0416 01:05:11.657706   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 01:05:11.658193   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 01:05:11.658214   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 01:05:11.658388   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 01:05:11.658585   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 01:05:11.658761   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 01:05:11.658937   61500 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/no-preload-572602/id_rsa Username:docker}
	I0416 01:05:11.669485   61500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34717
	I0416 01:05:11.669878   61500 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:11.670340   61500 main.go:141] libmachine: Using API Version  1
	I0416 01:05:11.670352   61500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:11.670714   61500 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:11.670887   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetState
	I0416 01:05:11.672571   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 01:05:11.672888   61500 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0416 01:05:11.672900   61500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0416 01:05:11.672912   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 01:05:11.675816   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 01:05:11.676163   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 01:05:11.676182   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 01:05:11.676335   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 01:05:11.676513   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 01:05:11.676657   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 01:05:11.676799   61500 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/no-preload-572602/id_rsa Username:docker}
	I0416 01:05:11.822229   61500 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 01:05:11.850495   61500 node_ready.go:35] waiting up to 6m0s for node "no-preload-572602" to be "Ready" ...
	I0416 01:05:11.868828   61500 node_ready.go:49] node "no-preload-572602" has status "Ready":"True"
	I0416 01:05:11.868852   61500 node_ready.go:38] duration metric: took 18.327813ms for node "no-preload-572602" to be "Ready" ...
	I0416 01:05:11.868860   61500 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:05:11.877018   61500 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:11.884190   61500 pod_ready.go:92] pod "etcd-no-preload-572602" in "kube-system" namespace has status "Ready":"True"
	I0416 01:05:11.884221   61500 pod_ready.go:81] duration metric: took 7.173699ms for pod "etcd-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:11.884234   61500 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:11.901639   61500 pod_ready.go:92] pod "kube-apiserver-no-preload-572602" in "kube-system" namespace has status "Ready":"True"
	I0416 01:05:11.901672   61500 pod_ready.go:81] duration metric: took 17.430111ms for pod "kube-apiserver-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:11.901684   61500 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:11.911839   61500 pod_ready.go:92] pod "kube-controller-manager-no-preload-572602" in "kube-system" namespace has status "Ready":"True"
	I0416 01:05:11.911871   61500 pod_ready.go:81] duration metric: took 10.178219ms for pod "kube-controller-manager-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:11.911885   61500 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-572602" in "kube-system" namespace to be "Ready" ...
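The pod_ready waits above check the Ready condition on each static control-plane pod. The same check can be made directly with kubectl, e.g. against the scheduler's component label:

    kubectl -n kube-system get pods
    kubectl -n kube-system wait --for=condition=Ready pod \
      -l component=kube-scheduler --timeout=6m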
	I0416 01:05:11.936265   61500 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0416 01:05:11.936293   61500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0416 01:05:11.939406   61500 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0416 01:05:11.942233   61500 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 01:05:11.963094   61500 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0416 01:05:11.963123   61500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0416 01:05:12.027316   61500 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0416 01:05:12.027341   61500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0416 01:05:12.150413   61500 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0416 01:05:12.387284   61500 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:12.387310   61500 main.go:141] libmachine: (no-preload-572602) Calling .Close
	I0416 01:05:12.387640   61500 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:12.387665   61500 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:12.387674   61500 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:12.387682   61500 main.go:141] libmachine: (no-preload-572602) Calling .Close
	I0416 01:05:12.387973   61500 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:12.387991   61500 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:12.395148   61500 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:12.395179   61500 main.go:141] libmachine: (no-preload-572602) Calling .Close
	I0416 01:05:12.395459   61500 main.go:141] libmachine: (no-preload-572602) DBG | Closing plugin on server side
	I0416 01:05:12.395488   61500 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:12.395508   61500 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:12.930331   61500 pod_ready.go:92] pod "kube-scheduler-no-preload-572602" in "kube-system" namespace has status "Ready":"True"
	I0416 01:05:12.930362   61500 pod_ready.go:81] duration metric: took 1.01846846s for pod "kube-scheduler-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:12.930373   61500 pod_ready.go:38] duration metric: took 1.061502471s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:05:12.930390   61500 api_server.go:52] waiting for apiserver process to appear ...
	I0416 01:05:12.930454   61500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:05:12.990840   61500 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.048571147s)
	I0416 01:05:12.990905   61500 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:12.990919   61500 main.go:141] libmachine: (no-preload-572602) Calling .Close
	I0416 01:05:12.991246   61500 main.go:141] libmachine: (no-preload-572602) DBG | Closing plugin on server side
	I0416 01:05:12.991309   61500 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:12.991323   61500 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:12.991380   61500 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:12.991391   61500 main.go:141] libmachine: (no-preload-572602) Calling .Close
	I0416 01:05:12.991617   61500 main.go:141] libmachine: (no-preload-572602) DBG | Closing plugin on server side
	I0416 01:05:12.991669   61500 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:12.991690   61500 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:13.719959   61500 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.569495387s)
	I0416 01:05:13.720018   61500 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:13.720023   61500 api_server.go:72] duration metric: took 2.12614679s to wait for apiserver process to appear ...
	I0416 01:05:13.720046   61500 api_server.go:88] waiting for apiserver healthz status ...
	I0416 01:05:13.720066   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 01:05:13.720034   61500 main.go:141] libmachine: (no-preload-572602) Calling .Close
	I0416 01:05:13.720435   61500 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:13.720458   61500 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:13.720469   61500 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:13.720472   61500 main.go:141] libmachine: (no-preload-572602) DBG | Closing plugin on server side
	I0416 01:05:13.720477   61500 main.go:141] libmachine: (no-preload-572602) Calling .Close
	I0416 01:05:13.720670   61500 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:13.720681   61500 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:13.720691   61500 addons.go:470] Verifying addon metrics-server=true in "no-preload-572602"
	I0416 01:05:13.722348   61500 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0416 01:05:13.723686   61500 addons.go:505] duration metric: took 2.129734353s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
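As the log shows, enabling an addon is nothing more than copying its manifests to /etc/kubernetes/addons and applying them with the bundled kubectl, for example:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply \
      -f /etc/kubernetes/addons/storage-provisioner.yaml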
	I0416 01:05:13.764481   61500 api_server.go:279] https://192.168.39.121:8443/healthz returned 200:
	ok
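The healthz check above is a plain HTTPS GET against the apiserver; done by hand it would look like the following (-k skips CA verification; alternatively point --cacert at the cluster CA):

    curl -k https://192.168.39.121:8443/healthz
    # expected body: ok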
	I0416 01:05:13.771661   61500 api_server.go:141] control plane version: v1.30.0-rc.2
	I0416 01:05:13.771690   61500 api_server.go:131] duration metric: took 51.637739ms to wait for apiserver health ...
	I0416 01:05:13.771698   61500 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 01:05:13.812701   61500 system_pods.go:59] 9 kube-system pods found
	I0416 01:05:13.812744   61500 system_pods.go:61] "coredns-7db6d8ff4d-2b5ht" [b8d48a4c-6efd-409a-98be-3ec5bf639470] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 01:05:13.812753   61500 system_pods.go:61] "coredns-7db6d8ff4d-p62sn" [36768eb2-2a22-48e1-b271-f262aa64e014] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 01:05:13.812761   61500 system_pods.go:61] "etcd-no-preload-572602" [c9ed4f86-07f3-48d6-948c-8c4243920512] Running
	I0416 01:05:13.812765   61500 system_pods.go:61] "kube-apiserver-no-preload-572602" [a92513a3-4129-41a2-a603-4a69f4e72041] Running
	I0416 01:05:13.812768   61500 system_pods.go:61] "kube-controller-manager-no-preload-572602" [ce013e5b-5d3c-42de-8a00-c7041288740b] Running
	I0416 01:05:13.812774   61500 system_pods.go:61] "kube-proxy-6cjlc" [2c4d9303-8c08-4385-a6b9-63dda0d9a274] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0416 01:05:13.812777   61500 system_pods.go:61] "kube-scheduler-no-preload-572602" [a9f71ca2-f211-4e6d-9940-4e0af5d4287e] Running
	I0416 01:05:13.812783   61500 system_pods.go:61] "metrics-server-569cc877fc-5j5rc" [3d8f1a41-8e7d-4d1b-9a07-25c8fac3b782] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 01:05:13.812792   61500 system_pods.go:61] "storage-provisioner" [b9ac9c93-0e50-4598-a9c4-a12e4ff14063] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0416 01:05:13.812802   61500 system_pods.go:74] duration metric: took 41.098881ms to wait for pod list to return data ...
	I0416 01:05:13.812811   61500 default_sa.go:34] waiting for default service account to be created ...
	I0416 01:05:13.847288   61500 default_sa.go:45] found service account: "default"
	I0416 01:05:13.847323   61500 default_sa.go:55] duration metric: took 34.500938ms for default service account to be created ...
	I0416 01:05:13.847335   61500 system_pods.go:116] waiting for k8s-apps to be running ...
	I0416 01:05:13.877107   61500 system_pods.go:86] 9 kube-system pods found
	I0416 01:05:13.877150   61500 system_pods.go:89] "coredns-7db6d8ff4d-2b5ht" [b8d48a4c-6efd-409a-98be-3ec5bf639470] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 01:05:13.877175   61500 system_pods.go:89] "coredns-7db6d8ff4d-p62sn" [36768eb2-2a22-48e1-b271-f262aa64e014] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 01:05:13.877185   61500 system_pods.go:89] "etcd-no-preload-572602" [c9ed4f86-07f3-48d6-948c-8c4243920512] Running
	I0416 01:05:13.877194   61500 system_pods.go:89] "kube-apiserver-no-preload-572602" [a92513a3-4129-41a2-a603-4a69f4e72041] Running
	I0416 01:05:13.877200   61500 system_pods.go:89] "kube-controller-manager-no-preload-572602" [ce013e5b-5d3c-42de-8a00-c7041288740b] Running
	I0416 01:05:13.877209   61500 system_pods.go:89] "kube-proxy-6cjlc" [2c4d9303-8c08-4385-a6b9-63dda0d9a274] Running
	I0416 01:05:13.877215   61500 system_pods.go:89] "kube-scheduler-no-preload-572602" [a9f71ca2-f211-4e6d-9940-4e0af5d4287e] Running
	I0416 01:05:13.877224   61500 system_pods.go:89] "metrics-server-569cc877fc-5j5rc" [3d8f1a41-8e7d-4d1b-9a07-25c8fac3b782] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 01:05:13.877237   61500 system_pods.go:89] "storage-provisioner" [b9ac9c93-0e50-4598-a9c4-a12e4ff14063] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0416 01:05:13.877257   61500 retry.go:31] will retry after 239.706522ms: missing components: kube-dns
	I0416 01:05:14.128770   61500 system_pods.go:86] 9 kube-system pods found
	I0416 01:05:14.128814   61500 system_pods.go:89] "coredns-7db6d8ff4d-2b5ht" [b8d48a4c-6efd-409a-98be-3ec5bf639470] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 01:05:14.128827   61500 system_pods.go:89] "coredns-7db6d8ff4d-p62sn" [36768eb2-2a22-48e1-b271-f262aa64e014] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 01:05:14.128836   61500 system_pods.go:89] "etcd-no-preload-572602" [c9ed4f86-07f3-48d6-948c-8c4243920512] Running
	I0416 01:05:14.128850   61500 system_pods.go:89] "kube-apiserver-no-preload-572602" [a92513a3-4129-41a2-a603-4a69f4e72041] Running
	I0416 01:05:14.128857   61500 system_pods.go:89] "kube-controller-manager-no-preload-572602" [ce013e5b-5d3c-42de-8a00-c7041288740b] Running
	I0416 01:05:14.128864   61500 system_pods.go:89] "kube-proxy-6cjlc" [2c4d9303-8c08-4385-a6b9-63dda0d9a274] Running
	I0416 01:05:14.128871   61500 system_pods.go:89] "kube-scheduler-no-preload-572602" [a9f71ca2-f211-4e6d-9940-4e0af5d4287e] Running
	I0416 01:05:14.128885   61500 system_pods.go:89] "metrics-server-569cc877fc-5j5rc" [3d8f1a41-8e7d-4d1b-9a07-25c8fac3b782] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 01:05:14.128893   61500 system_pods.go:89] "storage-provisioner" [b9ac9c93-0e50-4598-a9c4-a12e4ff14063] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0416 01:05:14.128903   61500 system_pods.go:126] duration metric: took 281.561287ms to wait for k8s-apps to be running ...
	I0416 01:05:14.128912   61500 system_svc.go:44] waiting for kubelet service to be running ....
	I0416 01:05:14.128978   61500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 01:05:14.145557   61500 system_svc.go:56] duration metric: took 16.639555ms WaitForService to wait for kubelet
	I0416 01:05:14.145582   61500 kubeadm.go:576] duration metric: took 2.551711031s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 01:05:14.145605   61500 node_conditions.go:102] verifying NodePressure condition ...
	I0416 01:05:14.149984   61500 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 01:05:14.150009   61500 node_conditions.go:123] node cpu capacity is 2
	I0416 01:05:14.150021   61500 node_conditions.go:105] duration metric: took 4.410684ms to run NodePressure ...
	I0416 01:05:14.150034   61500 start.go:240] waiting for startup goroutines ...
	I0416 01:05:14.150044   61500 start.go:245] waiting for cluster config update ...
	I0416 01:05:14.150064   61500 start.go:254] writing updated cluster config ...
	I0416 01:05:14.150354   61500 ssh_runner.go:195] Run: rm -f paused
	I0416 01:05:14.198605   61500 start.go:600] kubectl: 1.29.3, cluster: 1.30.0-rc.2 (minor skew: 1)
	I0416 01:05:14.200584   61500 out.go:177] * Done! kubectl is now configured to use "no-preload-572602" cluster and "default" namespace by default
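
The run above is minikube's post-start verification for the "no-preload-572602" profile: it re-lists kube-system pods until kube-dns (CoreDNS) is running, confirms the "default" ServiceAccount exists, checks that the kubelet systemd unit is active, and reads node capacity before printing "Done!". Below is a minimal, illustrative client-go sketch of the kube-dns wait loop only; the function name, kubeconfig path, timeout, and retry interval are assumptions, and this is not minikube's actual implementation.

    // waitforkubedns.go — illustrative sketch (not minikube's code) of the
    // "missing components: kube-dns" retry seen above: list kube-system pods
    // carrying the CoreDNS label and retry until one reports Running.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func waitForKubeDNS(ctx context.Context, cs kubernetes.Interface) error {
        for {
            pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
                LabelSelector: "k8s-app=kube-dns", // CoreDNS pods carry this label
            })
            if err == nil {
                for _, p := range pods.Items {
                    if p.Status.Phase == corev1.PodRunning {
                        return nil
                    }
                }
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(250 * time.Millisecond): // comparable to the ~240ms retry above
            }
        }
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // ~/.kube/config
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
        defer cancel()
        if err := waitForKubeDNS(ctx, cs); err != nil {
            panic(err)
        }
        fmt.Println("kube-dns is running")
    }
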
	I0416 01:05:14.258629   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:05:14.258807   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:05:19.748784   62747 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.135339447s)
	I0416 01:05:19.748866   62747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 01:05:19.766280   62747 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 01:05:19.777541   62747 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 01:05:19.788086   62747 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 01:05:19.788112   62747 kubeadm.go:156] found existing configuration files:
	
	I0416 01:05:19.788154   62747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 01:05:19.798135   62747 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 01:05:19.798211   62747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 01:05:19.809231   62747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 01:05:19.819447   62747 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 01:05:19.819519   62747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 01:05:19.830223   62747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 01:05:19.840460   62747 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 01:05:19.840528   62747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 01:05:19.851506   62747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 01:05:19.861422   62747 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 01:05:19.861481   62747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 01:05:19.871239   62747 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 01:05:20.089849   62747 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 01:05:29.079351   62747 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0416 01:05:29.079435   62747 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 01:05:29.079534   62747 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 01:05:29.079679   62747 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 01:05:29.079817   62747 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 01:05:29.079934   62747 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 01:05:29.081701   62747 out.go:204]   - Generating certificates and keys ...
	I0416 01:05:29.081801   62747 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 01:05:29.081922   62747 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 01:05:29.082035   62747 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0416 01:05:29.082125   62747 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0416 01:05:29.082300   62747 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0416 01:05:29.082404   62747 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0416 01:05:29.082504   62747 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0416 01:05:29.082556   62747 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0416 01:05:29.082621   62747 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0416 01:05:29.082737   62747 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0416 01:05:29.082798   62747 kubeadm.go:309] [certs] Using the existing "sa" key
	I0416 01:05:29.082867   62747 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 01:05:29.082955   62747 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 01:05:29.083042   62747 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0416 01:05:29.083129   62747 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 01:05:29.083209   62747 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 01:05:29.083278   62747 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 01:05:29.083385   62747 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 01:05:29.083467   62747 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 01:05:29.085050   62747 out.go:204]   - Booting up control plane ...
	I0416 01:05:29.085178   62747 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 01:05:29.085289   62747 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 01:05:29.085374   62747 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 01:05:29.085499   62747 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 01:05:29.085610   62747 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 01:05:29.085671   62747 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 01:05:29.085942   62747 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 01:05:29.086066   62747 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003717 seconds
	I0416 01:05:29.086227   62747 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0416 01:05:29.086384   62747 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0416 01:05:29.086474   62747 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0416 01:05:29.086755   62747 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-617092 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0416 01:05:29.086843   62747 kubeadm.go:309] [bootstrap-token] Using token: 33ihar.pt6l329bwmm6yhnr
	I0416 01:05:29.088273   62747 out.go:204]   - Configuring RBAC rules ...
	I0416 01:05:29.088408   62747 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0416 01:05:29.088516   62747 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0416 01:05:29.088712   62747 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0416 01:05:29.088898   62747 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0416 01:05:29.089046   62747 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0416 01:05:29.089196   62747 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0416 01:05:29.089346   62747 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0416 01:05:29.089413   62747 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0416 01:05:29.089486   62747 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0416 01:05:29.089496   62747 kubeadm.go:309] 
	I0416 01:05:29.089581   62747 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0416 01:05:29.089591   62747 kubeadm.go:309] 
	I0416 01:05:29.089707   62747 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0416 01:05:29.089719   62747 kubeadm.go:309] 
	I0416 01:05:29.089768   62747 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0416 01:05:29.089855   62747 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0416 01:05:29.089932   62747 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0416 01:05:29.089942   62747 kubeadm.go:309] 
	I0416 01:05:29.090020   62747 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0416 01:05:29.090041   62747 kubeadm.go:309] 
	I0416 01:05:29.090111   62747 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0416 01:05:29.090120   62747 kubeadm.go:309] 
	I0416 01:05:29.090193   62747 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0416 01:05:29.090350   62747 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0416 01:05:29.090434   62747 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0416 01:05:29.090445   62747 kubeadm.go:309] 
	I0416 01:05:29.090560   62747 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0416 01:05:29.090661   62747 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0416 01:05:29.090667   62747 kubeadm.go:309] 
	I0416 01:05:29.090773   62747 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 33ihar.pt6l329bwmm6yhnr \
	I0416 01:05:29.090921   62747 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b25013486716312e69a135916990864bd549ec06035b8322dd39250241b50fde \
	I0416 01:05:29.090942   62747 kubeadm.go:309] 	--control-plane 
	I0416 01:05:29.090948   62747 kubeadm.go:309] 
	I0416 01:05:29.091017   62747 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0416 01:05:29.091034   62747 kubeadm.go:309] 
	I0416 01:05:29.091153   62747 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 33ihar.pt6l329bwmm6yhnr \
	I0416 01:05:29.091299   62747 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b25013486716312e69a135916990864bd549ec06035b8322dd39250241b50fde 
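
The join commands printed above identify the cluster CA through --discovery-token-ca-cert-hash, which is a SHA-256 over the DER-encoded Subject Public Key Info of the CA certificate. The sketch below recomputes such a value for illustration; the ca.crt path is an assumption based on the certificateDir reported earlier in this log, and this is not kubeadm's code.

    // cacerthash.go — illustrative only: derive a value of the form passed as
    // --discovery-token-ca-cert-hash (sha256 of the CA certificate's DER-encoded
    // Subject Public Key Info). The path assumes the certificateDir above.
    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            log.Fatal("no PEM block found in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }
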
	I0416 01:05:29.091313   62747 cni.go:84] Creating CNI manager for ""
	I0416 01:05:29.091323   62747 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 01:05:29.094154   62747 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0416 01:05:29.095747   62747 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0416 01:05:29.153706   62747 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0416 01:05:29.195477   62747 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0416 01:05:29.195540   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:29.195540   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-617092 minikube.k8s.io/updated_at=2024_04_16T01_05_29_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=e98a202b00bb48daab8bbee2f0c7bcf1556f8388 minikube.k8s.io/name=embed-certs-617092 minikube.k8s.io/primary=true
	I0416 01:05:29.551888   62747 ops.go:34] apiserver oom_adj: -16
	I0416 01:05:29.552023   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:30.053117   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:30.552298   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:31.052317   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:31.553057   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:32.052852   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:32.552921   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:34.259492   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:05:34.259704   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
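
These repeated [kubelet-check] messages from process 62139 are kubeadm's kubelet health probe: it polls the kubelet's healthz endpoint on localhost:10248 and, while the kubelet has not come up, the TCP connection is refused, so kubeadm keeps retrying before eventually failing the init. A minimal Go sketch of the same probe follows; the endpoint comes from the log, while the retry interval and deadline are assumptions and this is not kubeadm's implementation.

    // kubeletprobe.go — illustrative sketch of the [kubelet-check] probe above:
    // poll http://localhost:10248/healthz until it answers 200 OK or a deadline
    // passes. Interval and deadline here are assumptions, not kubeadm's values.
    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        deadline := time.Now().Add(4 * time.Minute)
        client := &http.Client{Timeout: 2 * time.Second}
        for time.Now().Before(deadline) {
            resp, err := client.Get("http://localhost:10248/healthz")
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("kubelet is healthy")
                    return
                }
            } else {
                fmt.Println("kubelet not ready yet:", err) // e.g. "connection refused" as in the log
            }
            time.Sleep(5 * time.Second)
        }
        fmt.Println("gave up waiting for the kubelet")
    }
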
	I0416 01:05:33.052747   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:33.552301   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:34.052922   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:34.552338   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:35.052106   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:35.552911   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:36.052814   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:36.552077   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:37.052666   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:37.552057   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:38.053198   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:38.552163   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:39.052589   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:39.552701   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:40.053069   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:40.552436   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:41.053071   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:41.158552   62747 kubeadm.go:1107] duration metric: took 11.963074905s to wait for elevateKubeSystemPrivileges
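
The repeated "kubectl get sa default" calls above are minikube waiting (about 12s here) for the controller manager to create the "default" ServiceAccount; once it exists, the clusterrolebinding created earlier (minikube-rbac, binding kube-system:default to cluster-admin) can take effect. The fragment below shows an equivalent check; it is not minikube's code, it reuses the imports and clientset from the earlier sketch, and the function name and retry interval are assumptions.

    // Illustrative fragment (slots into the earlier waitforkubedns.go sketch,
    // same imports and clientset): the equivalent of the repeated
    // "kubectl get sa default" polling above.
    func waitForDefaultSA(ctx context.Context, cs kubernetes.Interface) error {
        for {
            _, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
            if err == nil {
                return nil // the controller manager has created the ServiceAccount
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(500 * time.Millisecond): // the log polls roughly twice per second
            }
        }
    }
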
	W0416 01:05:41.158601   62747 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0416 01:05:41.158611   62747 kubeadm.go:393] duration metric: took 5m14.369080866s to StartCluster
	I0416 01:05:41.158638   62747 settings.go:142] acquiring lock: {Name:mk6e42a297b4f7bfb79727f203ae36d752cbb6a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:05:41.158736   62747 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0416 01:05:41.160903   62747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/kubeconfig: {Name:mkbb3b028de7d57df8335e83f6dfa1b0eacb2fb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:05:41.161229   62747 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.225 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0416 01:05:41.163312   62747 out.go:177] * Verifying Kubernetes components...
	I0416 01:05:40.562916   61267 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.348033752s)
	I0416 01:05:40.562991   61267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 01:05:40.580700   61267 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 01:05:40.592069   61267 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 01:05:40.606450   61267 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 01:05:40.606477   61267 kubeadm.go:156] found existing configuration files:
	
	I0416 01:05:40.606531   61267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0416 01:05:40.617547   61267 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 01:05:40.617622   61267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 01:05:40.631465   61267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0416 01:05:40.644464   61267 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 01:05:40.644553   61267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 01:05:40.655929   61267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0416 01:05:40.664995   61267 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 01:05:40.665059   61267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 01:05:40.674477   61267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0416 01:05:40.683500   61267 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 01:05:40.683570   61267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 01:05:40.693774   61267 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 01:05:40.753612   61267 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0416 01:05:40.753717   61267 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 01:05:40.911483   61267 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 01:05:40.911609   61267 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 01:05:40.911748   61267 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 01:05:41.170137   61267 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 01:05:41.161331   62747 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0416 01:05:41.161434   62747 config.go:182] Loaded profile config "embed-certs-617092": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 01:05:41.165023   62747 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-617092"
	I0416 01:05:41.165044   62747 addons.go:69] Setting metrics-server=true in profile "embed-certs-617092"
	I0416 01:05:41.165081   62747 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-617092"
	I0416 01:05:41.165084   62747 addons.go:234] Setting addon metrics-server=true in "embed-certs-617092"
	W0416 01:05:41.165090   62747 addons.go:243] addon storage-provisioner should already be in state true
	W0416 01:05:41.165091   62747 addons.go:243] addon metrics-server should already be in state true
	I0416 01:05:41.165117   62747 host.go:66] Checking if "embed-certs-617092" exists ...
	I0416 01:05:41.165052   62747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 01:05:41.165025   62747 addons.go:69] Setting default-storageclass=true in profile "embed-certs-617092"
	I0416 01:05:41.165117   62747 host.go:66] Checking if "embed-certs-617092" exists ...
	I0416 01:05:41.165174   62747 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-617092"
	I0416 01:05:41.165464   62747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:41.165480   62747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:41.165549   62747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:41.165569   62747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:41.165549   62747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:41.165651   62747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:41.183063   62747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46083
	I0416 01:05:41.183551   62747 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:41.184135   62747 main.go:141] libmachine: Using API Version  1
	I0416 01:05:41.184158   62747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:41.184578   62747 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:41.185298   62747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:41.185337   62747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:41.185763   62747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43633
	I0416 01:05:41.185823   62747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46197
	I0416 01:05:41.186233   62747 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:41.186400   62747 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:41.186701   62747 main.go:141] libmachine: Using API Version  1
	I0416 01:05:41.186726   62747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:41.186861   62747 main.go:141] libmachine: Using API Version  1
	I0416 01:05:41.186881   62747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:41.187211   62747 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:41.187233   62747 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:41.187415   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetState
	I0416 01:05:41.187763   62747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:41.187781   62747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:41.191018   62747 addons.go:234] Setting addon default-storageclass=true in "embed-certs-617092"
	W0416 01:05:41.191038   62747 addons.go:243] addon default-storageclass should already be in state true
	I0416 01:05:41.191068   62747 host.go:66] Checking if "embed-certs-617092" exists ...
	I0416 01:05:41.191346   62747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:41.191384   62747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:41.202643   62747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45285
	I0416 01:05:41.203122   62747 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:41.203607   62747 main.go:141] libmachine: Using API Version  1
	I0416 01:05:41.203627   62747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:41.203952   62747 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:41.204124   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetState
	I0416 01:05:41.204325   62747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45643
	I0416 01:05:41.204721   62747 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:41.205188   62747 main.go:141] libmachine: Using API Version  1
	I0416 01:05:41.205207   62747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:41.205860   62747 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:41.206056   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetState
	I0416 01:05:41.206084   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 01:05:41.208051   62747 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0416 01:05:41.209179   62747 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0416 01:05:41.209197   62747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0416 01:05:41.207724   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 01:05:41.209214   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:05:41.210728   62747 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 01:05:41.171860   61267 out.go:204]   - Generating certificates and keys ...
	I0416 01:05:41.171969   61267 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 01:05:41.172043   61267 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 01:05:41.172139   61267 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0416 01:05:41.172803   61267 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0416 01:05:41.173065   61267 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0416 01:05:41.173653   61267 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0416 01:05:41.174077   61267 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0416 01:05:41.174586   61267 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0416 01:05:41.175034   61267 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0416 01:05:41.175570   61267 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0416 01:05:41.175888   61267 kubeadm.go:309] [certs] Using the existing "sa" key
	I0416 01:05:41.175968   61267 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 01:05:41.439471   61267 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 01:05:41.524693   61267 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0416 01:05:42.001762   61267 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 01:05:42.139805   61267 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 01:05:42.198091   61267 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 01:05:42.198762   61267 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 01:05:42.202915   61267 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 01:05:42.204549   61267 out.go:204]   - Booting up control plane ...
	I0416 01:05:42.204673   61267 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 01:05:42.204816   61267 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 01:05:42.205761   61267 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 01:05:42.225187   61267 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 01:05:42.225917   61267 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 01:05:42.225972   61267 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 01:05:42.367087   61267 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 01:05:41.210575   62747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34385
	I0416 01:05:41.211905   62747 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 01:05:41.211923   62747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0416 01:05:41.211942   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:05:41.212835   62747 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:41.212972   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:05:41.213577   62747 main.go:141] libmachine: Using API Version  1
	I0416 01:05:41.213597   62747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:41.213610   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:05:41.213628   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:05:41.214039   62747 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:41.214657   62747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:41.214693   62747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:41.215005   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:05:41.215635   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:05:41.215905   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:05:41.215933   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:05:41.216058   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:05:41.216109   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:05:41.216242   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:05:41.216303   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:05:41.216447   62747 sshutil.go:53] new ssh client: &{IP:192.168.61.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/embed-certs-617092/id_rsa Username:docker}
	I0416 01:05:41.216466   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:05:41.216544   62747 sshutil.go:53] new ssh client: &{IP:192.168.61.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/embed-certs-617092/id_rsa Username:docker}
	I0416 01:05:41.236284   62747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40007
	I0416 01:05:41.237670   62747 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:41.238270   62747 main.go:141] libmachine: Using API Version  1
	I0416 01:05:41.238288   62747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:41.241258   62747 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:41.241453   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetState
	I0416 01:05:41.243397   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 01:05:41.243724   62747 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0416 01:05:41.243740   62747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0416 01:05:41.243758   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:05:41.247426   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:05:41.248034   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:05:41.248144   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:05:41.248423   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:05:41.249376   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:05:41.249600   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:05:41.249799   62747 sshutil.go:53] new ssh client: &{IP:192.168.61.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/embed-certs-617092/id_rsa Username:docker}
	I0416 01:05:41.414823   62747 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 01:05:41.436007   62747 node_ready.go:35] waiting up to 6m0s for node "embed-certs-617092" to be "Ready" ...
	I0416 01:05:41.452344   62747 node_ready.go:49] node "embed-certs-617092" has status "Ready":"True"
	I0416 01:05:41.452370   62747 node_ready.go:38] duration metric: took 16.328329ms for node "embed-certs-617092" to be "Ready" ...
	I0416 01:05:41.452382   62747 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:05:41.467673   62747 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:41.477985   62747 pod_ready.go:92] pod "etcd-embed-certs-617092" in "kube-system" namespace has status "Ready":"True"
	I0416 01:05:41.478019   62747 pod_ready.go:81] duration metric: took 10.312538ms for pod "etcd-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:41.478032   62747 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:41.485978   62747 pod_ready.go:92] pod "kube-apiserver-embed-certs-617092" in "kube-system" namespace has status "Ready":"True"
	I0416 01:05:41.486003   62747 pod_ready.go:81] duration metric: took 7.961029ms for pod "kube-apiserver-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:41.486015   62747 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:41.491586   62747 pod_ready.go:92] pod "kube-controller-manager-embed-certs-617092" in "kube-system" namespace has status "Ready":"True"
	I0416 01:05:41.491608   62747 pod_ready.go:81] duration metric: took 5.584682ms for pod "kube-controller-manager-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:41.491619   62747 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-p4rh9" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:41.591874   62747 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0416 01:05:41.630528   62747 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0416 01:05:41.630554   62747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0416 01:05:41.653822   62747 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 01:05:41.718742   62747 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0416 01:05:41.718775   62747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0416 01:05:41.750701   62747 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0416 01:05:41.750725   62747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0416 01:05:41.798873   62747 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0416 01:05:41.961373   62747 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:41.961415   62747 main.go:141] libmachine: (embed-certs-617092) Calling .Close
	I0416 01:05:41.961857   62747 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:41.961879   62747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:41.961890   62747 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:41.961909   62747 main.go:141] libmachine: (embed-certs-617092) Calling .Close
	I0416 01:05:41.962200   62747 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:41.962205   62747 main.go:141] libmachine: (embed-certs-617092) DBG | Closing plugin on server side
	I0416 01:05:41.962216   62747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:41.974163   62747 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:41.974189   62747 main.go:141] libmachine: (embed-certs-617092) Calling .Close
	I0416 01:05:41.974517   62747 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:41.974537   62747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:42.721070   62747 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.067206266s)
	I0416 01:05:42.721119   62747 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:42.721130   62747 main.go:141] libmachine: (embed-certs-617092) Calling .Close
	I0416 01:05:42.721551   62747 main.go:141] libmachine: (embed-certs-617092) DBG | Closing plugin on server side
	I0416 01:05:42.721594   62747 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:42.721613   62747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:42.721636   62747 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:42.721648   62747 main.go:141] libmachine: (embed-certs-617092) Calling .Close
	I0416 01:05:42.721972   62747 main.go:141] libmachine: (embed-certs-617092) DBG | Closing plugin on server side
	I0416 01:05:42.721987   62747 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:42.722006   62747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:43.123544   62747 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.324616723s)
	I0416 01:05:43.123593   62747 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:43.123608   62747 main.go:141] libmachine: (embed-certs-617092) Calling .Close
	I0416 01:05:43.123867   62747 main.go:141] libmachine: (embed-certs-617092) DBG | Closing plugin on server side
	I0416 01:05:43.123906   62747 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:43.123913   62747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:43.123922   62747 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:43.123928   62747 main.go:141] libmachine: (embed-certs-617092) Calling .Close
	I0416 01:05:43.124218   62747 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:43.124234   62747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:43.124234   62747 main.go:141] libmachine: (embed-certs-617092) DBG | Closing plugin on server side
	I0416 01:05:43.124255   62747 addons.go:470] Verifying addon metrics-server=true in "embed-certs-617092"
	I0416 01:05:43.125829   62747 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0416 01:05:43.127138   62747 addons.go:505] duration metric: took 1.965815007s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
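
Enabling these addons means minikube copies each manifest (storageclass, storage-provisioner, metrics-server) into /etc/kubernetes/addons on the node and applies it with the node-local kubectl binary against /var/lib/minikube/kubeconfig, then verifies metrics-server. The sketch below reproduces that remote apply step for illustration: the remote command, key path, user, and IP are taken from the log, but invoking the system ssh binary is an assumption, since minikube drives this through its internal ssh_runner rather than a separate ssh process.

    // applyaddon.go — illustrative only: run the node-local kubectl apply shown
    // above over ssh. minikube uses its internal ssh_runner; shelling out to the
    // system ssh binary here is an assumption, not minikube's implementation.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        key := "/home/jenkins/minikube-integration/18647-7542/.minikube/machines/embed-certs-617092/id_rsa"
        remote := "docker@192.168.61.225"
        apply := "sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
            "/var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml"
        out, err := exec.Command("ssh", "-i", key, "-o", "StrictHostKeyChecking=no", remote, apply).CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            log.Fatal(err)
        }
    }
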
	I0416 01:05:43.536374   62747 pod_ready.go:102] pod "kube-proxy-p4rh9" in "kube-system" namespace has status "Ready":"False"
	I0416 01:05:44.000571   62747 pod_ready.go:92] pod "kube-proxy-p4rh9" in "kube-system" namespace has status "Ready":"True"
	I0416 01:05:44.000594   62747 pod_ready.go:81] duration metric: took 2.508967748s for pod "kube-proxy-p4rh9" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:44.000603   62747 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:44.006516   62747 pod_ready.go:92] pod "kube-scheduler-embed-certs-617092" in "kube-system" namespace has status "Ready":"True"
	I0416 01:05:44.006540   62747 pod_ready.go:81] duration metric: took 5.930755ms for pod "kube-scheduler-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:44.006546   62747 pod_ready.go:38] duration metric: took 2.554153393s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:05:44.006560   62747 api_server.go:52] waiting for apiserver process to appear ...
	I0416 01:05:44.006612   62747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:05:44.030705   62747 api_server.go:72] duration metric: took 2.869432993s to wait for apiserver process to appear ...
	I0416 01:05:44.030737   62747 api_server.go:88] waiting for apiserver healthz status ...
	I0416 01:05:44.030759   62747 api_server.go:253] Checking apiserver healthz at https://192.168.61.225:8443/healthz ...
	I0416 01:05:44.035576   62747 api_server.go:279] https://192.168.61.225:8443/healthz returned 200:
	ok
	I0416 01:05:44.037948   62747 api_server.go:141] control plane version: v1.29.3
	I0416 01:05:44.037973   62747 api_server.go:131] duration metric: took 7.228106ms to wait for apiserver health ...
	I0416 01:05:44.037983   62747 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 01:05:44.044543   62747 system_pods.go:59] 9 kube-system pods found
	I0416 01:05:44.044574   62747 system_pods.go:61] "coredns-76f75df574-2q58l" [e9b9d000-738b-4110-8757-17f76197285c] Running
	I0416 01:05:44.044581   62747 system_pods.go:61] "coredns-76f75df574-h8k4k" [1b114848-1137-4215-a966-03db39e4de23] Running
	I0416 01:05:44.044586   62747 system_pods.go:61] "etcd-embed-certs-617092" [f65e9307-4e12-4ac4-baca-7e1cfd7415d5] Running
	I0416 01:05:44.044591   62747 system_pods.go:61] "kube-apiserver-embed-certs-617092" [f55e02ce-45cf-4f6e-b8d7-7f305f22ea52] Running
	I0416 01:05:44.044596   62747 system_pods.go:61] "kube-controller-manager-embed-certs-617092" [d16739c1-36f4-4748-8533-fcc6cea0adee] Running
	I0416 01:05:44.044601   62747 system_pods.go:61] "kube-proxy-p4rh9" [42041028-d085-4ec4-8213-da3af0d5290e] Running
	I0416 01:05:44.044606   62747 system_pods.go:61] "kube-scheduler-embed-certs-617092" [d61e24fe-a5e3-41bf-b212-75764a036a26] Running
	I0416 01:05:44.044614   62747 system_pods.go:61] "metrics-server-57f55c9bc5-j5clp" [99808b2d-344f-43b7-a29c-01f0a2026aa8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 01:05:44.044623   62747 system_pods.go:61] "storage-provisioner" [5a62c0f7-0b15-48f3-9c17-d5966d39fbd5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0416 01:05:44.044635   62747 system_pods.go:74] duration metric: took 6.6454ms to wait for pod list to return data ...
	I0416 01:05:44.044652   62747 default_sa.go:34] waiting for default service account to be created ...
	I0416 01:05:44.241344   62747 default_sa.go:45] found service account: "default"
	I0416 01:05:44.241370   62747 default_sa.go:55] duration metric: took 196.710973ms for default service account to be created ...
	I0416 01:05:44.241379   62747 system_pods.go:116] waiting for k8s-apps to be running ...
	I0416 01:05:44.450798   62747 system_pods.go:86] 9 kube-system pods found
	I0416 01:05:44.450825   62747 system_pods.go:89] "coredns-76f75df574-2q58l" [e9b9d000-738b-4110-8757-17f76197285c] Running
	I0416 01:05:44.450831   62747 system_pods.go:89] "coredns-76f75df574-h8k4k" [1b114848-1137-4215-a966-03db39e4de23] Running
	I0416 01:05:44.450835   62747 system_pods.go:89] "etcd-embed-certs-617092" [f65e9307-4e12-4ac4-baca-7e1cfd7415d5] Running
	I0416 01:05:44.450839   62747 system_pods.go:89] "kube-apiserver-embed-certs-617092" [f55e02ce-45cf-4f6e-b8d7-7f305f22ea52] Running
	I0416 01:05:44.450844   62747 system_pods.go:89] "kube-controller-manager-embed-certs-617092" [d16739c1-36f4-4748-8533-fcc6cea0adee] Running
	I0416 01:05:44.450848   62747 system_pods.go:89] "kube-proxy-p4rh9" [42041028-d085-4ec4-8213-da3af0d5290e] Running
	I0416 01:05:44.450851   62747 system_pods.go:89] "kube-scheduler-embed-certs-617092" [d61e24fe-a5e3-41bf-b212-75764a036a26] Running
	I0416 01:05:44.450858   62747 system_pods.go:89] "metrics-server-57f55c9bc5-j5clp" [99808b2d-344f-43b7-a29c-01f0a2026aa8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 01:05:44.450864   62747 system_pods.go:89] "storage-provisioner" [5a62c0f7-0b15-48f3-9c17-d5966d39fbd5] Running
	I0416 01:05:44.450871   62747 system_pods.go:126] duration metric: took 209.487599ms to wait for k8s-apps to be running ...
	I0416 01:05:44.450889   62747 system_svc.go:44] waiting for kubelet service to be running ....
	I0416 01:05:44.450943   62747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 01:05:44.470820   62747 system_svc.go:56] duration metric: took 19.925743ms WaitForService to wait for kubelet
	I0416 01:05:44.470853   62747 kubeadm.go:576] duration metric: took 3.309585995s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 01:05:44.470876   62747 node_conditions.go:102] verifying NodePressure condition ...
	I0416 01:05:44.642093   62747 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 01:05:44.642123   62747 node_conditions.go:123] node cpu capacity is 2
	I0416 01:05:44.642135   62747 node_conditions.go:105] duration metric: took 171.253415ms to run NodePressure ...
	I0416 01:05:44.642149   62747 start.go:240] waiting for startup goroutines ...
	I0416 01:05:44.642158   62747 start.go:245] waiting for cluster config update ...
	I0416 01:05:44.642171   62747 start.go:254] writing updated cluster config ...
	I0416 01:05:44.642519   62747 ssh_runner.go:195] Run: rm -f paused
	I0416 01:05:44.707141   62747 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0416 01:05:44.709274   62747 out.go:177] * Done! kubectl is now configured to use "embed-certs-617092" cluster and "default" namespace by default
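For reference, the readiness gates logged above (the apiserver /healthz probe, the kube-system pod list, and the kubelet unit check) can be spot-checked by hand against the finished cluster. A minimal sketch, assuming the endpoint and profile name reported above are still current and reachable from the test host:

        # apiserver health, same endpoint the start routine probed (anonymous access to /healthz is allowed by default)
        curl -k https://192.168.61.225:8443/healthz
        # kube-system pods, via the kubectl context minikube just wrote
        kubectl --context embed-certs-617092 get pods -n kube-system
        # kubelet unit state on the node
        minikube -p embed-certs-617092 ssh -- sudo systemctl is-active kubelet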
	I0416 01:05:48.372574   61267 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.002543 seconds
	I0416 01:05:48.385076   61267 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0416 01:05:48.406058   61267 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0416 01:05:48.938329   61267 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0416 01:05:48.938556   61267 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-653942 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0416 01:05:49.458321   61267 kubeadm.go:309] [bootstrap-token] Using token: 5ddaoe.tvzldvzlkbeta1a9
	I0416 01:05:49.459891   61267 out.go:204]   - Configuring RBAC rules ...
	I0416 01:05:49.460064   61267 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0416 01:05:49.465799   61267 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0416 01:05:49.477346   61267 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0416 01:05:49.482154   61267 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0416 01:05:49.485769   61267 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0416 01:05:49.489199   61267 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0416 01:05:49.504774   61267 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0416 01:05:49.770133   61267 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0416 01:05:49.872777   61267 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0416 01:05:49.874282   61267 kubeadm.go:309] 
	I0416 01:05:49.874384   61267 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0416 01:05:49.874400   61267 kubeadm.go:309] 
	I0416 01:05:49.874560   61267 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0416 01:05:49.874580   61267 kubeadm.go:309] 
	I0416 01:05:49.874602   61267 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0416 01:05:49.874673   61267 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0416 01:05:49.874754   61267 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0416 01:05:49.874766   61267 kubeadm.go:309] 
	I0416 01:05:49.874853   61267 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0416 01:05:49.874878   61267 kubeadm.go:309] 
	I0416 01:05:49.874944   61267 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0416 01:05:49.874956   61267 kubeadm.go:309] 
	I0416 01:05:49.875019   61267 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0416 01:05:49.875141   61267 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0416 01:05:49.875246   61267 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0416 01:05:49.875257   61267 kubeadm.go:309] 
	I0416 01:05:49.875432   61267 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0416 01:05:49.875552   61267 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0416 01:05:49.875562   61267 kubeadm.go:309] 
	I0416 01:05:49.875657   61267 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token 5ddaoe.tvzldvzlkbeta1a9 \
	I0416 01:05:49.875754   61267 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b25013486716312e69a135916990864bd549ec06035b8322dd39250241b50fde \
	I0416 01:05:49.875774   61267 kubeadm.go:309] 	--control-plane 
	I0416 01:05:49.875780   61267 kubeadm.go:309] 
	I0416 01:05:49.875859   61267 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0416 01:05:49.875869   61267 kubeadm.go:309] 
	I0416 01:05:49.875949   61267 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token 5ddaoe.tvzldvzlkbeta1a9 \
	I0416 01:05:49.876085   61267 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b25013486716312e69a135916990864bd549ec06035b8322dd39250241b50fde 
	I0416 01:05:49.876640   61267 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 01:05:49.876666   61267 cni.go:84] Creating CNI manager for ""
	I0416 01:05:49.876676   61267 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 01:05:49.878703   61267 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0416 01:05:49.880070   61267 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0416 01:05:49.897752   61267 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
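If the bridge CNI configuration written here needs to be inspected later, the file path from the log can be read back directly on the node; a one-line sketch, assuming the profile name used in this run:

        minikube -p default-k8s-diff-port-653942 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist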
	I0416 01:05:49.969146   61267 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0416 01:05:49.969228   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:49.969228   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-653942 minikube.k8s.io/updated_at=2024_04_16T01_05_49_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=e98a202b00bb48daab8bbee2f0c7bcf1556f8388 minikube.k8s.io/name=default-k8s-diff-port-653942 minikube.k8s.io/primary=true
	I0416 01:05:50.233119   61267 ops.go:34] apiserver oom_adj: -16
	I0416 01:05:50.233262   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:50.733748   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:51.234361   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:51.733704   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:52.233367   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:52.733789   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:53.234012   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:53.733458   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:54.233341   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:54.734148   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:55.233710   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:55.734135   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:56.233315   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:56.734162   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:57.233899   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:57.733337   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:58.234101   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:58.734357   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:59.233831   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:59.733286   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:06:00.233847   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:06:00.733872   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:06:01.233935   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:06:01.733629   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:06:02.233967   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:06:02.734163   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:06:03.233294   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:06:03.412834   61267 kubeadm.go:1107] duration metric: took 13.44368469s to wait for elevateKubeSystemPrivileges
	W0416 01:06:03.412896   61267 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0416 01:06:03.412907   61267 kubeadm.go:393] duration metric: took 5m17.8108087s to StartCluster
	I0416 01:06:03.412926   61267 settings.go:142] acquiring lock: {Name:mk6e42a297b4f7bfb79727f203ae36d752cbb6a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:06:03.413003   61267 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0416 01:06:03.414974   61267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/kubeconfig: {Name:mkbb3b028de7d57df8335e83f6dfa1b0eacb2fb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:06:03.415299   61267 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.216 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0416 01:06:03.417148   61267 out.go:177] * Verifying Kubernetes components...
	I0416 01:06:03.415390   61267 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0416 01:06:03.415510   61267 config.go:182] Loaded profile config "default-k8s-diff-port-653942": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 01:06:03.417238   61267 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-653942"
	I0416 01:06:03.419134   61267 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-653942"
	W0416 01:06:03.419147   61267 addons.go:243] addon storage-provisioner should already be in state true
	I0416 01:06:03.417247   61267 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-653942"
	I0416 01:06:03.419188   61267 host.go:66] Checking if "default-k8s-diff-port-653942" exists ...
	I0416 01:06:03.419214   61267 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-653942"
	I0416 01:06:03.417245   61267 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-653942"
	I0416 01:06:03.419095   61267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	W0416 01:06:03.419262   61267 addons.go:243] addon metrics-server should already be in state true
	I0416 01:06:03.419307   61267 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-653942"
	I0416 01:06:03.419327   61267 host.go:66] Checking if "default-k8s-diff-port-653942" exists ...
	I0416 01:06:03.419606   61267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:06:03.419644   61267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:06:03.419662   61267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:06:03.419698   61267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:06:03.419722   61267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:06:03.419756   61267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:06:03.435784   61267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44663
	I0416 01:06:03.435800   61267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37505
	I0416 01:06:03.436294   61267 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:06:03.436296   61267 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:06:03.436811   61267 main.go:141] libmachine: Using API Version  1
	I0416 01:06:03.436838   61267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:06:03.437097   61267 main.go:141] libmachine: Using API Version  1
	I0416 01:06:03.437115   61267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:06:03.437203   61267 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:06:03.437683   61267 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:06:03.437757   61267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:06:03.437790   61267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:06:03.438213   61267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33329
	I0416 01:06:03.438248   61267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:06:03.438273   61267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:06:03.438786   61267 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:06:03.439301   61267 main.go:141] libmachine: Using API Version  1
	I0416 01:06:03.439332   61267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:06:03.439810   61267 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:06:03.440162   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetState
	I0416 01:06:03.443879   61267 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-653942"
	W0416 01:06:03.443906   61267 addons.go:243] addon default-storageclass should already be in state true
	I0416 01:06:03.443941   61267 host.go:66] Checking if "default-k8s-diff-port-653942" exists ...
	I0416 01:06:03.444301   61267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:06:03.444340   61267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:06:03.454673   61267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43261
	I0416 01:06:03.455111   61267 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:06:03.455715   61267 main.go:141] libmachine: Using API Version  1
	I0416 01:06:03.455742   61267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:06:03.456116   61267 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:06:03.456318   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetState
	I0416 01:06:03.457870   61267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39341
	I0416 01:06:03.458086   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 01:06:03.458278   61267 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:06:03.462516   61267 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0416 01:06:03.458862   61267 main.go:141] libmachine: Using API Version  1
	I0416 01:06:03.460354   61267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43753
	I0416 01:06:03.464491   61267 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0416 01:06:03.464509   61267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0416 01:06:03.464529   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:06:03.464551   61267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:06:03.464960   61267 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:06:03.465281   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetState
	I0416 01:06:03.465552   61267 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:06:03.466181   61267 main.go:141] libmachine: Using API Version  1
	I0416 01:06:03.466205   61267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:06:03.466760   61267 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:06:03.467410   61267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:06:03.467435   61267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:06:03.467638   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 01:06:03.469647   61267 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 01:06:03.471009   61267 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 01:06:03.471024   61267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0416 01:06:03.469242   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:06:03.471040   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:06:03.469767   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:06:03.471070   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:06:03.471133   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:06:03.471297   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:06:03.471478   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:06:03.471661   61267 sshutil.go:53] new ssh client: &{IP:192.168.50.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/default-k8s-diff-port-653942/id_rsa Username:docker}
	I0416 01:06:03.473778   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:06:03.474203   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:06:03.474226   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:06:03.474421   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:06:03.474605   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:06:03.474784   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:06:03.474958   61267 sshutil.go:53] new ssh client: &{IP:192.168.50.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/default-k8s-diff-port-653942/id_rsa Username:docker}
	I0416 01:06:03.485829   61267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46571
	I0416 01:06:03.486293   61267 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:06:03.486876   61267 main.go:141] libmachine: Using API Version  1
	I0416 01:06:03.486900   61267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:06:03.487362   61267 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:06:03.487535   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetState
	I0416 01:06:03.489207   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 01:06:03.489529   61267 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0416 01:06:03.489549   61267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0416 01:06:03.489568   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:06:03.492570   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:06:03.492932   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:06:03.492958   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:06:03.493224   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:06:03.493379   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:06:03.493557   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:06:03.493673   61267 sshutil.go:53] new ssh client: &{IP:192.168.50.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/default-k8s-diff-port-653942/id_rsa Username:docker}
	I0416 01:06:03.680085   61267 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 01:06:03.724011   61267 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-653942" to be "Ready" ...
	I0416 01:06:03.739131   61267 node_ready.go:49] node "default-k8s-diff-port-653942" has status "Ready":"True"
	I0416 01:06:03.739152   61267 node_ready.go:38] duration metric: took 15.111832ms for node "default-k8s-diff-port-653942" to be "Ready" ...
	I0416 01:06:03.739161   61267 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:06:03.748081   61267 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-5nnpv" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:03.810063   61267 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0416 01:06:03.810090   61267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0416 01:06:03.812595   61267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0416 01:06:03.848165   61267 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0416 01:06:03.848187   61267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0416 01:06:03.991110   61267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 01:06:03.997100   61267 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0416 01:06:03.997133   61267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0416 01:06:04.093267   61267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0416 01:06:04.349978   61267 main.go:141] libmachine: Making call to close driver server
	I0416 01:06:04.350011   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .Close
	I0416 01:06:04.350336   61267 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:06:04.350396   61267 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:06:04.350415   61267 main.go:141] libmachine: Making call to close driver server
	I0416 01:06:04.350420   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | Closing plugin on server side
	I0416 01:06:04.350425   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .Close
	I0416 01:06:04.350683   61267 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:06:04.350699   61267 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:06:04.416648   61267 main.go:141] libmachine: Making call to close driver server
	I0416 01:06:04.416674   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .Close
	I0416 01:06:04.416982   61267 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:06:04.417001   61267 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:06:05.206973   61267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.113663167s)
	I0416 01:06:05.207025   61267 main.go:141] libmachine: Making call to close driver server
	I0416 01:06:05.207040   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .Close
	I0416 01:06:05.207039   61267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.215892308s)
	I0416 01:06:05.207078   61267 main.go:141] libmachine: Making call to close driver server
	I0416 01:06:05.207090   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .Close
	I0416 01:06:05.207371   61267 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:06:05.207388   61267 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:06:05.207397   61267 main.go:141] libmachine: Making call to close driver server
	I0416 01:06:05.207405   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .Close
	I0416 01:06:05.207445   61267 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:06:05.207462   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | Closing plugin on server side
	I0416 01:06:05.207466   61267 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:06:05.207490   61267 main.go:141] libmachine: Making call to close driver server
	I0416 01:06:05.207508   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .Close
	I0416 01:06:05.207610   61267 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:06:05.207644   61267 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:06:05.207654   61267 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-653942"
	I0416 01:06:05.207654   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | Closing plugin on server side
	I0416 01:06:05.209411   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | Closing plugin on server side
	I0416 01:06:05.209402   61267 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:06:05.209469   61267 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:06:05.212071   61267 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0416 01:06:05.213412   61267 addons.go:505] duration metric: took 1.798038731s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I0416 01:06:05.256497   61267 pod_ready.go:92] pod "coredns-76f75df574-5nnpv" in "kube-system" namespace has status "Ready":"True"
	I0416 01:06:05.256526   61267 pod_ready.go:81] duration metric: took 1.508419977s for pod "coredns-76f75df574-5nnpv" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.256538   61267 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-zpnhs" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.262092   61267 pod_ready.go:92] pod "coredns-76f75df574-zpnhs" in "kube-system" namespace has status "Ready":"True"
	I0416 01:06:05.262112   61267 pod_ready.go:81] duration metric: took 5.566499ms for pod "coredns-76f75df574-zpnhs" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.262121   61267 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.267256   61267 pod_ready.go:92] pod "etcd-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"True"
	I0416 01:06:05.267278   61267 pod_ready.go:81] duration metric: took 5.149782ms for pod "etcd-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.267286   61267 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.272119   61267 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"True"
	I0416 01:06:05.272144   61267 pod_ready.go:81] duration metric: took 4.851008ms for pod "kube-apiserver-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.272155   61267 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.328440   61267 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"True"
	I0416 01:06:05.328470   61267 pod_ready.go:81] duration metric: took 56.30531ms for pod "kube-controller-manager-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.328482   61267 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mg5km" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.729518   61267 pod_ready.go:92] pod "kube-proxy-mg5km" in "kube-system" namespace has status "Ready":"True"
	I0416 01:06:05.729544   61267 pod_ready.go:81] duration metric: took 401.055058ms for pod "kube-proxy-mg5km" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.729553   61267 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:06.127535   61267 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"True"
	I0416 01:06:06.127558   61267 pod_ready.go:81] duration metric: took 397.998988ms for pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:06.127565   61267 pod_ready.go:38] duration metric: took 2.388395448s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:06:06.127577   61267 api_server.go:52] waiting for apiserver process to appear ...
	I0416 01:06:06.127620   61267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:06:06.150179   61267 api_server.go:72] duration metric: took 2.734842767s to wait for apiserver process to appear ...
	I0416 01:06:06.150208   61267 api_server.go:88] waiting for apiserver healthz status ...
	I0416 01:06:06.150226   61267 api_server.go:253] Checking apiserver healthz at https://192.168.50.216:8444/healthz ...
	I0416 01:06:06.154310   61267 api_server.go:279] https://192.168.50.216:8444/healthz returned 200:
	ok
	I0416 01:06:06.155393   61267 api_server.go:141] control plane version: v1.29.3
	I0416 01:06:06.155409   61267 api_server.go:131] duration metric: took 5.194458ms to wait for apiserver health ...
	I0416 01:06:06.155421   61267 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 01:06:06.333873   61267 system_pods.go:59] 9 kube-system pods found
	I0416 01:06:06.333909   61267 system_pods.go:61] "coredns-76f75df574-5nnpv" [3350aca5-639e-44a1-bd84-d1e4b6486143] Running
	I0416 01:06:06.333914   61267 system_pods.go:61] "coredns-76f75df574-zpnhs" [990672b6-bb3a-4f91-8de7-7c2ec224c94a] Running
	I0416 01:06:06.333917   61267 system_pods.go:61] "etcd-default-k8s-diff-port-653942" [e72e89e9-c274-4d4d-b1f9-43bea95cd015] Running
	I0416 01:06:06.333920   61267 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-653942" [c1652126-b4c2-41cf-a574-9784f7800374] Running
	I0416 01:06:06.333923   61267 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-653942" [1f43936c-ba39-44f9-b9b7-2a149f26a880] Running
	I0416 01:06:06.333926   61267 system_pods.go:61] "kube-proxy-mg5km" [74764194-1f31-40b1-90b5-497e248ab7da] Running
	I0416 01:06:06.333929   61267 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-653942" [48058ade-c30d-4dc9-b6c0-32b2ed5fc88a] Running
	I0416 01:06:06.333935   61267 system_pods.go:61] "metrics-server-57f55c9bc5-6jn29" [1eec2ffb-ce59-45cb-b6b4-cd010549510e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 01:06:06.333938   61267 system_pods.go:61] "storage-provisioner" [d131c1fc-9124-4b46-a16f-a8fb5029a57b] Running
	I0416 01:06:06.333947   61267 system_pods.go:74] duration metric: took 178.520515ms to wait for pod list to return data ...
	I0416 01:06:06.333953   61267 default_sa.go:34] waiting for default service account to be created ...
	I0416 01:06:06.528119   61267 default_sa.go:45] found service account: "default"
	I0416 01:06:06.528148   61267 default_sa.go:55] duration metric: took 194.18786ms for default service account to be created ...
	I0416 01:06:06.528158   61267 system_pods.go:116] waiting for k8s-apps to be running ...
	I0416 01:06:06.731573   61267 system_pods.go:86] 9 kube-system pods found
	I0416 01:06:06.731600   61267 system_pods.go:89] "coredns-76f75df574-5nnpv" [3350aca5-639e-44a1-bd84-d1e4b6486143] Running
	I0416 01:06:06.731606   61267 system_pods.go:89] "coredns-76f75df574-zpnhs" [990672b6-bb3a-4f91-8de7-7c2ec224c94a] Running
	I0416 01:06:06.731610   61267 system_pods.go:89] "etcd-default-k8s-diff-port-653942" [e72e89e9-c274-4d4d-b1f9-43bea95cd015] Running
	I0416 01:06:06.731614   61267 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-653942" [c1652126-b4c2-41cf-a574-9784f7800374] Running
	I0416 01:06:06.731619   61267 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-653942" [1f43936c-ba39-44f9-b9b7-2a149f26a880] Running
	I0416 01:06:06.731622   61267 system_pods.go:89] "kube-proxy-mg5km" [74764194-1f31-40b1-90b5-497e248ab7da] Running
	I0416 01:06:06.731626   61267 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-653942" [48058ade-c30d-4dc9-b6c0-32b2ed5fc88a] Running
	I0416 01:06:06.731633   61267 system_pods.go:89] "metrics-server-57f55c9bc5-6jn29" [1eec2ffb-ce59-45cb-b6b4-cd010549510e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 01:06:06.731638   61267 system_pods.go:89] "storage-provisioner" [d131c1fc-9124-4b46-a16f-a8fb5029a57b] Running
	I0416 01:06:06.731649   61267 system_pods.go:126] duration metric: took 203.485273ms to wait for k8s-apps to be running ...
	I0416 01:06:06.731659   61267 system_svc.go:44] waiting for kubelet service to be running ....
	I0416 01:06:06.731700   61267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 01:06:06.749013   61267 system_svc.go:56] duration metric: took 17.343008ms WaitForService to wait for kubelet
	I0416 01:06:06.749048   61267 kubeadm.go:576] duration metric: took 3.333716529s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 01:06:06.749072   61267 node_conditions.go:102] verifying NodePressure condition ...
	I0416 01:06:06.927701   61267 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 01:06:06.927725   61267 node_conditions.go:123] node cpu capacity is 2
	I0416 01:06:06.927735   61267 node_conditions.go:105] duration metric: took 178.65899ms to run NodePressure ...
	I0416 01:06:06.927746   61267 start.go:240] waiting for startup goroutines ...
	I0416 01:06:06.927754   61267 start.go:245] waiting for cluster config update ...
	I0416 01:06:06.927763   61267 start.go:254] writing updated cluster config ...
	I0416 01:06:06.928000   61267 ssh_runner.go:195] Run: rm -f paused
	I0416 01:06:06.978823   61267 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0416 01:06:06.981011   61267 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-653942" cluster and "default" namespace by default
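The same kind of spot check applies to this profile; here the log shows metrics-server still Pending when "Done!" was printed, so the more interesting follow-up is watching the addon finish rolling out. A sketch, assuming the profile and context names reported above and that the addon is backed by a Deployment named metrics-server in kube-system:

        # addons the profile reports as enabled
        minikube -p default-k8s-diff-port-653942 addons list
        # wait for the metrics-server deployment that was still Pending above
        kubectl --context default-k8s-diff-port-653942 -n kube-system rollout status deployment/metrics-server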
	I0416 01:06:14.261576   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:06:14.261834   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:06:14.261849   62139 kubeadm.go:309] 
	I0416 01:06:14.261890   62139 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0416 01:06:14.261973   62139 kubeadm.go:309] 		timed out waiting for the condition
	I0416 01:06:14.262006   62139 kubeadm.go:309] 
	I0416 01:06:14.262051   62139 kubeadm.go:309] 	This error is likely caused by:
	I0416 01:06:14.262082   62139 kubeadm.go:309] 		- The kubelet is not running
	I0416 01:06:14.262174   62139 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0416 01:06:14.262199   62139 kubeadm.go:309] 
	I0416 01:06:14.262357   62139 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0416 01:06:14.262414   62139 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0416 01:06:14.262471   62139 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0416 01:06:14.262481   62139 kubeadm.go:309] 
	I0416 01:06:14.262610   62139 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0416 01:06:14.262707   62139 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0416 01:06:14.262717   62139 kubeadm.go:309] 
	I0416 01:06:14.262867   62139 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0416 01:06:14.263010   62139 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0416 01:06:14.263142   62139 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0416 01:06:14.263211   62139 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0416 01:06:14.263234   62139 kubeadm.go:309] 
	I0416 01:06:14.264084   62139 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 01:06:14.264204   62139 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0416 01:06:14.264312   62139 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0416 01:06:14.264460   62139 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
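The kubeadm output above already names the triage commands; run against the affected node they look roughly like the sketch below, where <profile> is a placeholder because the failing profile is not named in this part of the log:

        # kubelet status and recent logs, as suggested by kubeadm
        minikube -p <profile> ssh -- sudo systemctl status kubelet
        minikube -p <profile> ssh -- sudo journalctl -xeu kubelet --no-pager | tail -n 50
        # control-plane containers known to CRI-O
        minikube -p <profile> ssh -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a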
	
	I0416 01:06:14.264526   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0416 01:06:15.653692   62139 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.389136497s)
	I0416 01:06:15.653831   62139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 01:06:15.669141   62139 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 01:06:15.679485   62139 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 01:06:15.679511   62139 kubeadm.go:156] found existing configuration files:
	
	I0416 01:06:15.679556   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 01:06:15.689898   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 01:06:15.689974   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 01:06:15.700563   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 01:06:15.710363   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 01:06:15.710445   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 01:06:15.719877   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 01:06:15.728947   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 01:06:15.729002   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 01:06:15.739360   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 01:06:15.749479   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 01:06:15.749557   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 01:06:15.760930   62139 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 01:06:16.000974   62139 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 01:08:12.327133   62139 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0416 01:08:12.327246   62139 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0416 01:08:12.328995   62139 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0416 01:08:12.329092   62139 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 01:08:12.329220   62139 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 01:08:12.329302   62139 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 01:08:12.329440   62139 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 01:08:12.329537   62139 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 01:08:12.331381   62139 out.go:204]   - Generating certificates and keys ...
	I0416 01:08:12.331474   62139 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 01:08:12.331558   62139 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 01:08:12.331658   62139 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0416 01:08:12.331742   62139 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0416 01:08:12.331830   62139 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0416 01:08:12.331910   62139 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0416 01:08:12.331968   62139 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0416 01:08:12.332020   62139 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0416 01:08:12.332085   62139 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0416 01:08:12.332159   62139 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0416 01:08:12.332210   62139 kubeadm.go:309] [certs] Using the existing "sa" key
	I0416 01:08:12.332297   62139 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 01:08:12.332376   62139 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 01:08:12.332466   62139 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 01:08:12.332547   62139 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 01:08:12.332642   62139 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 01:08:12.332790   62139 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 01:08:12.332895   62139 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 01:08:12.332938   62139 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 01:08:12.333002   62139 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 01:08:12.334632   62139 out.go:204]   - Booting up control plane ...
	I0416 01:08:12.334737   62139 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 01:08:12.334837   62139 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 01:08:12.334928   62139 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 01:08:12.335009   62139 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 01:08:12.335162   62139 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 01:08:12.335241   62139 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0416 01:08:12.335333   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:08:12.335541   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:08:12.335613   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:08:12.335771   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:08:12.335848   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:08:12.336035   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:08:12.336109   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:08:12.336365   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:08:12.336438   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:08:12.336704   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:08:12.336716   62139 kubeadm.go:309] 
	I0416 01:08:12.336779   62139 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0416 01:08:12.336827   62139 kubeadm.go:309] 		timed out waiting for the condition
	I0416 01:08:12.336834   62139 kubeadm.go:309] 
	I0416 01:08:12.336883   62139 kubeadm.go:309] 	This error is likely caused by:
	I0416 01:08:12.336922   62139 kubeadm.go:309] 		- The kubelet is not running
	I0416 01:08:12.337025   62139 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0416 01:08:12.337036   62139 kubeadm.go:309] 
	I0416 01:08:12.337145   62139 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0416 01:08:12.337211   62139 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0416 01:08:12.337245   62139 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0416 01:08:12.337253   62139 kubeadm.go:309] 
	I0416 01:08:12.337340   62139 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0416 01:08:12.337428   62139 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0416 01:08:12.337436   62139 kubeadm.go:309] 
	I0416 01:08:12.337529   62139 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0416 01:08:12.337602   62139 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0416 01:08:12.337701   62139 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0416 01:08:12.337870   62139 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0416 01:08:12.337957   62139 kubeadm.go:393] duration metric: took 8m4.174818047s to StartCluster
	I0416 01:08:12.337969   62139 kubeadm.go:309] 
	I0416 01:08:12.338009   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:08:12.338067   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:08:12.391937   62139 cri.go:89] found id: ""
	I0416 01:08:12.391963   62139 logs.go:276] 0 containers: []
	W0416 01:08:12.391986   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:08:12.391994   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:08:12.392072   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:08:12.430575   62139 cri.go:89] found id: ""
	I0416 01:08:12.430602   62139 logs.go:276] 0 containers: []
	W0416 01:08:12.430616   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:08:12.430623   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:08:12.430685   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:08:12.469115   62139 cri.go:89] found id: ""
	I0416 01:08:12.469143   62139 logs.go:276] 0 containers: []
	W0416 01:08:12.469152   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:08:12.469173   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:08:12.469228   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:08:12.508599   62139 cri.go:89] found id: ""
	I0416 01:08:12.508630   62139 logs.go:276] 0 containers: []
	W0416 01:08:12.508640   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:08:12.508648   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:08:12.508698   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:08:12.547785   62139 cri.go:89] found id: ""
	I0416 01:08:12.547817   62139 logs.go:276] 0 containers: []
	W0416 01:08:12.547829   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:08:12.547836   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:08:12.547910   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:08:12.599526   62139 cri.go:89] found id: ""
	I0416 01:08:12.599549   62139 logs.go:276] 0 containers: []
	W0416 01:08:12.599557   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:08:12.599563   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:08:12.599612   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:08:12.639914   62139 cri.go:89] found id: ""
	I0416 01:08:12.639944   62139 logs.go:276] 0 containers: []
	W0416 01:08:12.639954   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:08:12.639962   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:08:12.640041   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:08:12.676025   62139 cri.go:89] found id: ""
	I0416 01:08:12.676057   62139 logs.go:276] 0 containers: []
	W0416 01:08:12.676066   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
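	At this point kubeadm has given up and minikube probes CRI-O for each expected control-plane container by name, finding none. The same per-component check can be reproduced by hand with the command already shown in the log; a sketch, to be run on the node itself:

		for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
		  echo "== $name =="
		  sudo crictl ps -a --quiet --name="$name"
		done
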
	I0416 01:08:12.676079   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:08:12.676100   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:08:12.774744   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:08:12.774769   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:08:12.774785   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:08:12.902751   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:08:12.902787   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:08:12.947370   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:08:12.947406   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:08:13.002186   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:08:13.002223   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0416 01:08:13.017193   62139 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0416 01:08:13.017234   62139 out.go:239] * 
	W0416 01:08:13.017283   62139 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0416 01:08:13.017304   62139 out.go:239] * 
	W0416 01:08:13.018151   62139 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0416 01:08:13.021371   62139 out.go:177] 
	W0416 01:08:13.022572   62139 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0416 01:08:13.022640   62139 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0416 01:08:13.022670   62139 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0416 01:08:13.024248   62139 out.go:177] 
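	The start-up failure above ends with minikube's own remediation hints. A minimal follow-up sketch, assuming shell access to the affected node (for example via `minikube ssh -p <profile>`); every command below is one the log itself recommends:

		# The health endpoint kubeadm was polling when it timed out.
		curl -sSL http://localhost:10248/healthz

		# Kubelet state and recent logs.
		sudo systemctl status kubelet
		sudo journalctl -xeu kubelet

		# Any control-plane containers CRI-O managed to start.
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

		# Suggested retry if a cgroup-driver mismatch is the cause:
		# minikube start --extra-config=kubelet.cgroup-driver=systemd
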
	
	
	==> CRI-O <==
	Apr 16 01:15:09 default-k8s-diff-port-653942 crio[729]: time="2024-04-16 01:15:09.032966127Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c604855d-e174-4944-9803-9ec259b8380c name=/runtime.v1.RuntimeService/Version
	Apr 16 01:15:09 default-k8s-diff-port-653942 crio[729]: time="2024-04-16 01:15:09.036854976Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=17b65e23-4604-4f8e-a0fd-4de6f6ebad9b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:15:09 default-k8s-diff-port-653942 crio[729]: time="2024-04-16 01:15:09.037435077Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713230109037322400,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=17b65e23-4604-4f8e-a0fd-4de6f6ebad9b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:15:09 default-k8s-diff-port-653942 crio[729]: time="2024-04-16 01:15:09.038130680Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=93a87247-dac0-4c8d-9b5c-12420cc8ea8a name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:15:09 default-k8s-diff-port-653942 crio[729]: time="2024-04-16 01:15:09.038220065Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=93a87247-dac0-4c8d-9b5c-12420cc8ea8a name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:15:09 default-k8s-diff-port-653942 crio[729]: time="2024-04-16 01:15:09.038559355Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c4ddd594d7334d0604fd41c36bda70b56c09e95f393c080374978e5783c53f6d,PodSandboxId:43d8a636bc0f240f869482ade4f50c4f032894ad853abe8d6d28ebaea502c41b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713229565603417369,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d131c1fc-9124-4b46-a16f-a8fb5029a57b,},Annotations:map[string]string{io.kubernetes.container.hash: 27457e9a,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c41790569cbe9a105aa5b3e904bdf461b0c6b1b9e64053f82f73cdff95cea28,PodSandboxId:61a7932db1c4e83f4cf320bbce30df3b5a9b3ab4ac956460df9324b30e32f5f1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713229564042917327,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-zpnhs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 990672b6-bb3a-4f91-8de7-7c2ec224c94a,},Annotations:map[string]string{io.kubernetes.container.hash: ad74a928,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5b1e8894217a61058b0ac838ea292710915b108ad0417eb72b6302ddaf9e3d2,PodSandboxId:3d5fd5064cf9db355f19293b3000fc3efc42b727a4a120cc50a3d1fa129e96a5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713229564127266379,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-5nnpv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 3350aca5-639e-44a1-bd84-d1e4b6486143,},Annotations:map[string]string{io.kubernetes.container.hash: 790c3ff0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ffc152b91a92e5cdc34e606f40d065686234148d7797e82423044fa41c461cd,PodSandboxId:c56c92061323a12b7649ec6dd0fb3d46f1f73c566d613aae3feec63a83f4aae8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING
,CreatedAt:1713229563406353932,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mg5km,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74764194-1f31-40b1-90b5-497e248ab7da,},Annotations:map[string]string{io.kubernetes.container.hash: 37bec7a6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:790f3485688cb7e8ce2b6eaa4acfac3f546e15f3b3d3011fb7d3babb7e28d508,PodSandboxId:9b015167635ab9ad101c5aa226284ddd0122e478292139bc942af14687c8491e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:171322954370919875
8,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-653942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f789bc57c1f4c290ab8fd275d2010d6a,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73dd87507a5ddd51947b5702eca37ec8a3c1d395ce443b1ca82fbd1d955329e6,PodSandboxId:eb65098ef8e126b3bbf9beff1cb98e3d528f467ab1c0568770988d0414c6ff79,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:17132295436
97138964,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-653942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdee61dbfd28bab5575146238429925f,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cb8787026c8eadc21223e448145ec1f2032be0623aa0e3d20d4a5680f0d26fb,PodSandboxId:177c67d63aab2abb2c6c4a6962a793f26a17c920f511526d529265155cb89ce4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:17132
29543671489451,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-653942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 027e6feb7cb85911f362954ce5f74701,},Annotations:map[string]string{io.kubernetes.container.hash: 41d79b00,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e135f634e26f7cb1cb96c36197c8120329ba60121e04fcf857d389b138d5879,PodSandboxId:697d63ff93426c3e8636123ffb9438aa3699fb072516fb61f83171ed69528a5c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713229543615136997,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-653942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42a0243852318f4be1b779e458bfa57d,},Annotations:map[string]string{io.kubernetes.container.hash: febc2576,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4ccaef892bf194c7025d3249065ddab09c8dd78f7be86a4b7b0aa63921817f7,PodSandboxId:a3ce46d6060154a9c776b7a99b0ee19bb2a188d3c754c8c762fe719488763730,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1713229247881428557,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-653942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42a0243852318f4be1b779e458bfa57d,},Annotations:map[string]string{io.kubernetes.container.hash: febc2576,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=93a87247-dac0-4c8d-9b5c-12420cc8ea8a name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:15:09 default-k8s-diff-port-653942 crio[729]: time="2024-04-16 01:15:09.082377468Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6f409b48-7c20-4375-9565-b2beda2ac2ff name=/runtime.v1.RuntimeService/Version
	Apr 16 01:15:09 default-k8s-diff-port-653942 crio[729]: time="2024-04-16 01:15:09.082481170Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6f409b48-7c20-4375-9565-b2beda2ac2ff name=/runtime.v1.RuntimeService/Version
	Apr 16 01:15:09 default-k8s-diff-port-653942 crio[729]: time="2024-04-16 01:15:09.084207946Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=765825bf-b532-4c90-8fa0-3f6ce27ed3e0 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:15:09 default-k8s-diff-port-653942 crio[729]: time="2024-04-16 01:15:09.084614706Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713230109084593178,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=765825bf-b532-4c90-8fa0-3f6ce27ed3e0 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:15:09 default-k8s-diff-port-653942 crio[729]: time="2024-04-16 01:15:09.085247929Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=600d037d-cf57-40bc-a666-af8bc58ea7d4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:15:09 default-k8s-diff-port-653942 crio[729]: time="2024-04-16 01:15:09.085315132Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=600d037d-cf57-40bc-a666-af8bc58ea7d4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:15:09 default-k8s-diff-port-653942 crio[729]: time="2024-04-16 01:15:09.085544899Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c4ddd594d7334d0604fd41c36bda70b56c09e95f393c080374978e5783c53f6d,PodSandboxId:43d8a636bc0f240f869482ade4f50c4f032894ad853abe8d6d28ebaea502c41b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713229565603417369,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d131c1fc-9124-4b46-a16f-a8fb5029a57b,},Annotations:map[string]string{io.kubernetes.container.hash: 27457e9a,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c41790569cbe9a105aa5b3e904bdf461b0c6b1b9e64053f82f73cdff95cea28,PodSandboxId:61a7932db1c4e83f4cf320bbce30df3b5a9b3ab4ac956460df9324b30e32f5f1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713229564042917327,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-zpnhs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 990672b6-bb3a-4f91-8de7-7c2ec224c94a,},Annotations:map[string]string{io.kubernetes.container.hash: ad74a928,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5b1e8894217a61058b0ac838ea292710915b108ad0417eb72b6302ddaf9e3d2,PodSandboxId:3d5fd5064cf9db355f19293b3000fc3efc42b727a4a120cc50a3d1fa129e96a5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713229564127266379,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-5nnpv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 3350aca5-639e-44a1-bd84-d1e4b6486143,},Annotations:map[string]string{io.kubernetes.container.hash: 790c3ff0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ffc152b91a92e5cdc34e606f40d065686234148d7797e82423044fa41c461cd,PodSandboxId:c56c92061323a12b7649ec6dd0fb3d46f1f73c566d613aae3feec63a83f4aae8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING
,CreatedAt:1713229563406353932,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mg5km,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74764194-1f31-40b1-90b5-497e248ab7da,},Annotations:map[string]string{io.kubernetes.container.hash: 37bec7a6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:790f3485688cb7e8ce2b6eaa4acfac3f546e15f3b3d3011fb7d3babb7e28d508,PodSandboxId:9b015167635ab9ad101c5aa226284ddd0122e478292139bc942af14687c8491e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:171322954370919875
8,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-653942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f789bc57c1f4c290ab8fd275d2010d6a,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73dd87507a5ddd51947b5702eca37ec8a3c1d395ce443b1ca82fbd1d955329e6,PodSandboxId:eb65098ef8e126b3bbf9beff1cb98e3d528f467ab1c0568770988d0414c6ff79,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:17132295436
97138964,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-653942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdee61dbfd28bab5575146238429925f,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cb8787026c8eadc21223e448145ec1f2032be0623aa0e3d20d4a5680f0d26fb,PodSandboxId:177c67d63aab2abb2c6c4a6962a793f26a17c920f511526d529265155cb89ce4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:17132
29543671489451,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-653942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 027e6feb7cb85911f362954ce5f74701,},Annotations:map[string]string{io.kubernetes.container.hash: 41d79b00,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e135f634e26f7cb1cb96c36197c8120329ba60121e04fcf857d389b138d5879,PodSandboxId:697d63ff93426c3e8636123ffb9438aa3699fb072516fb61f83171ed69528a5c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713229543615136997,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-653942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42a0243852318f4be1b779e458bfa57d,},Annotations:map[string]string{io.kubernetes.container.hash: febc2576,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4ccaef892bf194c7025d3249065ddab09c8dd78f7be86a4b7b0aa63921817f7,PodSandboxId:a3ce46d6060154a9c776b7a99b0ee19bb2a188d3c754c8c762fe719488763730,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1713229247881428557,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-653942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42a0243852318f4be1b779e458bfa57d,},Annotations:map[string]string{io.kubernetes.container.hash: febc2576,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=600d037d-cf57-40bc-a666-af8bc58ea7d4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:15:09 default-k8s-diff-port-653942 crio[729]: time="2024-04-16 01:15:09.121071271Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=945e687e-8c03-462a-b38c-84049c4d14a8 name=/runtime.v1.RuntimeService/Version
	Apr 16 01:15:09 default-k8s-diff-port-653942 crio[729]: time="2024-04-16 01:15:09.121165832Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=945e687e-8c03-462a-b38c-84049c4d14a8 name=/runtime.v1.RuntimeService/Version
	Apr 16 01:15:09 default-k8s-diff-port-653942 crio[729]: time="2024-04-16 01:15:09.123254114Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=34c9aea0-08b6-46ce-a2c5-9ee9fbcf8aff name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:15:09 default-k8s-diff-port-653942 crio[729]: time="2024-04-16 01:15:09.124070996Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713230109123924355,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=34c9aea0-08b6-46ce-a2c5-9ee9fbcf8aff name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:15:09 default-k8s-diff-port-653942 crio[729]: time="2024-04-16 01:15:09.127571503Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=96b6f396-ab65-4cc5-b298-9c02f13fe563 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 16 01:15:09 default-k8s-diff-port-653942 crio[729]: time="2024-04-16 01:15:09.127938546Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:43d8a636bc0f240f869482ade4f50c4f032894ad853abe8d6d28ebaea502c41b,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:d131c1fc-9124-4b46-a16f-a8fb5029a57b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713229565505409788,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d131c1fc-9124-4b46-a16f-a8fb5029a57b,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespac
e\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-04-16T01:06:05.197351638Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:993aaf952ba72ca5f93327615208098a38778fd776e90fdcb5551d534d45b2dd,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-6jn29,Uid:1eec2ffb-ce59-45cb-b6b4-cd010549510e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713229565265336472,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-6jn29,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1eec2ffb-ce59-45cb-b6b4-c
d010549510e,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-16T01:06:04.958957474Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3d5fd5064cf9db355f19293b3000fc3efc42b727a4a120cc50a3d1fa129e96a5,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-5nnpv,Uid:3350aca5-639e-44a1-bd84-d1e4b6486143,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713229563283528126,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-5nnpv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3350aca5-639e-44a1-bd84-d1e4b6486143,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-16T01:06:02.974282973Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:61a7932db1c4e83f4cf320bbce30df3b5a9b3ab4ac956460df9324b30e32f5f1,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-zpnhs,Uid:990672b6
-bb3a-4f91-8de7-7c2ec224c94a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713229563265926874,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-zpnhs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 990672b6-bb3a-4f91-8de7-7c2ec224c94a,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-16T01:06:02.950565704Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c56c92061323a12b7649ec6dd0fb3d46f1f73c566d613aae3feec63a83f4aae8,Metadata:&PodSandboxMetadata{Name:kube-proxy-mg5km,Uid:74764194-1f31-40b1-90b5-497e248ab7da,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713229563213209838,Labels:map[string]string{controller-revision-hash: 7659797656,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-mg5km,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74764194-1f31-40b1-90b5-497e248ab7da,k8s-app: kube-pro
xy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-16T01:06:02.900990807Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:eb65098ef8e126b3bbf9beff1cb98e3d528f467ab1c0568770988d0414c6ff79,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-653942,Uid:cdee61dbfd28bab5575146238429925f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713229543459989619,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-653942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdee61dbfd28bab5575146238429925f,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: cdee61dbfd28bab5575146238429925f,kubernetes.io/config.seen: 2024-04-16T01:05:42.962234574Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9b015167635ab9ad101c5aa226284ddd0122e478292139bc942af146
87c8491e,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-653942,Uid:f789bc57c1f4c290ab8fd275d2010d6a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713229543438593897,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-653942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f789bc57c1f4c290ab8fd275d2010d6a,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: f789bc57c1f4c290ab8fd275d2010d6a,kubernetes.io/config.seen: 2024-04-16T01:05:42.962228774Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:177c67d63aab2abb2c6c4a6962a793f26a17c920f511526d529265155cb89ce4,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-653942,Uid:027e6feb7cb85911f362954ce5f74701,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713229543411543673,Labels:map[string]string{component: etcd,io.kubernetes.container.name: PO
D,io.kubernetes.pod.name: etcd-default-k8s-diff-port-653942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 027e6feb7cb85911f362954ce5f74701,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.216:2379,kubernetes.io/config.hash: 027e6feb7cb85911f362954ce5f74701,kubernetes.io/config.seen: 2024-04-16T01:05:42.962232552Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:697d63ff93426c3e8636123ffb9438aa3699fb072516fb61f83171ed69528a5c,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-653942,Uid:42a0243852318f4be1b779e458bfa57d,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713229543410070646,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-653942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42a0243852318f4be1b779e458bfa57d,tier: control-plane,},Annotations:ma
p[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.216:8444,kubernetes.io/config.hash: 42a0243852318f4be1b779e458bfa57d,kubernetes.io/config.seen: 2024-04-16T01:05:42.962233674Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a3ce46d6060154a9c776b7a99b0ee19bb2a188d3c754c8c762fe719488763730,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-653942,Uid:42a0243852318f4be1b779e458bfa57d,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1713229247722390476,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-653942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42a0243852318f4be1b779e458bfa57d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.216:8444,kubernetes.io/config.hash: 42a0243852318f4be1b779e458bfa57d,kubernetes.io/config.s
een: 2024-04-16T01:00:47.238858804Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=96b6f396-ab65-4cc5-b298-9c02f13fe563 name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 16 01:15:09 default-k8s-diff-port-653942 crio[729]: time="2024-04-16 01:15:09.129234244Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=46ba87fd-b8b8-4503-ac3c-d2c6f19896b5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:15:09 default-k8s-diff-port-653942 crio[729]: time="2024-04-16 01:15:09.129309448Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=46ba87fd-b8b8-4503-ac3c-d2c6f19896b5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:15:09 default-k8s-diff-port-653942 crio[729]: time="2024-04-16 01:15:09.129520331Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c4ddd594d7334d0604fd41c36bda70b56c09e95f393c080374978e5783c53f6d,PodSandboxId:43d8a636bc0f240f869482ade4f50c4f032894ad853abe8d6d28ebaea502c41b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713229565603417369,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d131c1fc-9124-4b46-a16f-a8fb5029a57b,},Annotations:map[string]string{io.kubernetes.container.hash: 27457e9a,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c41790569cbe9a105aa5b3e904bdf461b0c6b1b9e64053f82f73cdff95cea28,PodSandboxId:61a7932db1c4e83f4cf320bbce30df3b5a9b3ab4ac956460df9324b30e32f5f1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713229564042917327,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-zpnhs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 990672b6-bb3a-4f91-8de7-7c2ec224c94a,},Annotations:map[string]string{io.kubernetes.container.hash: ad74a928,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5b1e8894217a61058b0ac838ea292710915b108ad0417eb72b6302ddaf9e3d2,PodSandboxId:3d5fd5064cf9db355f19293b3000fc3efc42b727a4a120cc50a3d1fa129e96a5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713229564127266379,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-5nnpv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 3350aca5-639e-44a1-bd84-d1e4b6486143,},Annotations:map[string]string{io.kubernetes.container.hash: 790c3ff0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ffc152b91a92e5cdc34e606f40d065686234148d7797e82423044fa41c461cd,PodSandboxId:c56c92061323a12b7649ec6dd0fb3d46f1f73c566d613aae3feec63a83f4aae8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING
,CreatedAt:1713229563406353932,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mg5km,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74764194-1f31-40b1-90b5-497e248ab7da,},Annotations:map[string]string{io.kubernetes.container.hash: 37bec7a6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:790f3485688cb7e8ce2b6eaa4acfac3f546e15f3b3d3011fb7d3babb7e28d508,PodSandboxId:9b015167635ab9ad101c5aa226284ddd0122e478292139bc942af14687c8491e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:171322954370919875
8,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-653942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f789bc57c1f4c290ab8fd275d2010d6a,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73dd87507a5ddd51947b5702eca37ec8a3c1d395ce443b1ca82fbd1d955329e6,PodSandboxId:eb65098ef8e126b3bbf9beff1cb98e3d528f467ab1c0568770988d0414c6ff79,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:17132295436
97138964,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-653942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdee61dbfd28bab5575146238429925f,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cb8787026c8eadc21223e448145ec1f2032be0623aa0e3d20d4a5680f0d26fb,PodSandboxId:177c67d63aab2abb2c6c4a6962a793f26a17c920f511526d529265155cb89ce4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:17132
29543671489451,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-653942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 027e6feb7cb85911f362954ce5f74701,},Annotations:map[string]string{io.kubernetes.container.hash: 41d79b00,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e135f634e26f7cb1cb96c36197c8120329ba60121e04fcf857d389b138d5879,PodSandboxId:697d63ff93426c3e8636123ffb9438aa3699fb072516fb61f83171ed69528a5c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713229543615136997,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-653942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42a0243852318f4be1b779e458bfa57d,},Annotations:map[string]string{io.kubernetes.container.hash: febc2576,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4ccaef892bf194c7025d3249065ddab09c8dd78f7be86a4b7b0aa63921817f7,PodSandboxId:a3ce46d6060154a9c776b7a99b0ee19bb2a188d3c754c8c762fe719488763730,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1713229247881428557,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-653942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42a0243852318f4be1b779e458bfa57d,},Annotations:map[string]string{io.kubernetes.container.hash: febc2576,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=46ba87fd-b8b8-4503-ac3c-d2c6f19896b5 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:15:09 default-k8s-diff-port-653942 crio[729]: time="2024-04-16 01:15:09.132054264Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bb340760-064a-404b-bcf8-7fde6613eceb name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:15:09 default-k8s-diff-port-653942 crio[729]: time="2024-04-16 01:15:09.132125179Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bb340760-064a-404b-bcf8-7fde6613eceb name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:15:09 default-k8s-diff-port-653942 crio[729]: time="2024-04-16 01:15:09.132351948Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c4ddd594d7334d0604fd41c36bda70b56c09e95f393c080374978e5783c53f6d,PodSandboxId:43d8a636bc0f240f869482ade4f50c4f032894ad853abe8d6d28ebaea502c41b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713229565603417369,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d131c1fc-9124-4b46-a16f-a8fb5029a57b,},Annotations:map[string]string{io.kubernetes.container.hash: 27457e9a,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c41790569cbe9a105aa5b3e904bdf461b0c6b1b9e64053f82f73cdff95cea28,PodSandboxId:61a7932db1c4e83f4cf320bbce30df3b5a9b3ab4ac956460df9324b30e32f5f1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713229564042917327,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-zpnhs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 990672b6-bb3a-4f91-8de7-7c2ec224c94a,},Annotations:map[string]string{io.kubernetes.container.hash: ad74a928,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5b1e8894217a61058b0ac838ea292710915b108ad0417eb72b6302ddaf9e3d2,PodSandboxId:3d5fd5064cf9db355f19293b3000fc3efc42b727a4a120cc50a3d1fa129e96a5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713229564127266379,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-5nnpv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 3350aca5-639e-44a1-bd84-d1e4b6486143,},Annotations:map[string]string{io.kubernetes.container.hash: 790c3ff0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ffc152b91a92e5cdc34e606f40d065686234148d7797e82423044fa41c461cd,PodSandboxId:c56c92061323a12b7649ec6dd0fb3d46f1f73c566d613aae3feec63a83f4aae8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING
,CreatedAt:1713229563406353932,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mg5km,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74764194-1f31-40b1-90b5-497e248ab7da,},Annotations:map[string]string{io.kubernetes.container.hash: 37bec7a6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:790f3485688cb7e8ce2b6eaa4acfac3f546e15f3b3d3011fb7d3babb7e28d508,PodSandboxId:9b015167635ab9ad101c5aa226284ddd0122e478292139bc942af14687c8491e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:171322954370919875
8,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-653942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f789bc57c1f4c290ab8fd275d2010d6a,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73dd87507a5ddd51947b5702eca37ec8a3c1d395ce443b1ca82fbd1d955329e6,PodSandboxId:eb65098ef8e126b3bbf9beff1cb98e3d528f467ab1c0568770988d0414c6ff79,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:17132295436
97138964,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-653942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdee61dbfd28bab5575146238429925f,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cb8787026c8eadc21223e448145ec1f2032be0623aa0e3d20d4a5680f0d26fb,PodSandboxId:177c67d63aab2abb2c6c4a6962a793f26a17c920f511526d529265155cb89ce4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:17132
29543671489451,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-653942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 027e6feb7cb85911f362954ce5f74701,},Annotations:map[string]string{io.kubernetes.container.hash: 41d79b00,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e135f634e26f7cb1cb96c36197c8120329ba60121e04fcf857d389b138d5879,PodSandboxId:697d63ff93426c3e8636123ffb9438aa3699fb072516fb61f83171ed69528a5c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713229543615136997,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-653942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42a0243852318f4be1b779e458bfa57d,},Annotations:map[string]string{io.kubernetes.container.hash: febc2576,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4ccaef892bf194c7025d3249065ddab09c8dd78f7be86a4b7b0aa63921817f7,PodSandboxId:a3ce46d6060154a9c776b7a99b0ee19bb2a188d3c754c8c762fe719488763730,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1713229247881428557,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-653942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42a0243852318f4be1b779e458bfa57d,},Annotations:map[string]string{io.kubernetes.container.hash: febc2576,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bb340760-064a-404b-bcf8-7fde6613eceb name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c4ddd594d7334       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   43d8a636bc0f2       storage-provisioner
	a5b1e8894217a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   3d5fd5064cf9d       coredns-76f75df574-5nnpv
	9c41790569cbe       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   61a7932db1c4e       coredns-76f75df574-zpnhs
	7ffc152b91a92       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392   9 minutes ago       Running             kube-proxy                0                   c56c92061323a       kube-proxy-mg5km
	790f3485688cb       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b   9 minutes ago       Running             kube-scheduler            2                   9b015167635ab       kube-scheduler-default-k8s-diff-port-653942
	73dd87507a5dd       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3   9 minutes ago       Running             kube-controller-manager   2                   eb65098ef8e12       kube-controller-manager-default-k8s-diff-port-653942
	6cb8787026c8e       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   9 minutes ago       Running             etcd                      2                   177c67d63aab2       etcd-default-k8s-diff-port-653942
	8e135f634e26f       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   9 minutes ago       Running             kube-apiserver            2                   697d63ff93426       kube-apiserver-default-k8s-diff-port-653942
	d4ccaef892bf1       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   14 minutes ago      Exited              kube-apiserver            1                   a3ce46d606015       kube-apiserver-default-k8s-diff-port-653942
	
	
	==> coredns [9c41790569cbe9a105aa5b3e904bdf461b0c6b1b9e64053f82f73cdff95cea28] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [a5b1e8894217a61058b0ac838ea292710915b108ad0417eb72b6302ddaf9e3d2] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-653942
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-653942
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e98a202b00bb48daab8bbee2f0c7bcf1556f8388
	                    minikube.k8s.io/name=default-k8s-diff-port-653942
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_16T01_05_49_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 01:05:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-653942
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 01:15:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 01:11:16 +0000   Tue, 16 Apr 2024 01:05:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 01:11:16 +0000   Tue, 16 Apr 2024 01:05:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 01:11:16 +0000   Tue, 16 Apr 2024 01:05:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 01:11:16 +0000   Tue, 16 Apr 2024 01:05:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.216
	  Hostname:    default-k8s-diff-port-653942
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8a847583a5c44672beb47f4464de43eb
	  System UUID:                8a847583-a5c4-4672-beb4-7f4464de43eb
	  Boot ID:                    46ed85a2-6e5a-4b5c-9aa4-3746289b10c2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-5nnpv                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 coredns-76f75df574-zpnhs                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 etcd-default-k8s-diff-port-653942                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m21s
	  kube-system                 kube-apiserver-default-k8s-diff-port-653942             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-653942    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-proxy-mg5km                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                 kube-scheduler-default-k8s-diff-port-653942             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 metrics-server-57f55c9bc5-6jn29                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m5s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)    0 (0%)
	  memory             440Mi (20%)   340Mi (16%)
	  ephemeral-storage  0 (0%)        0 (0%)
	  hugepages-2Mi      0 (0%)        0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m5s                   kube-proxy       
	  Normal  Starting                 9m27s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m26s (x8 over 9m26s)  kubelet          Node default-k8s-diff-port-653942 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m26s (x8 over 9m26s)  kubelet          Node default-k8s-diff-port-653942 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m26s (x7 over 9m26s)  kubelet          Node default-k8s-diff-port-653942 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m20s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m20s                  kubelet          Node default-k8s-diff-port-653942 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m20s                  kubelet          Node default-k8s-diff-port-653942 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m20s                  kubelet          Node default-k8s-diff-port-653942 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             9m19s                  kubelet          Node default-k8s-diff-port-653942 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  9m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m19s                  kubelet          Node default-k8s-diff-port-653942 status is now: NodeReady
	  Normal  RegisteredNode           9m7s                   node-controller  Node default-k8s-diff-port-653942 event: Registered Node default-k8s-diff-port-653942 in Controller
	
	
	==> dmesg <==
	[  +0.042127] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.810702] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.843816] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.721447] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.796486] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.059330] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070969] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +0.171029] systemd-fstab-generator[673]: Ignoring "noauto" option for root device
	[  +0.148114] systemd-fstab-generator[685]: Ignoring "noauto" option for root device
	[  +0.309523] systemd-fstab-generator[714]: Ignoring "noauto" option for root device
	[  +4.813783] systemd-fstab-generator[811]: Ignoring "noauto" option for root device
	[  +0.061111] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.991682] systemd-fstab-generator[934]: Ignoring "noauto" option for root device
	[  +5.617745] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.649493] kauditd_printk_skb: 79 callbacks suppressed
	[Apr16 01:01] kauditd_printk_skb: 2 callbacks suppressed
	[Apr16 01:05] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.989661] systemd-fstab-generator[3584]: Ignoring "noauto" option for root device
	[  +4.767032] kauditd_printk_skb: 56 callbacks suppressed
	[  +2.519695] systemd-fstab-generator[3912]: Ignoring "noauto" option for root device
	[Apr16 01:06] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.243119] systemd-fstab-generator[4215]: Ignoring "noauto" option for root device
	[Apr16 01:07] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [6cb8787026c8eadc21223e448145ec1f2032be0623aa0e3d20d4a5680f0d26fb] <==
	{"level":"info","ts":"2024-04-16T01:05:44.191543Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4554daae9381f94 switched to configuration voters=(15300220705513676692)"}
	{"level":"info","ts":"2024-04-16T01:05:44.191666Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"a1cf388ad59b0b48","local-member-id":"d4554daae9381f94","added-peer-id":"d4554daae9381f94","added-peer-peer-urls":["https://192.168.50.216:2380"]}
	{"level":"info","ts":"2024-04-16T01:05:44.234245Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-16T01:05:44.234477Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"d4554daae9381f94","initial-advertise-peer-urls":["https://192.168.50.216:2380"],"listen-peer-urls":["https://192.168.50.216:2380"],"advertise-client-urls":["https://192.168.50.216:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.216:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-16T01:05:44.234539Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-16T01:05:44.234622Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.216:2380"}
	{"level":"info","ts":"2024-04-16T01:05:44.234656Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.216:2380"}
	{"level":"info","ts":"2024-04-16T01:05:45.14099Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4554daae9381f94 is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-16T01:05:45.141142Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4554daae9381f94 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-16T01:05:45.141208Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4554daae9381f94 received MsgPreVoteResp from d4554daae9381f94 at term 1"}
	{"level":"info","ts":"2024-04-16T01:05:45.141254Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4554daae9381f94 became candidate at term 2"}
	{"level":"info","ts":"2024-04-16T01:05:45.141278Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4554daae9381f94 received MsgVoteResp from d4554daae9381f94 at term 2"}
	{"level":"info","ts":"2024-04-16T01:05:45.141305Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4554daae9381f94 became leader at term 2"}
	{"level":"info","ts":"2024-04-16T01:05:45.141331Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d4554daae9381f94 elected leader d4554daae9381f94 at term 2"}
	{"level":"info","ts":"2024-04-16T01:05:45.142908Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"d4554daae9381f94","local-member-attributes":"{Name:default-k8s-diff-port-653942 ClientURLs:[https://192.168.50.216:2379]}","request-path":"/0/members/d4554daae9381f94/attributes","cluster-id":"a1cf388ad59b0b48","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-16T01:05:45.143137Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T01:05:45.143437Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-16T01:05:45.1439Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-16T01:05:45.145546Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-16T01:05:45.145625Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-16T01:05:45.156795Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-16T01:05:45.145659Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a1cf388ad59b0b48","local-member-id":"d4554daae9381f94","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T01:05:45.15704Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T01:05:45.157111Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T01:05:45.147133Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.216:2379"}
	
	
	==> kernel <==
	 01:15:09 up 14 min,  0 users,  load average: 0.22, 0.25, 0.20
	Linux default-k8s-diff-port-653942 5.10.207 #1 SMP Mon Apr 15 15:01:07 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [8e135f634e26f7cb1cb96c36197c8120329ba60121e04fcf857d389b138d5879] <==
	I0416 01:09:05.862661       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 01:10:46.540186       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 01:10:46.540287       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0416 01:10:47.541302       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 01:10:47.541375       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0416 01:10:47.541381       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 01:10:47.541430       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 01:10:47.541441       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0416 01:10:47.542600       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 01:11:47.542415       1 handler_proxy.go:93] no RequestInfo found in the context
	W0416 01:11:47.542696       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 01:11:47.542795       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0416 01:11:47.542845       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0416 01:11:47.542791       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0416 01:11:47.544594       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 01:13:47.543876       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 01:13:47.544367       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0416 01:13:47.544407       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 01:13:47.545210       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 01:13:47.545372       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0416 01:13:47.545475       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [d4ccaef892bf194c7025d3249065ddab09c8dd78f7be86a4b7b0aa63921817f7] <==
	W0416 01:05:34.695812       1 logging.go:59] [core] [Channel #199 SubChannel #200] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:34.752678       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:34.769653       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:34.775305       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:34.793993       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:34.838074       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:34.900339       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:35.017275       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:35.070679       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:35.079458       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:35.106085       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:35.222459       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:35.230706       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:35.279821       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:35.345946       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:35.395875       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:35.589005       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:35.664244       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:35.666626       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:35.717582       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:35.804119       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:35.983496       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:36.125700       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:36.193766       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:38.329485       1 logging.go:59] [core] [Channel #199 SubChannel #200] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [73dd87507a5ddd51947b5702eca37ec8a3c1d395ce443b1ca82fbd1d955329e6] <==
	I0416 01:09:33.360504       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 01:10:02.904959       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:10:03.370483       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 01:10:32.910624       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:10:33.378318       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 01:11:02.915992       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:11:03.387657       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 01:11:32.922208       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:11:33.400287       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0416 01:11:59.924461       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="319.001µs"
	E0416 01:12:02.928072       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:12:03.411381       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0416 01:12:11.918966       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="102.617µs"
	E0416 01:12:32.933339       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:12:33.418595       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 01:13:02.938100       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:13:03.426892       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 01:13:32.943176       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:13:33.436616       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 01:14:02.949247       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:14:03.446589       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 01:14:32.955383       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:14:33.456472       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 01:15:02.960603       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:15:03.464341       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [7ffc152b91a92e5cdc34e606f40d065686234148d7797e82423044fa41c461cd] <==
	I0416 01:06:03.771090       1 server_others.go:72] "Using iptables proxy"
	I0416 01:06:03.797866       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.50.216"]
	I0416 01:06:04.044949       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0416 01:06:04.044995       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0416 01:06:04.045014       1 server_others.go:168] "Using iptables Proxier"
	I0416 01:06:04.052635       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0416 01:06:04.052974       1 server.go:865] "Version info" version="v1.29.3"
	I0416 01:06:04.053009       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 01:06:04.073548       1 config.go:188] "Starting service config controller"
	I0416 01:06:04.073610       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0416 01:06:04.073639       1 config.go:97] "Starting endpoint slice config controller"
	I0416 01:06:04.073643       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0416 01:06:04.079137       1 config.go:315] "Starting node config controller"
	I0416 01:06:04.079185       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0416 01:06:04.174853       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0416 01:06:04.174922       1 shared_informer.go:318] Caches are synced for service config
	I0416 01:06:04.180343       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [790f3485688cb7e8ce2b6eaa4acfac3f546e15f3b3d3011fb7d3babb7e28d508] <==
	W0416 01:05:46.569587       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0416 01:05:46.569618       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0416 01:05:47.434344       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0416 01:05:47.434456       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0416 01:05:47.471390       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0416 01:05:47.471452       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0416 01:05:47.537150       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0416 01:05:47.537203       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0416 01:05:47.543626       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0416 01:05:47.543650       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0416 01:05:47.672283       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0416 01:05:47.672348       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0416 01:05:47.691142       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0416 01:05:47.691193       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0416 01:05:47.708970       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0416 01:05:47.709067       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0416 01:05:47.821929       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0416 01:05:47.821979       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0416 01:05:47.838840       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0416 01:05:47.838905       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0416 01:05:47.839038       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0416 01:05:47.839103       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0416 01:05:47.892166       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0416 01:05:47.892258       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0416 01:05:50.756132       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 16 01:12:49 default-k8s-diff-port-653942 kubelet[3919]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 01:12:49 default-k8s-diff-port-653942 kubelet[3919]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 01:12:49 default-k8s-diff-port-653942 kubelet[3919]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 01:12:49 default-k8s-diff-port-653942 kubelet[3919]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 01:12:54 default-k8s-diff-port-653942 kubelet[3919]: E0416 01:12:54.903777    3919 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6jn29" podUID="1eec2ffb-ce59-45cb-b6b4-cd010549510e"
	Apr 16 01:13:07 default-k8s-diff-port-653942 kubelet[3919]: E0416 01:13:07.901802    3919 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6jn29" podUID="1eec2ffb-ce59-45cb-b6b4-cd010549510e"
	Apr 16 01:13:19 default-k8s-diff-port-653942 kubelet[3919]: E0416 01:13:19.905071    3919 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6jn29" podUID="1eec2ffb-ce59-45cb-b6b4-cd010549510e"
	Apr 16 01:13:34 default-k8s-diff-port-653942 kubelet[3919]: E0416 01:13:34.902148    3919 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6jn29" podUID="1eec2ffb-ce59-45cb-b6b4-cd010549510e"
	Apr 16 01:13:45 default-k8s-diff-port-653942 kubelet[3919]: E0416 01:13:45.901686    3919 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6jn29" podUID="1eec2ffb-ce59-45cb-b6b4-cd010549510e"
	Apr 16 01:13:49 default-k8s-diff-port-653942 kubelet[3919]: E0416 01:13:49.919930    3919 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 01:13:49 default-k8s-diff-port-653942 kubelet[3919]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 01:13:49 default-k8s-diff-port-653942 kubelet[3919]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 01:13:49 default-k8s-diff-port-653942 kubelet[3919]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 01:13:49 default-k8s-diff-port-653942 kubelet[3919]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 01:14:00 default-k8s-diff-port-653942 kubelet[3919]: E0416 01:14:00.902658    3919 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6jn29" podUID="1eec2ffb-ce59-45cb-b6b4-cd010549510e"
	Apr 16 01:14:13 default-k8s-diff-port-653942 kubelet[3919]: E0416 01:14:13.901840    3919 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6jn29" podUID="1eec2ffb-ce59-45cb-b6b4-cd010549510e"
	Apr 16 01:14:27 default-k8s-diff-port-653942 kubelet[3919]: E0416 01:14:27.902601    3919 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6jn29" podUID="1eec2ffb-ce59-45cb-b6b4-cd010549510e"
	Apr 16 01:14:39 default-k8s-diff-port-653942 kubelet[3919]: E0416 01:14:39.903148    3919 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6jn29" podUID="1eec2ffb-ce59-45cb-b6b4-cd010549510e"
	Apr 16 01:14:49 default-k8s-diff-port-653942 kubelet[3919]: E0416 01:14:49.917327    3919 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 01:14:49 default-k8s-diff-port-653942 kubelet[3919]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 01:14:49 default-k8s-diff-port-653942 kubelet[3919]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 01:14:49 default-k8s-diff-port-653942 kubelet[3919]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 01:14:49 default-k8s-diff-port-653942 kubelet[3919]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 01:14:50 default-k8s-diff-port-653942 kubelet[3919]: E0416 01:14:50.901565    3919 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6jn29" podUID="1eec2ffb-ce59-45cb-b6b4-cd010549510e"
	Apr 16 01:15:02 default-k8s-diff-port-653942 kubelet[3919]: E0416 01:15:02.903065    3919 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6jn29" podUID="1eec2ffb-ce59-45cb-b6b4-cd010549510e"
	
	
	==> storage-provisioner [c4ddd594d7334d0604fd41c36bda70b56c09e95f393c080374978e5783c53f6d] <==
	I0416 01:06:05.701910       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0416 01:06:05.716027       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0416 01:06:05.716241       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0416 01:06:05.730425       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0416 01:06:05.730608       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f16da03d-fdc8-497a-a095-8aa7bb11d1c5", APIVersion:"v1", ResourceVersion:"459", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-653942_26b7dafc-dcf7-4430-9baa-acde71280843 became leader
	I0416 01:06:05.730983       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-653942_26b7dafc-dcf7-4430-9baa-acde71280843!
	I0416 01:06:05.831947       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-653942_26b7dafc-dcf7-4430-9baa-acde71280843!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-653942 -n default-k8s-diff-port-653942
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-653942 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-6jn29
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-653942 describe pod metrics-server-57f55c9bc5-6jn29
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-653942 describe pod metrics-server-57f55c9bc5-6jn29: exit status 1 (63.720677ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-6jn29" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-653942 describe pod metrics-server-57f55c9bc5-6jn29: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.12s)
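The post-mortem above first lists every pod whose status.phase is not Running and then describes each one; the metrics-server pod listed a moment earlier is already gone by the time the describe runs, so the lookup returns NotFound. A minimal sketch of the same non-running-pod query using client-go (the kubeconfig location is an assumption for illustration; the harness points kubectl at the profile's own context):

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed: the default kubeconfig (~/.kube/config) points at the cluster under test.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	// Same selector the helper uses: pods not in the Running phase, across all namespaces.
	pods, err := clientset.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{
		FieldSelector: "status.phase!=Running",
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}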

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.4s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
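Each connection-refused warning below is one poll of the API server at 192.168.83.98:8443 while the old-k8s-version node is stopped or still restarting. A minimal sketch, using only the Go standard library, of the kind of reachability probe that distinguishes "connection refused" (host up, nothing listening yet) from a timeout (host unreachable); the retry count and delays are arbitrary for illustration:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "192.168.83.98:8443" // API server endpoint taken from the warnings below
	for attempt := 1; attempt <= 5; attempt++ {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			// "connection refused" means the host answered but no listener is bound;
			// a dial timeout would instead point at a network or firewall problem.
			fmt.Printf("attempt %d: %v\n", attempt, err)
			time.Sleep(3 * time.Second)
			continue
		}
		conn.Close()
		fmt.Printf("attempt %d: API server port is accepting connections\n", attempt)
		return
	}
}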
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
E0416 01:08:58.679837   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/functional-596616/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
E0416 01:12:20.169269   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
E0416 01:13:58.680673   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/functional-596616/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
[the warning above repeated 84 times in total while the API server at 192.168.83.98:8443 kept refusing connections]
E0416 01:15:23.220315   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
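Note: the connection-refused warnings above come from the test helper repeatedly listing pods in the kubernetes-dashboard namespace while the apiserver endpoint at 192.168.83.98:8443 refuses connections, until the client rate limiter hits the 9m0s context deadline. A roughly equivalent manual check, assuming the kubectl context carries the same name as the minikube profile (minikube's default), is:

	kubectl --context old-k8s-version-800769 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

With the apiserver down, this returns the same "connection refused" error instead of a pod list.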
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-800769 -n old-k8s-version-800769
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-800769 -n old-k8s-version-800769: exit status 2 (243.064195ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-800769" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
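The status checks in this test read single fields through minikube's Go-template output (--format). Assuming the same template mechanism shown above, the host and apiserver fields can be read in one call, e.g.:

	out/minikube-linux-amd64 status --format='{{.Host}} {{.APIServer}}' -p old-k8s-version-800769 -n old-k8s-version-800769

which here would print "Running Stopped": the VM is up but the apiserver is not, matching the post-mortem output below.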
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-800769 -n old-k8s-version-800769
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-800769 -n old-k8s-version-800769: exit status 2 (245.361062ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-800769 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-800769 logs -n 25: (1.553750969s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p cert-expiration-359535                              | cert-expiration-359535       | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:52 UTC | 16 Apr 24 00:52 UTC |
	| start   | -p newest-cni-012509 --memory=2200 --alsologtostderr   | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:52 UTC | 16 Apr 24 00:53 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |                |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |                |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2                      |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p newest-cni-012509             | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:53 UTC | 16 Apr 24 00:53 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p newest-cni-012509                                   | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:53 UTC | 16 Apr 24 00:53 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p newest-cni-012509                  | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:53 UTC | 16 Apr 24 00:53 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p newest-cni-012509 --memory=2200 --alsologtostderr   | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:53 UTC | 16 Apr 24 00:54 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |                |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |                |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2                      |                              |         |                |                     |                     |
	| image   | newest-cni-012509 image list                           | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 00:54 UTC |
	|         | --format=json                                          |                              |         |                |                     |                     |
	| pause   | -p newest-cni-012509                                   | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 00:54 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |                |                     |                     |
	| unpause | -p newest-cni-012509                                   | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 00:54 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |                |                     |                     |
	| delete  | -p newest-cni-012509                                   | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 00:54 UTC |
	| delete  | -p newest-cni-012509                                   | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 00:54 UTC |
	| delete  | -p                                                     | disable-driver-mounts-988802 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 00:54 UTC |
	|         | disable-driver-mounts-988802                           |                              |         |                |                     |                     |
	| start   | -p embed-certs-617092                                  | embed-certs-617092           | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 00:56 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-653942       | default-k8s-diff-port-653942 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-572602                  | no-preload-572602            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-653942 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 01:06 UTC |
	|         | default-k8s-diff-port-653942                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-800769        | old-k8s-version-800769       | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| start   | -p no-preload-572602                                   | no-preload-572602            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 01:05 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |                |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2                      |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-617092            | embed-certs-617092           | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:56 UTC | 16 Apr 24 00:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-617092                                  | embed-certs-617092           | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:56 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-800769                              | old-k8s-version-800769       | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:56 UTC | 16 Apr 24 00:56 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-800769             | old-k8s-version-800769       | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:56 UTC | 16 Apr 24 00:56 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-800769                              | old-k8s-version-800769       | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:56 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-617092                 | embed-certs-617092           | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:58 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p embed-certs-617092                                  | embed-certs-617092           | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:58 UTC | 16 Apr 24 01:05 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 00:58:42
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 00:58:42.797832   62747 out.go:291] Setting OutFile to fd 1 ...
	I0416 00:58:42.797983   62747 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:58:42.797994   62747 out.go:304] Setting ErrFile to fd 2...
	I0416 00:58:42.797998   62747 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:58:42.798182   62747 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-7542/.minikube/bin
	I0416 00:58:42.798686   62747 out.go:298] Setting JSON to false
	I0416 00:58:42.799629   62747 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6067,"bootTime":1713223056,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0416 00:58:42.799687   62747 start.go:139] virtualization: kvm guest
	I0416 00:58:42.801878   62747 out.go:177] * [embed-certs-617092] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0416 00:58:42.803202   62747 out.go:177]   - MINIKUBE_LOCATION=18647
	I0416 00:58:42.804389   62747 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 00:58:42.803288   62747 notify.go:220] Checking for updates...
	I0416 00:58:42.805742   62747 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0416 00:58:42.807023   62747 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18647-7542/.minikube
	I0416 00:58:42.808185   62747 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0416 00:58:42.809402   62747 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 00:58:42.811188   62747 config.go:182] Loaded profile config "embed-certs-617092": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 00:58:42.811772   62747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:58:42.811833   62747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:58:42.826377   62747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44973
	I0416 00:58:42.826730   62747 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:58:42.827217   62747 main.go:141] libmachine: Using API Version  1
	I0416 00:58:42.827233   62747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:58:42.827541   62747 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:58:42.827737   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 00:58:42.827964   62747 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 00:58:42.828239   62747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:58:42.828274   62747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:58:42.842499   62747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34791
	I0416 00:58:42.842872   62747 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:58:42.843283   62747 main.go:141] libmachine: Using API Version  1
	I0416 00:58:42.843300   62747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:58:42.843636   62747 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:58:42.843830   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 00:58:42.874583   62747 out.go:177] * Using the kvm2 driver based on existing profile
	I0416 00:58:42.875910   62747 start.go:297] selected driver: kvm2
	I0416 00:58:42.875933   62747 start.go:901] validating driver "kvm2" against &{Name:embed-certs-617092 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-617092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.225 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 00:58:42.876072   62747 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 00:58:42.876741   62747 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 00:58:42.876826   62747 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18647-7542/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0416 00:58:42.890834   62747 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0416 00:58:42.891212   62747 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 00:58:42.891270   62747 cni.go:84] Creating CNI manager for ""
	I0416 00:58:42.891283   62747 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 00:58:42.891314   62747 start.go:340] cluster config:
	{Name:embed-certs-617092 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-617092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.225 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 00:58:42.891412   62747 iso.go:125] acquiring lock: {Name:mk848ef90fbc2a1876645fc8fc16af382c3bcaa9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 00:58:42.893179   62747 out.go:177] * Starting "embed-certs-617092" primary control-plane node in "embed-certs-617092" cluster
	I0416 00:58:42.894232   62747 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 00:58:42.894260   62747 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0416 00:58:42.894267   62747 cache.go:56] Caching tarball of preloaded images
	I0416 00:58:42.894353   62747 preload.go:173] Found /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0416 00:58:42.894365   62747 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0416 00:58:42.894458   62747 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092/config.json ...
	I0416 00:58:42.894628   62747 start.go:360] acquireMachinesLock for embed-certs-617092: {Name:mk92bff49461487f8cebf2747ccf61ccb9c772a2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 00:58:47.545405   61267 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.216:22: connect: no route to host
	I0416 00:58:50.617454   61267 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.216:22: connect: no route to host
	I0416 00:58:56.697459   61267 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.216:22: connect: no route to host
	I0416 00:58:59.769461   61267 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.216:22: connect: no route to host
	I0416 00:59:05.849462   61267 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.216:22: connect: no route to host
	I0416 00:59:08.921459   61267 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.216:22: connect: no route to host
	I0416 00:59:15.001430   61267 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.216:22: connect: no route to host
	I0416 00:59:21.078070   61500 start.go:364] duration metric: took 4m33.431027521s to acquireMachinesLock for "no-preload-572602"
	I0416 00:59:21.078134   61500 start.go:96] Skipping create...Using existing machine configuration
	I0416 00:59:21.078152   61500 fix.go:54] fixHost starting: 
	I0416 00:59:21.078760   61500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:59:21.078809   61500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:59:21.093476   61500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36767
	I0416 00:59:21.093934   61500 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:59:21.094422   61500 main.go:141] libmachine: Using API Version  1
	I0416 00:59:21.094448   61500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:59:21.094749   61500 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:59:21.094902   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 00:59:21.095048   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetState
	I0416 00:59:21.096678   61500 fix.go:112] recreateIfNeeded on no-preload-572602: state=Stopped err=<nil>
	I0416 00:59:21.096697   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	W0416 00:59:21.096846   61500 fix.go:138] unexpected machine state, will restart: <nil>
	I0416 00:59:21.098527   61500 out.go:177] * Restarting existing kvm2 VM for "no-preload-572602" ...
	I0416 00:59:18.073453   61267 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.216:22: connect: no route to host
	I0416 00:59:21.075633   61267 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 00:59:21.075671   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetMachineName
	I0416 00:59:21.075991   61267 buildroot.go:166] provisioning hostname "default-k8s-diff-port-653942"
	I0416 00:59:21.076014   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetMachineName
	I0416 00:59:21.076225   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 00:59:21.077923   61267 machine.go:97] duration metric: took 4m34.542024225s to provisionDockerMachine
	I0416 00:59:21.077967   61267 fix.go:56] duration metric: took 4m34.567596715s for fixHost
	I0416 00:59:21.077978   61267 start.go:83] releasing machines lock for "default-k8s-diff-port-653942", held for 4m34.567645643s
	W0416 00:59:21.078001   61267 start.go:713] error starting host: provision: host is not running
	W0416 00:59:21.078088   61267 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0416 00:59:21.078097   61267 start.go:728] Will try again in 5 seconds ...
	I0416 00:59:21.099788   61500 main.go:141] libmachine: (no-preload-572602) Calling .Start
	I0416 00:59:21.099966   61500 main.go:141] libmachine: (no-preload-572602) Ensuring networks are active...
	I0416 00:59:21.100656   61500 main.go:141] libmachine: (no-preload-572602) Ensuring network default is active
	I0416 00:59:21.100937   61500 main.go:141] libmachine: (no-preload-572602) Ensuring network mk-no-preload-572602 is active
	I0416 00:59:21.101282   61500 main.go:141] libmachine: (no-preload-572602) Getting domain xml...
	I0416 00:59:21.101905   61500 main.go:141] libmachine: (no-preload-572602) Creating domain...
	I0416 00:59:22.294019   61500 main.go:141] libmachine: (no-preload-572602) Waiting to get IP...
	I0416 00:59:22.294922   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:22.295294   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:22.295349   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:22.295262   62936 retry.go:31] will retry after 220.952312ms: waiting for machine to come up
	I0416 00:59:22.517753   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:22.518334   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:22.518358   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:22.518287   62936 retry.go:31] will retry after 377.547009ms: waiting for machine to come up
	I0416 00:59:26.081716   61267 start.go:360] acquireMachinesLock for default-k8s-diff-port-653942: {Name:mk92bff49461487f8cebf2747ccf61ccb9c772a2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 00:59:22.897924   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:22.898442   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:22.898465   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:22.898394   62936 retry.go:31] will retry after 450.415086ms: waiting for machine to come up
	I0416 00:59:23.349893   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:23.350383   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:23.350420   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:23.350333   62936 retry.go:31] will retry after 385.340718ms: waiting for machine to come up
	I0416 00:59:23.736854   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:23.737225   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:23.737262   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:23.737205   62936 retry.go:31] will retry after 696.175991ms: waiting for machine to come up
	I0416 00:59:24.435231   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:24.435587   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:24.435616   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:24.435557   62936 retry.go:31] will retry after 644.402152ms: waiting for machine to come up
	I0416 00:59:25.081355   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:25.081660   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:25.081697   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:25.081626   62936 retry.go:31] will retry after 809.585997ms: waiting for machine to come up
	I0416 00:59:25.892402   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:25.892767   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:25.892797   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:25.892722   62936 retry.go:31] will retry after 1.07477705s: waiting for machine to come up
	I0416 00:59:26.969227   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:26.969617   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:26.969646   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:26.969561   62936 retry.go:31] will retry after 1.243937595s: waiting for machine to come up
	I0416 00:59:28.214995   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:28.215412   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:28.215433   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:28.215364   62936 retry.go:31] will retry after 1.775188434s: waiting for machine to come up
	I0416 00:59:29.993420   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:29.993825   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:29.993853   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:29.993779   62936 retry.go:31] will retry after 2.73873778s: waiting for machine to come up
	I0416 00:59:32.735350   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:32.735758   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:32.735809   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:32.735721   62936 retry.go:31] will retry after 2.208871896s: waiting for machine to come up
	I0416 00:59:34.947005   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:34.947400   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:34.947431   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:34.947358   62936 retry.go:31] will retry after 4.484880009s: waiting for machine to come up
	I0416 00:59:40.669954   62139 start.go:364] duration metric: took 3m18.466569456s to acquireMachinesLock for "old-k8s-version-800769"
	I0416 00:59:40.670015   62139 start.go:96] Skipping create...Using existing machine configuration
	I0416 00:59:40.670038   62139 fix.go:54] fixHost starting: 
	I0416 00:59:40.670411   62139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:59:40.670448   62139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:59:40.686269   62139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39043
	I0416 00:59:40.686633   62139 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:59:40.687125   62139 main.go:141] libmachine: Using API Version  1
	I0416 00:59:40.687162   62139 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:59:40.687481   62139 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:59:40.687672   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	I0416 00:59:40.687838   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetState
	I0416 00:59:40.689108   62139 fix.go:112] recreateIfNeeded on old-k8s-version-800769: state=Stopped err=<nil>
	I0416 00:59:40.689132   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	W0416 00:59:40.689286   62139 fix.go:138] unexpected machine state, will restart: <nil>
	I0416 00:59:40.691869   62139 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-800769" ...
	I0416 00:59:40.693292   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .Start
	I0416 00:59:40.693450   62139 main.go:141] libmachine: (old-k8s-version-800769) Ensuring networks are active...
	I0416 00:59:40.694152   62139 main.go:141] libmachine: (old-k8s-version-800769) Ensuring network default is active
	I0416 00:59:40.694457   62139 main.go:141] libmachine: (old-k8s-version-800769) Ensuring network mk-old-k8s-version-800769 is active
	I0416 00:59:40.694883   62139 main.go:141] libmachine: (old-k8s-version-800769) Getting domain xml...
	I0416 00:59:40.695720   62139 main.go:141] libmachine: (old-k8s-version-800769) Creating domain...
	I0416 00:59:41.913001   62139 main.go:141] libmachine: (old-k8s-version-800769) Waiting to get IP...
	I0416 00:59:41.913874   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:41.914260   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:41.914318   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:41.914237   63071 retry.go:31] will retry after 261.032707ms: waiting for machine to come up
	I0416 00:59:39.436244   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.436664   61500 main.go:141] libmachine: (no-preload-572602) Found IP for machine: 192.168.39.121
	I0416 00:59:39.436686   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has current primary IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.436694   61500 main.go:141] libmachine: (no-preload-572602) Reserving static IP address...
	I0416 00:59:39.437114   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "no-preload-572602", mac: "52:54:00:fb:a5:f3", ip: "192.168.39.121"} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:39.437151   61500 main.go:141] libmachine: (no-preload-572602) Reserved static IP address: 192.168.39.121
	I0416 00:59:39.437183   61500 main.go:141] libmachine: (no-preload-572602) DBG | skip adding static IP to network mk-no-preload-572602 - found existing host DHCP lease matching {name: "no-preload-572602", mac: "52:54:00:fb:a5:f3", ip: "192.168.39.121"}
	I0416 00:59:39.437197   61500 main.go:141] libmachine: (no-preload-572602) Waiting for SSH to be available...
	I0416 00:59:39.437215   61500 main.go:141] libmachine: (no-preload-572602) DBG | Getting to WaitForSSH function...
	I0416 00:59:39.439255   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.439613   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:39.439642   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.439723   61500 main.go:141] libmachine: (no-preload-572602) DBG | Using SSH client type: external
	I0416 00:59:39.439756   61500 main.go:141] libmachine: (no-preload-572602) DBG | Using SSH private key: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/no-preload-572602/id_rsa (-rw-------)
	I0416 00:59:39.439799   61500 main.go:141] libmachine: (no-preload-572602) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.121 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18647-7542/.minikube/machines/no-preload-572602/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0416 00:59:39.439822   61500 main.go:141] libmachine: (no-preload-572602) DBG | About to run SSH command:
	I0416 00:59:39.439835   61500 main.go:141] libmachine: (no-preload-572602) DBG | exit 0
	I0416 00:59:39.565190   61500 main.go:141] libmachine: (no-preload-572602) DBG | SSH cmd err, output: <nil>: 
	I0416 00:59:39.565584   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetConfigRaw
	I0416 00:59:39.566223   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetIP
	I0416 00:59:39.568572   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.568869   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:39.568906   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.569083   61500 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602/config.json ...
	I0416 00:59:39.569300   61500 machine.go:94] provisionDockerMachine start ...
	I0416 00:59:39.569318   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 00:59:39.569526   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:59:39.571536   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.571842   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:39.571868   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.572004   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 00:59:39.572189   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:39.572352   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:39.572505   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 00:59:39.572751   61500 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:39.572974   61500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I0416 00:59:39.572991   61500 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 00:59:39.681544   61500 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 00:59:39.681574   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetMachineName
	I0416 00:59:39.681845   61500 buildroot.go:166] provisioning hostname "no-preload-572602"
	I0416 00:59:39.681874   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetMachineName
	I0416 00:59:39.682088   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:59:39.684694   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.685029   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:39.685063   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.685259   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 00:59:39.685453   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:39.685608   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:39.685737   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 00:59:39.685887   61500 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:39.686066   61500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I0416 00:59:39.686090   61500 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-572602 && echo "no-preload-572602" | sudo tee /etc/hostname
	I0416 00:59:39.804124   61500 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-572602
	
	I0416 00:59:39.804149   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:59:39.807081   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.807447   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:39.807480   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.807651   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 00:59:39.807860   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:39.808048   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:39.808202   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 00:59:39.808393   61500 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:39.808618   61500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I0416 00:59:39.808644   61500 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-572602' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-572602/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-572602' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 00:59:39.921781   61500 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 00:59:39.921824   61500 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18647-7542/.minikube CaCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18647-7542/.minikube}
	I0416 00:59:39.921847   61500 buildroot.go:174] setting up certificates
	I0416 00:59:39.921857   61500 provision.go:84] configureAuth start
	I0416 00:59:39.921872   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetMachineName
	I0416 00:59:39.922150   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetIP
	I0416 00:59:39.924726   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.925052   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:39.925081   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.925199   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:59:39.927315   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.927820   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:39.927869   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.927934   61500 provision.go:143] copyHostCerts
	I0416 00:59:39.928005   61500 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem, removing ...
	I0416 00:59:39.928031   61500 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem
	I0416 00:59:39.928122   61500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem (1082 bytes)
	I0416 00:59:39.928231   61500 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem, removing ...
	I0416 00:59:39.928241   61500 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem
	I0416 00:59:39.928284   61500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem (1123 bytes)
	I0416 00:59:39.928370   61500 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem, removing ...
	I0416 00:59:39.928379   61500 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem
	I0416 00:59:39.928428   61500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem (1675 bytes)
	I0416 00:59:39.928498   61500 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem org=jenkins.no-preload-572602 san=[127.0.0.1 192.168.39.121 localhost minikube no-preload-572602]
	I0416 00:59:40.000129   61500 provision.go:177] copyRemoteCerts
	I0416 00:59:40.000200   61500 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 00:59:40.000236   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:59:40.002726   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.003028   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:40.003057   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.003168   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 00:59:40.003351   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:40.003471   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 00:59:40.003577   61500 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/no-preload-572602/id_rsa Username:docker}
	I0416 00:59:40.087468   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0416 00:59:40.115336   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0416 00:59:40.142695   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0416 00:59:40.169631   61500 provision.go:87] duration metric: took 247.759459ms to configureAuth
	I0416 00:59:40.169657   61500 buildroot.go:189] setting minikube options for container-runtime
	I0416 00:59:40.169824   61500 config.go:182] Loaded profile config "no-preload-572602": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0416 00:59:40.169906   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:59:40.172164   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.172503   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:40.172531   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.172689   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 00:59:40.172875   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:40.173033   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:40.173182   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 00:59:40.173311   61500 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:40.173465   61500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I0416 00:59:40.173480   61500 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0416 00:59:40.437143   61500 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0416 00:59:40.437182   61500 machine.go:97] duration metric: took 867.868152ms to provisionDockerMachine
	I0416 00:59:40.437194   61500 start.go:293] postStartSetup for "no-preload-572602" (driver="kvm2")
	I0416 00:59:40.437211   61500 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 00:59:40.437233   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 00:59:40.437536   61500 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 00:59:40.437564   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:59:40.440246   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.440596   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:40.440637   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.440759   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 00:59:40.440981   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:40.441186   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 00:59:40.441319   61500 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/no-preload-572602/id_rsa Username:docker}
	I0416 00:59:40.524157   61500 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 00:59:40.528556   61500 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 00:59:40.528580   61500 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/addons for local assets ...
	I0416 00:59:40.528647   61500 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/files for local assets ...
	I0416 00:59:40.528756   61500 filesync.go:149] local asset: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem -> 148972.pem in /etc/ssl/certs
	I0416 00:59:40.528877   61500 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 00:59:40.538275   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /etc/ssl/certs/148972.pem (1708 bytes)
	I0416 00:59:40.562693   61500 start.go:296] duration metric: took 125.48438ms for postStartSetup
	I0416 00:59:40.562728   61500 fix.go:56] duration metric: took 19.484586221s for fixHost
	I0416 00:59:40.562746   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:59:40.565410   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.565717   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:40.565756   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.565920   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 00:59:40.566103   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:40.566269   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:40.566438   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 00:59:40.566587   61500 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:40.566738   61500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I0416 00:59:40.566749   61500 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 00:59:40.669778   61500 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713229180.641382554
	
	I0416 00:59:40.669802   61500 fix.go:216] guest clock: 1713229180.641382554
	I0416 00:59:40.669811   61500 fix.go:229] Guest: 2024-04-16 00:59:40.641382554 +0000 UTC Remote: 2024-04-16 00:59:40.56273146 +0000 UTC m=+293.069651959 (delta=78.651094ms)
	I0416 00:59:40.669839   61500 fix.go:200] guest clock delta is within tolerance: 78.651094ms
	I0416 00:59:40.669857   61500 start.go:83] releasing machines lock for "no-preload-572602", held for 19.591740017s
	I0416 00:59:40.669883   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 00:59:40.670163   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetIP
	I0416 00:59:40.672800   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.673187   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:40.673234   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.673386   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 00:59:40.673841   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 00:59:40.673993   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 00:59:40.674067   61500 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 00:59:40.674115   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:59:40.674155   61500 ssh_runner.go:195] Run: cat /version.json
	I0416 00:59:40.674174   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:59:40.676617   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.676776   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.677006   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:40.677030   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.677126   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 00:59:40.677277   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:40.677299   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.677336   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:40.677499   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 00:59:40.677511   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 00:59:40.677635   61500 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/no-preload-572602/id_rsa Username:docker}
	I0416 00:59:40.677768   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:40.678072   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 00:59:40.678224   61500 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/no-preload-572602/id_rsa Username:docker}
	I0416 00:59:40.787049   61500 ssh_runner.go:195] Run: systemctl --version
	I0416 00:59:40.793568   61500 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0416 00:59:40.941445   61500 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 00:59:40.949062   61500 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 00:59:40.949177   61500 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 00:59:40.966425   61500 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 00:59:40.966454   61500 start.go:494] detecting cgroup driver to use...
	I0416 00:59:40.966525   61500 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 00:59:40.985126   61500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 00:59:40.999931   61500 docker.go:217] disabling cri-docker service (if available) ...
	I0416 00:59:41.000004   61500 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 00:59:41.015597   61500 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 00:59:41.030610   61500 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 00:59:41.151240   61500 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 00:59:41.312384   61500 docker.go:233] disabling docker service ...
	I0416 00:59:41.312464   61500 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 00:59:41.329263   61500 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 00:59:41.345192   61500 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 00:59:41.463330   61500 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 00:59:41.595259   61500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0416 00:59:41.610495   61500 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 00:59:41.632527   61500 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0416 00:59:41.632580   61500 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:59:41.644625   61500 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 00:59:41.644723   61500 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:59:41.656056   61500 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:59:41.667069   61500 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:59:41.682783   61500 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 00:59:41.694760   61500 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:59:41.712505   61500 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:59:41.737338   61500 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:59:41.747518   61500 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 00:59:41.756586   61500 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0416 00:59:41.756656   61500 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0416 00:59:41.769230   61500 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 00:59:41.778424   61500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 00:59:41.894135   61500 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0416 00:59:42.039732   61500 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0416 00:59:42.039812   61500 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0416 00:59:42.044505   61500 start.go:562] Will wait 60s for crictl version
	I0416 00:59:42.044551   61500 ssh_runner.go:195] Run: which crictl
	I0416 00:59:42.049632   61500 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 00:59:42.106886   61500 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0416 00:59:42.106981   61500 ssh_runner.go:195] Run: crio --version
	I0416 00:59:42.137092   61500 ssh_runner.go:195] Run: crio --version
	I0416 00:59:42.170036   61500 out.go:177] * Preparing Kubernetes v1.30.0-rc.2 on CRI-O 1.29.1 ...
	I0416 00:59:42.171395   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetIP
	I0416 00:59:42.174790   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:42.175217   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:42.175250   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:42.175506   61500 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0416 00:59:42.180987   61500 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 00:59:42.198472   61500 kubeadm.go:877] updating cluster {Name:no-preload-572602 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.0-rc.2 ClusterName:no-preload-572602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m
0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 00:59:42.198595   61500 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime crio
	I0416 00:59:42.198639   61500 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 00:59:42.236057   61500 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0-rc.2". assuming images are not preloaded.
	I0416 00:59:42.236084   61500 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0-rc.2 registry.k8s.io/kube-controller-manager:v1.30.0-rc.2 registry.k8s.io/kube-scheduler:v1.30.0-rc.2 registry.k8s.io/kube-proxy:v1.30.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0416 00:59:42.236146   61500 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 00:59:42.236166   61500 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0416 00:59:42.236180   61500 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0-rc.2
	I0416 00:59:42.236182   61500 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0416 00:59:42.236212   61500 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0-rc.2
	I0416 00:59:42.236238   61500 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0416 00:59:42.236287   61500 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.2
	I0416 00:59:42.236164   61500 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0-rc.2
	I0416 00:59:42.237740   61500 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0416 00:59:42.237756   61500 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0416 00:59:42.237763   61500 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0-rc.2
	I0416 00:59:42.237779   61500 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0-rc.2
	I0416 00:59:42.237740   61500 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0416 00:59:42.237848   61500 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.2
	I0416 00:59:42.237847   61500 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 00:59:42.238087   61500 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0-rc.2
	I0416 00:59:42.410682   61500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0-rc.2
	I0416 00:59:42.445824   61500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0416 00:59:42.446874   61500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0416 00:59:42.448854   61500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0-rc.2
	I0416 00:59:42.449450   61500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0416 00:59:42.452121   61500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0-rc.2
	I0416 00:59:42.458966   61500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0-rc.2
	I0416 00:59:42.480556   61500 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0-rc.2" does not exist at hash "461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6" in container runtime
	I0416 00:59:42.480608   61500 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0-rc.2
	I0416 00:59:42.480670   61500 ssh_runner.go:195] Run: which crictl
	I0416 00:59:42.176660   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:42.177053   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:42.177084   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:42.177031   63071 retry.go:31] will retry after 268.951362ms: waiting for machine to come up
	I0416 00:59:42.447724   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:42.448132   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:42.448159   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:42.448097   63071 retry.go:31] will retry after 293.793417ms: waiting for machine to come up
	I0416 00:59:42.743375   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:42.743845   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:42.743874   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:42.743801   63071 retry.go:31] will retry after 494.163372ms: waiting for machine to come up
	I0416 00:59:43.239314   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:43.239761   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:43.239790   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:43.239708   63071 retry.go:31] will retry after 698.851999ms: waiting for machine to come up
	I0416 00:59:43.939998   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:43.940577   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:43.940607   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:43.940535   63071 retry.go:31] will retry after 764.693004ms: waiting for machine to come up
	I0416 00:59:44.706335   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:44.706673   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:44.706724   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:44.706626   63071 retry.go:31] will retry after 874.082115ms: waiting for machine to come up
	I0416 00:59:45.581896   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:45.582331   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:45.582361   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:45.582280   63071 retry.go:31] will retry after 966.259345ms: waiting for machine to come up
	I0416 00:59:46.550671   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:46.551111   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:46.551140   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:46.551062   63071 retry.go:31] will retry after 1.191034468s: waiting for machine to come up
	I0416 00:59:42.583284   61500 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0416 00:59:42.583332   61500 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0416 00:59:42.583377   61500 ssh_runner.go:195] Run: which crictl
	I0416 00:59:42.724785   61500 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0-rc.2" does not exist at hash "ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b" in container runtime
	I0416 00:59:42.724827   61500 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.2
	I0416 00:59:42.724878   61500 ssh_runner.go:195] Run: which crictl
	I0416 00:59:42.724899   61500 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0416 00:59:42.724938   61500 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0416 00:59:42.724938   61500 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0-rc.2" does not exist at hash "35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e" in container runtime
	I0416 00:59:42.724964   61500 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0-rc.2
	I0416 00:59:42.724979   61500 ssh_runner.go:195] Run: which crictl
	I0416 00:59:42.724993   61500 ssh_runner.go:195] Run: which crictl
	I0416 00:59:42.725019   61500 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0-rc.2" does not exist at hash "65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1" in container runtime
	I0416 00:59:42.725051   61500 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0-rc.2
	I0416 00:59:42.725063   61500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0-rc.2
	I0416 00:59:42.725088   61500 ssh_runner.go:195] Run: which crictl
	I0416 00:59:42.725102   61500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0416 00:59:42.739346   61500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0416 00:59:42.739764   61500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0-rc.2
	I0416 00:59:42.787888   61500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0-rc.2
	I0416 00:59:42.787977   61500 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.2
	I0416 00:59:42.788024   61500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0-rc.2
	I0416 00:59:42.788084   61500 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.2
	I0416 00:59:42.815167   61500 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0416 00:59:42.815274   61500 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0416 00:59:42.845627   61500 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0416 00:59:42.845741   61500 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0416 00:59:42.848065   61500 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.2
	I0416 00:59:42.848134   61500 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.2
	I0416 00:59:42.880543   61500 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.2
	I0416 00:59:42.880557   61500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.2 (exists)
	I0416 00:59:42.880575   61500 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.2
	I0416 00:59:42.880628   61500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.2
	I0416 00:59:42.880648   61500 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.2
	I0416 00:59:42.907207   61500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0416 00:59:42.907245   61500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0416 00:59:42.907269   61500 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.2
	I0416 00:59:42.907295   61500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.2 (exists)
	I0416 00:59:42.907334   61500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.2 (exists)
	I0416 00:59:42.907350   61500 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0-rc.2
	I0416 00:59:43.138705   61500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 00:59:44.951278   61500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.2: (2.07061835s)
	I0416 00:59:44.951295   61500 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.30.0-rc.2: (2.04392036s)
	I0416 00:59:44.951348   61500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0-rc.2 (exists)
	I0416 00:59:44.951309   61500 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.2 from cache
	I0416 00:59:44.951364   61500 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.812619758s)
	I0416 00:59:44.951410   61500 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0416 00:59:44.951448   61500 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 00:59:44.951374   61500 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0416 00:59:44.951506   61500 ssh_runner.go:195] Run: which crictl
	I0416 00:59:44.951508   61500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0416 00:59:47.744187   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:47.744683   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:47.744712   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:47.744637   63071 retry.go:31] will retry after 2.263605663s: waiting for machine to come up
	I0416 00:59:50.011136   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:50.011605   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:50.011632   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:50.011566   63071 retry.go:31] will retry after 2.648982849s: waiting for machine to come up
	I0416 00:59:48.656623   61500 ssh_runner.go:235] Completed: which crictl: (3.705085257s)
	I0416 00:59:48.656705   61500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 00:59:48.656715   61500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (3.705109475s)
	I0416 00:59:48.656743   61500 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0416 00:59:48.656769   61500 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0416 00:59:48.656798   61500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0416 00:59:50.560030   61500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.903209359s)
	I0416 00:59:50.560071   61500 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0416 00:59:50.560085   61500 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.90335887s)
	I0416 00:59:50.560096   61500 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.2
	I0416 00:59:50.560148   61500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.2
	I0416 00:59:50.560151   61500 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0416 00:59:50.560309   61500 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0416 00:59:52.662443   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:52.662852   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:52.662883   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:52.662815   63071 retry.go:31] will retry after 2.183508059s: waiting for machine to come up
	I0416 00:59:54.849225   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:54.849701   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:54.849734   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:54.849649   63071 retry.go:31] will retry after 3.201585234s: waiting for machine to come up
	I0416 00:59:52.739620   61500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.2: (2.179436189s)
	I0416 00:59:52.739658   61500 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.2 from cache
	I0416 00:59:52.739688   61500 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.2
	I0416 00:59:52.739697   61500 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.179365348s)
	I0416 00:59:52.739724   61500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0416 00:59:52.739747   61500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.2
	I0416 00:59:55.098350   61500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.2: (2.358579586s)
	I0416 00:59:55.098381   61500 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.2 from cache
	I0416 00:59:55.098408   61500 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0-rc.2
	I0416 00:59:55.098454   61500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.2
	I0416 00:59:57.166586   61500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.2: (2.068105529s)
	I0416 00:59:57.166615   61500 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.2 from cache
	I0416 00:59:57.166644   61500 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0416 00:59:57.166697   61500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0416 00:59:59.394339   62747 start.go:364] duration metric: took 1m16.499681915s to acquireMachinesLock for "embed-certs-617092"
	I0416 00:59:59.394389   62747 start.go:96] Skipping create...Using existing machine configuration
	I0416 00:59:59.394412   62747 fix.go:54] fixHost starting: 
	I0416 00:59:59.394834   62747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:59:59.394896   62747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:59:59.414712   62747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38637
	I0416 00:59:59.415464   62747 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:59:59.416123   62747 main.go:141] libmachine: Using API Version  1
	I0416 00:59:59.416150   62747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:59:59.416436   62747 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:59:59.416623   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 00:59:59.416786   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetState
	I0416 00:59:59.418413   62747 fix.go:112] recreateIfNeeded on embed-certs-617092: state=Stopped err=<nil>
	I0416 00:59:59.418449   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	W0416 00:59:59.418609   62747 fix.go:138] unexpected machine state, will restart: <nil>
	I0416 00:59:59.420560   62747 out.go:177] * Restarting existing kvm2 VM for "embed-certs-617092" ...
	I0416 00:59:58.052613   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.053048   62139 main.go:141] libmachine: (old-k8s-version-800769) Found IP for machine: 192.168.83.98
	I0416 00:59:58.053073   62139 main.go:141] libmachine: (old-k8s-version-800769) Reserving static IP address...
	I0416 00:59:58.053089   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has current primary IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.053517   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "old-k8s-version-800769", mac: "52:54:00:a1:ad:da", ip: "192.168.83.98"} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.053549   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | skip adding static IP to network mk-old-k8s-version-800769 - found existing host DHCP lease matching {name: "old-k8s-version-800769", mac: "52:54:00:a1:ad:da", ip: "192.168.83.98"}
	I0416 00:59:58.053569   62139 main.go:141] libmachine: (old-k8s-version-800769) Reserved static IP address: 192.168.83.98
	I0416 00:59:58.053587   62139 main.go:141] libmachine: (old-k8s-version-800769) Waiting for SSH to be available...
	I0416 00:59:58.053602   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | Getting to WaitForSSH function...
	I0416 00:59:58.055598   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.055907   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.055941   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.056038   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | Using SSH client type: external
	I0416 00:59:58.056088   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | Using SSH private key: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769/id_rsa (-rw-------)
	I0416 00:59:58.056132   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.98 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0416 00:59:58.056149   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | About to run SSH command:
	I0416 00:59:58.056162   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | exit 0
	I0416 00:59:58.185675   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | SSH cmd err, output: <nil>: 
	I0416 00:59:58.186055   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetConfigRaw
	I0416 00:59:58.186802   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetIP
	I0416 00:59:58.189772   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.190219   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.190257   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.190448   62139 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/config.json ...
	I0416 00:59:58.190666   62139 machine.go:94] provisionDockerMachine start ...
	I0416 00:59:58.190685   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	I0416 00:59:58.190902   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:58.193570   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.193954   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.193982   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.194139   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:58.194337   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.194492   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.194636   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:58.194786   62139 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:58.195041   62139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.83.98 22 <nil> <nil>}
	I0416 00:59:58.195056   62139 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 00:59:58.321824   62139 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 00:59:58.321857   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetMachineName
	I0416 00:59:58.322146   62139 buildroot.go:166] provisioning hostname "old-k8s-version-800769"
	I0416 00:59:58.322175   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetMachineName
	I0416 00:59:58.322381   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:58.324941   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.325288   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.325316   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.325423   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:58.325613   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.325776   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.325936   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:58.326109   62139 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:58.326322   62139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.83.98 22 <nil> <nil>}
	I0416 00:59:58.326339   62139 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-800769 && echo "old-k8s-version-800769" | sudo tee /etc/hostname
	I0416 00:59:58.455194   62139 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-800769
	
	I0416 00:59:58.455236   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:58.458021   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.458423   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.458458   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.458662   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:58.458848   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.459013   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.459162   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:58.459353   62139 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:58.459507   62139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.83.98 22 <nil> <nil>}
	I0416 00:59:58.459524   62139 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-800769' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-800769/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-800769' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 00:59:58.587318   62139 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 00:59:58.587351   62139 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18647-7542/.minikube CaCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18647-7542/.minikube}
	I0416 00:59:58.587391   62139 buildroot.go:174] setting up certificates
	I0416 00:59:58.587400   62139 provision.go:84] configureAuth start
	I0416 00:59:58.587413   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetMachineName
	I0416 00:59:58.587686   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetIP
	I0416 00:59:58.590415   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.590739   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.590778   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.590880   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:58.593282   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.593728   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.593759   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.593931   62139 provision.go:143] copyHostCerts
	I0416 00:59:58.593988   62139 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem, removing ...
	I0416 00:59:58.594007   62139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem
	I0416 00:59:58.594079   62139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem (1082 bytes)
	I0416 00:59:58.594213   62139 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem, removing ...
	I0416 00:59:58.594222   62139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem
	I0416 00:59:58.594263   62139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem (1123 bytes)
	I0416 00:59:58.594372   62139 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem, removing ...
	I0416 00:59:58.594383   62139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem
	I0416 00:59:58.594408   62139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem (1675 bytes)
	I0416 00:59:58.594470   62139 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-800769 san=[127.0.0.1 192.168.83.98 localhost minikube old-k8s-version-800769]
	I0416 00:59:58.692127   62139 provision.go:177] copyRemoteCerts
	I0416 00:59:58.692197   62139 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 00:59:58.692232   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:58.694858   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.695231   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.695278   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.695507   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:58.695693   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.695852   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:58.695994   62139 sshutil.go:53] new ssh client: &{IP:192.168.83.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769/id_rsa Username:docker}
	I0416 00:59:58.783458   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0416 00:59:58.811124   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0416 00:59:58.836495   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0416 00:59:58.862044   62139 provision.go:87] duration metric: took 274.632117ms to configureAuth
	I0416 00:59:58.862068   62139 buildroot.go:189] setting minikube options for container-runtime
	I0416 00:59:58.862278   62139 config.go:182] Loaded profile config "old-k8s-version-800769": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0416 00:59:58.862361   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:58.865352   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.865795   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.865829   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.866043   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:58.866228   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.866435   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.866625   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:58.866805   62139 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:58.867008   62139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.83.98 22 <nil> <nil>}
	I0416 00:59:58.867026   62139 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0416 00:59:59.143874   62139 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0416 00:59:59.143900   62139 machine.go:97] duration metric: took 953.218972ms to provisionDockerMachine
	I0416 00:59:59.143914   62139 start.go:293] postStartSetup for "old-k8s-version-800769" (driver="kvm2")
	I0416 00:59:59.143927   62139 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 00:59:59.143972   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	I0416 00:59:59.144277   62139 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 00:59:59.144302   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:59.147021   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.147355   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:59.147385   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.147649   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:59.147871   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:59.148036   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:59.148174   62139 sshutil.go:53] new ssh client: &{IP:192.168.83.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769/id_rsa Username:docker}
	I0416 00:59:59.236981   62139 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 00:59:59.241388   62139 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 00:59:59.241411   62139 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/addons for local assets ...
	I0416 00:59:59.241469   62139 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/files for local assets ...
	I0416 00:59:59.241534   62139 filesync.go:149] local asset: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem -> 148972.pem in /etc/ssl/certs
	I0416 00:59:59.241619   62139 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 00:59:59.251688   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /etc/ssl/certs/148972.pem (1708 bytes)
	I0416 00:59:59.275189   62139 start.go:296] duration metric: took 131.262042ms for postStartSetup
	I0416 00:59:59.275227   62139 fix.go:56] duration metric: took 18.605201288s for fixHost
	I0416 00:59:59.275250   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:59.277804   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.278153   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:59.278186   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.278341   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:59.278581   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:59.278741   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:59.278908   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:59.279068   62139 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:59.279233   62139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.83.98 22 <nil> <nil>}
	I0416 00:59:59.279243   62139 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 00:59:59.394108   62139 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713229199.360202150
	
	I0416 00:59:59.394141   62139 fix.go:216] guest clock: 1713229199.360202150
	I0416 00:59:59.394152   62139 fix.go:229] Guest: 2024-04-16 00:59:59.36020215 +0000 UTC Remote: 2024-04-16 00:59:59.27523174 +0000 UTC m=+217.222314955 (delta=84.97041ms)
	I0416 00:59:59.394211   62139 fix.go:200] guest clock delta is within tolerance: 84.97041ms
	I0416 00:59:59.394218   62139 start.go:83] releasing machines lock for "old-k8s-version-800769", held for 18.724230851s
	I0416 00:59:59.394252   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	I0416 00:59:59.394554   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetIP
	I0416 00:59:59.397241   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.397670   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:59.397703   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.397897   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	I0416 00:59:59.398460   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	I0416 00:59:59.398650   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	I0416 00:59:59.398740   62139 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 00:59:59.398782   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:59.399049   62139 ssh_runner.go:195] Run: cat /version.json
	I0416 00:59:59.399072   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:59.401397   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.401656   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.401802   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:59.401825   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.401964   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:59.402017   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.402089   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:59.402173   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:59.402248   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:59.402320   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:59.402376   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:59.402430   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:59.402577   62139 sshutil.go:53] new ssh client: &{IP:192.168.83.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769/id_rsa Username:docker}
	I0416 00:59:59.402638   62139 sshutil.go:53] new ssh client: &{IP:192.168.83.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769/id_rsa Username:docker}
	I0416 00:59:59.481834   62139 ssh_runner.go:195] Run: systemctl --version
	I0416 00:59:59.516372   62139 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0416 00:59:59.666722   62139 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 00:59:59.674165   62139 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 00:59:59.674226   62139 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 00:59:59.695545   62139 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 00:59:59.695573   62139 start.go:494] detecting cgroup driver to use...
	I0416 00:59:59.695646   62139 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 00:59:59.715091   62139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 00:59:59.732004   62139 docker.go:217] disabling cri-docker service (if available) ...
	I0416 00:59:59.732060   62139 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 00:59:59.753217   62139 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 00:59:59.768513   62139 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 00:59:59.898693   62139 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 01:00:00.066535   62139 docker.go:233] disabling docker service ...
	I0416 01:00:00.066607   62139 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 01:00:00.084512   62139 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 01:00:00.097714   62139 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 01:00:00.232901   62139 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 01:00:00.378379   62139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
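	The block above moves the node off Docker before CRI-O is configured. A minimal shell sketch of the same sequence, assuming systemd units named as in the log (cri-docker.socket/.service and docker.socket/.service); this is an illustration of the steps, not the harness code itself:
	# stop and mask the cri-dockerd shim so the kubelet cannot fall back to it
	sudo systemctl stop -f cri-docker.socket cri-docker.service
	sudo systemctl disable cri-docker.socket
	sudo systemctl mask cri-docker.service
	# then stop and mask the Docker engine itself
	sudo systemctl stop -f docker.socket docker.service
	sudo systemctl disable docker.socket
	sudo systemctl mask docker.service
	# verify nothing is still running
	sudo systemctl is-active --quiet docker && echo "docker still active"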
	I0416 01:00:00.395191   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 01:00:00.416631   62139 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0416 01:00:00.416695   62139 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:00.428712   62139 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 01:00:00.428774   62139 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:00.442687   62139 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:00.454631   62139 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:00.466151   62139 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 01:00:00.478459   62139 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 01:00:00.489957   62139 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0416 01:00:00.490035   62139 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0416 01:00:00.506087   62139 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 01:00:00.518100   62139 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 01:00:00.676317   62139 ssh_runner.go:195] Run: sudo systemctl restart crio
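	Taken together, the commands above amount to the following CRI-O preparation; a condensed sketch using the /etc/crio/crio.conf.d/02-crio.conf drop-in and the pause image version shown in the log:
	# point crictl at the CRI-O socket
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	# pin the pause image and switch CRI-O to the cgroupfs cgroup driver
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	# the sysctl probe above failed only because br_netfilter was not loaded yet
	sudo modprobe br_netfilter
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload && sudo systemctl restart crio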
	I0416 01:00:00.869766   62139 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0416 01:00:00.869855   62139 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0416 01:00:00.875363   62139 start.go:562] Will wait 60s for crictl version
	I0416 01:00:00.875424   62139 ssh_runner.go:195] Run: which crictl
	I0416 01:00:00.880947   62139 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 01:00:00.924780   62139 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0416 01:00:00.924852   62139 ssh_runner.go:195] Run: crio --version
	I0416 01:00:00.958390   62139 ssh_runner.go:195] Run: crio --version
	I0416 01:00:00.993114   62139 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0416 01:00:00.994513   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetIP
	I0416 01:00:00.997571   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 01:00:00.998032   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 01:00:00.998065   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 01:00:00.998273   62139 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0416 01:00:01.002750   62139 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
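	The host entry is added with a rewrite-then-copy pattern so repeated runs do not accumulate duplicate lines; roughly (IP and hostname as in the log):
	# drop any stale host.minikube.internal entry, append the current mapping, then install the file
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; printf '192.168.83.1\thost.minikube.internal\n'; } > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts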
	I0416 01:00:01.015709   62139 kubeadm.go:877] updating cluster {Name:old-k8s-version-800769 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-800769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.98 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 01:00:01.015810   62139 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0416 01:00:01.015853   62139 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 01:00:01.063257   62139 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0416 01:00:01.063331   62139 ssh_runner.go:195] Run: which lz4
	I0416 01:00:01.067973   62139 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0416 01:00:01.072369   62139 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0416 01:00:01.072400   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0416 00:59:57.817013   61500 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0416 00:59:57.817060   61500 cache_images.go:123] Successfully loaded all cached images
	I0416 00:59:57.817073   61500 cache_images.go:92] duration metric: took 15.580967615s to LoadCachedImages
	I0416 00:59:57.817087   61500 kubeadm.go:928] updating node { 192.168.39.121 8443 v1.30.0-rc.2 crio true true} ...
	I0416 00:59:57.817241   61500 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-572602 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.121
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-rc.2 ClusterName:no-preload-572602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 00:59:57.817324   61500 ssh_runner.go:195] Run: crio config
	I0416 00:59:57.866116   61500 cni.go:84] Creating CNI manager for ""
	I0416 00:59:57.866140   61500 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 00:59:57.866154   61500 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 00:59:57.866189   61500 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.121 APIServerPort:8443 KubernetesVersion:v1.30.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-572602 NodeName:no-preload-572602 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.121"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.121 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 00:59:57.866325   61500 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.121
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-572602"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.121
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.121"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0416 00:59:57.866390   61500 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-rc.2
	I0416 00:59:57.876619   61500 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 00:59:57.876689   61500 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0416 00:59:57.886472   61500 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0416 00:59:57.903172   61500 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0416 00:59:57.919531   61500 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0416 00:59:57.936394   61500 ssh_runner.go:195] Run: grep 192.168.39.121	control-plane.minikube.internal$ /etc/hosts
	I0416 00:59:57.940161   61500 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.121	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 00:59:57.951997   61500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 00:59:58.089553   61500 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 00:59:58.117870   61500 certs.go:68] Setting up /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602 for IP: 192.168.39.121
	I0416 00:59:58.117926   61500 certs.go:194] generating shared ca certs ...
	I0416 00:59:58.117949   61500 certs.go:226] acquiring lock for ca certs: {Name:mkcfa1570e683d94647c63485e1bbb8cf0788316 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 00:59:58.118136   61500 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key
	I0416 00:59:58.118199   61500 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key
	I0416 00:59:58.118216   61500 certs.go:256] generating profile certs ...
	I0416 00:59:58.118351   61500 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602/client.key
	I0416 00:59:58.118446   61500 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602/apiserver.key.a3b1330f
	I0416 00:59:58.118505   61500 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602/proxy-client.key
	I0416 00:59:58.118664   61500 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem (1338 bytes)
	W0416 00:59:58.118708   61500 certs.go:480] ignoring /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897_empty.pem, impossibly tiny 0 bytes
	I0416 00:59:58.118721   61500 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem (1679 bytes)
	I0416 00:59:58.118756   61500 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem (1082 bytes)
	I0416 00:59:58.118786   61500 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem (1123 bytes)
	I0416 00:59:58.118814   61500 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem (1675 bytes)
	I0416 00:59:58.118874   61500 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem (1708 bytes)
	I0416 00:59:58.119738   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 00:59:58.150797   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 00:59:58.181693   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 00:59:58.231332   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0416 00:59:58.276528   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0416 00:59:58.301000   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0416 00:59:58.326090   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 00:59:58.350254   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0416 00:59:58.377597   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem --> /usr/share/ca-certificates/14897.pem (1338 bytes)
	I0416 00:59:58.401548   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /usr/share/ca-certificates/148972.pem (1708 bytes)
	I0416 00:59:58.425237   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 00:59:58.449748   61500 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 00:59:58.468346   61500 ssh_runner.go:195] Run: openssl version
	I0416 00:59:58.474164   61500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14897.pem && ln -fs /usr/share/ca-certificates/14897.pem /etc/ssl/certs/14897.pem"
	I0416 00:59:58.485674   61500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14897.pem
	I0416 00:59:58.490136   61500 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 23:49 /usr/share/ca-certificates/14897.pem
	I0416 00:59:58.490203   61500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14897.pem
	I0416 00:59:58.495781   61500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14897.pem /etc/ssl/certs/51391683.0"
	I0416 00:59:58.507047   61500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148972.pem && ln -fs /usr/share/ca-certificates/148972.pem /etc/ssl/certs/148972.pem"
	I0416 00:59:58.518007   61500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148972.pem
	I0416 00:59:58.522317   61500 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 23:49 /usr/share/ca-certificates/148972.pem
	I0416 00:59:58.522364   61500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148972.pem
	I0416 00:59:58.527809   61500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148972.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 00:59:58.538579   61500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 00:59:58.549188   61500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 00:59:58.553688   61500 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0416 00:59:58.553732   61500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 00:59:58.559175   61500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 00:59:58.570142   61500 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 00:59:58.574657   61500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0416 00:59:58.580560   61500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0416 00:59:58.586319   61500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0416 00:59:58.593938   61500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0416 00:59:58.599808   61500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0416 00:59:58.605583   61500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
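	The certificate handling above reduces to two operations: trusting a CA by its OpenSSL subject-hash symlink under /etc/ssl/certs, and checking that the cluster certificates are not about to expire. A sketch with the paths taken from the log:
	# trust the minikube CA: link it into /etc/ssl/certs and add the subject-hash symlink
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
	# -checkend 86400 exits non-zero if the certificate expires within the next 24 hours
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400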
	I0416 00:59:58.611301   61500 kubeadm.go:391] StartCluster: {Name:no-preload-572602 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.0-rc.2 ClusterName:no-preload-572602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 00:59:58.611385   61500 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0416 00:59:58.611439   61500 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 00:59:58.655244   61500 cri.go:89] found id: ""
	I0416 00:59:58.655315   61500 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0416 00:59:58.667067   61500 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0416 00:59:58.667082   61500 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0416 00:59:58.667088   61500 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0416 00:59:58.667128   61500 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0416 00:59:58.678615   61500 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0416 00:59:58.680097   61500 kubeconfig.go:125] found "no-preload-572602" server: "https://192.168.39.121:8443"
	I0416 00:59:58.683135   61500 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0416 00:59:58.695291   61500 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.121
	I0416 00:59:58.695323   61500 kubeadm.go:1154] stopping kube-system containers ...
	I0416 00:59:58.695337   61500 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0416 00:59:58.695380   61500 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 00:59:58.731743   61500 cri.go:89] found id: ""
	I0416 00:59:58.731832   61500 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0416 00:59:58.748125   61500 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 00:59:58.757845   61500 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 00:59:58.757865   61500 kubeadm.go:156] found existing configuration files:
	
	I0416 00:59:58.757918   61500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 00:59:58.766993   61500 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 00:59:58.767036   61500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 00:59:58.776831   61500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 00:59:58.786420   61500 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 00:59:58.786467   61500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 00:59:58.796067   61500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 00:59:58.805385   61500 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 00:59:58.805511   61500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 00:59:58.815313   61500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 00:59:58.826551   61500 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 00:59:58.826603   61500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 00:59:58.836652   61500 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 00:59:58.848671   61500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 00:59:58.967511   61500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:00.416009   61500 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.44846758s)
	I0416 01:00:00.416041   61500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:00.657784   61500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:00.741694   61500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
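	For a restart, the individual kubeadm init phases are re-run against the previously generated config rather than performing a full init; the equivalent commands, with the binaries directory from the log placed on PATH:
	CONFIG=/var/tmp/minikube/kubeadm.yaml
	BIN=/var/lib/minikube/binaries/v1.30.0-rc.2
	sudo env PATH="$BIN:$PATH" kubeadm init phase certs all --config "$CONFIG"
	sudo env PATH="$BIN:$PATH" kubeadm init phase kubeconfig all --config "$CONFIG"
	sudo env PATH="$BIN:$PATH" kubeadm init phase kubelet-start --config "$CONFIG"
	sudo env PATH="$BIN:$PATH" kubeadm init phase control-plane all --config "$CONFIG"
	sudo env PATH="$BIN:$PATH" kubeadm init phase etcd local --config "$CONFIG"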
	I0416 01:00:00.876550   61500 api_server.go:52] waiting for apiserver process to appear ...
	I0416 01:00:00.876630   61500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:01.377586   61500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:01.877647   61500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:01.950167   61500 api_server.go:72] duration metric: took 1.073614574s to wait for apiserver process to appear ...
	I0416 01:00:01.950201   61500 api_server.go:88] waiting for apiserver healthz status ...
	I0416 01:00:01.950224   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 01:00:01.950854   61500 api_server.go:269] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
	I0416 01:00:02.450437   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
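	The health check simply polls the apiserver's /healthz endpoint until it answers: connection refused means the process is not up yet, a 403 for system:anonymous means the RBAC bootstrap roles are still being created, and a 500 lists the post-start hooks that have not yet finished. A hand-run equivalent, assuming curl is available on the host (not the harness's own client):
	# -f makes curl fail on 403/500, so the loop only exits once /healthz returns ok
	until curl -kfsS https://192.168.39.121:8443/healthz; do sleep 1; done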
	I0416 00:59:59.421878   62747 main.go:141] libmachine: (embed-certs-617092) Calling .Start
	I0416 00:59:59.422036   62747 main.go:141] libmachine: (embed-certs-617092) Ensuring networks are active...
	I0416 00:59:59.422646   62747 main.go:141] libmachine: (embed-certs-617092) Ensuring network default is active
	I0416 00:59:59.422931   62747 main.go:141] libmachine: (embed-certs-617092) Ensuring network mk-embed-certs-617092 is active
	I0416 00:59:59.423360   62747 main.go:141] libmachine: (embed-certs-617092) Getting domain xml...
	I0416 00:59:59.424005   62747 main.go:141] libmachine: (embed-certs-617092) Creating domain...
	I0416 01:00:00.682582   62747 main.go:141] libmachine: (embed-certs-617092) Waiting to get IP...
	I0416 01:00:00.683684   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:00.684222   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:00.684277   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:00.684198   63257 retry.go:31] will retry after 196.582767ms: waiting for machine to come up
	I0416 01:00:00.882954   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:00.883544   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:00.883577   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:00.883482   63257 retry.go:31] will retry after 309.274692ms: waiting for machine to come up
	I0416 01:00:01.193848   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:01.194286   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:01.194325   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:01.194234   63257 retry.go:31] will retry after 379.332728ms: waiting for machine to come up
	I0416 01:00:01.574938   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:01.575371   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:01.575400   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:01.575318   63257 retry.go:31] will retry after 445.10423ms: waiting for machine to come up
	I0416 01:00:02.022081   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:02.022612   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:02.022636   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:02.022570   63257 retry.go:31] will retry after 692.025501ms: waiting for machine to come up
	I0416 01:00:02.716548   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:02.717032   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:02.717061   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:02.716992   63257 retry.go:31] will retry after 735.44304ms: waiting for machine to come up
	I0416 01:00:02.891638   62139 crio.go:462] duration metric: took 1.823700483s to copy over tarball
	I0416 01:00:02.891723   62139 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0416 01:00:06.137253   62139 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.245498092s)
	I0416 01:00:06.137283   62139 crio.go:469] duration metric: took 3.245614896s to extract the tarball
	I0416 01:00:06.137292   62139 ssh_runner.go:146] rm: /preloaded.tar.lz4
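	Extracting the preload is a plain lz4-compressed tar unpack into /var, keeping security.capability extended attributes on the unpacked files; roughly:
	# unpack the preloaded image tarball into /var, preserving file capabilities, then remove it
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm -f /preloaded.tar.lz4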
	I0416 01:00:06.181260   62139 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 01:00:06.224646   62139 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0416 01:00:06.224682   62139 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0416 01:00:06.224762   62139 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 01:00:06.224815   62139 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0416 01:00:06.224851   62139 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0416 01:00:06.224821   62139 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0416 01:00:06.224768   62139 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0416 01:00:06.224797   62139 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0416 01:00:06.225121   62139 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0416 01:00:06.224797   62139 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0416 01:00:06.226485   62139 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0416 01:00:06.226505   62139 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0416 01:00:06.226516   62139 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0416 01:00:06.226580   62139 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0416 01:00:06.226729   62139 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0416 01:00:06.227296   62139 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 01:00:06.227311   62139 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0416 01:00:06.227315   62139 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0416 01:00:06.397101   62139 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0416 01:00:06.431142   62139 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0416 01:00:06.433152   62139 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0416 01:00:06.433876   62139 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0416 01:00:06.434844   62139 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0416 01:00:06.441478   62139 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0416 01:00:06.441524   62139 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0416 01:00:06.441558   62139 ssh_runner.go:195] Run: which crictl
	I0416 01:00:06.450391   62139 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0416 01:00:06.506375   62139 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0416 01:00:06.540080   62139 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0416 01:00:06.540250   62139 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0416 01:00:06.540121   62139 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0416 01:00:06.540299   62139 ssh_runner.go:195] Run: which crictl
	I0416 01:00:06.540305   62139 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0416 01:00:06.540343   62139 ssh_runner.go:195] Run: which crictl
	I0416 01:00:06.613287   62139 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0416 01:00:06.613305   62139 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0416 01:00:06.613334   62139 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0416 01:00:06.613339   62139 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0416 01:00:06.613381   62139 ssh_runner.go:195] Run: which crictl
	I0416 01:00:06.613381   62139 ssh_runner.go:195] Run: which crictl
	I0416 01:00:06.613490   62139 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0416 01:00:06.613522   62139 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0416 01:00:06.613569   62139 ssh_runner.go:195] Run: which crictl
	I0416 01:00:06.613384   62139 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0416 01:00:06.613620   62139 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0416 01:00:06.613657   62139 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0416 01:00:06.613716   62139 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0416 01:00:06.613722   62139 ssh_runner.go:195] Run: which crictl
	I0416 01:00:06.613665   62139 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0416 01:00:06.619153   62139 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0416 01:00:06.638065   62139 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0416 01:00:06.734018   62139 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0416 01:00:06.734134   62139 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0416 01:00:06.749273   62139 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0416 01:00:06.750536   62139 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0416 01:00:06.750576   62139 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0416 01:00:06.750655   62139 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0416 01:00:06.750594   62139 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0416 01:00:06.790321   62139 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0416 01:00:06.803564   62139 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0416 01:00:07.060494   62139 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 01:00:05.541219   61500 api_server.go:279] https://192.168.39.121:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0416 01:00:05.541261   61500 api_server.go:103] status: https://192.168.39.121:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0416 01:00:05.541279   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 01:00:05.585252   61500 api_server.go:279] https://192.168.39.121:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0416 01:00:05.585284   61500 api_server.go:103] status: https://192.168.39.121:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0416 01:00:05.950871   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 01:00:05.970682   61500 api_server.go:279] https://192.168.39.121:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0416 01:00:05.970725   61500 api_server.go:103] status: https://192.168.39.121:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0416 01:00:06.450780   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 01:00:06.457855   61500 api_server.go:279] https://192.168.39.121:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0416 01:00:06.457888   61500 api_server.go:103] status: https://192.168.39.121:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0416 01:00:06.950519   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 01:00:06.955476   61500 api_server.go:279] https://192.168.39.121:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0416 01:00:06.955505   61500 api_server.go:103] status: https://192.168.39.121:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0416 01:00:07.451155   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 01:00:07.463138   61500 api_server.go:279] https://192.168.39.121:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0416 01:00:07.463172   61500 api_server.go:103] status: https://192.168.39.121:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0416 01:00:03.453566   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:03.454098   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:03.454131   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:03.454033   63257 retry.go:31] will retry after 838.732671ms: waiting for machine to come up
	I0416 01:00:04.294692   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:04.295209   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:04.295237   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:04.295158   63257 retry.go:31] will retry after 1.302969512s: waiting for machine to come up
	I0416 01:00:05.599886   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:05.600406   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:05.600435   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:05.600378   63257 retry.go:31] will retry after 1.199501225s: waiting for machine to come up
	I0416 01:00:06.801741   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:06.802134   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:06.802153   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:06.802107   63257 retry.go:31] will retry after 1.631018672s: waiting for machine to come up
	I0416 01:00:07.951263   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 01:00:07.961911   61500 api_server.go:279] https://192.168.39.121:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0416 01:00:07.961946   61500 api_server.go:103] status: https://192.168.39.121:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0416 01:00:08.450413   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 01:00:08.458651   61500 api_server.go:279] https://192.168.39.121:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0416 01:00:08.458683   61500 api_server.go:103] status: https://192.168.39.121:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0416 01:00:08.950297   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 01:00:08.955847   61500 api_server.go:279] https://192.168.39.121:8443/healthz returned 200:
	ok
	I0416 01:00:08.964393   61500 api_server.go:141] control plane version: v1.30.0-rc.2
	I0416 01:00:08.964422   61500 api_server.go:131] duration metric: took 7.01421218s to wait for apiserver health ...
	I0416 01:00:08.964432   61500 cni.go:84] Creating CNI manager for ""
	I0416 01:00:08.964445   61500 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 01:00:08.966249   61500 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
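The block above (PID 61500) records minikube's api_server.go polling https://192.168.39.121:8443/healthz roughly twice a second: 403 while RBAC still rejects the anonymous probe, then 500 while poststarthooks such as rbac/bootstrap-roles are still failing, and finally 200 ("ok") after about 7 seconds. A minimal, self-contained sketch of such a wait loop is shown here; it assumes anonymous access with TLS verification disabled and only approximates what the logged code does, it is not minikube's actual implementation.

// Illustrative sketch only: poll an apiserver /healthz endpoint until it
// returns 200 or a deadline passes, mirroring the polling recorded above.
// URL and timings are taken from the log; everything else is assumed.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// During bootstrap the apiserver serves a self-signed cert, so an
		// anonymous probe would typically skip certificate verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned "ok"
			}
			// 403 while anonymous access is forbidden, 500 while
			// poststarthooks (e.g. rbac/bootstrap-roles) still fail.
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.121:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}

The same poll-until-healthy pattern is what produces the long runs of repeated healthz output throughout this log.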
	I0416 01:00:07.207951   62139 cache_images.go:92] duration metric: took 983.249797ms to LoadCachedImages
	W0416 01:00:07.286619   62139 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0416 01:00:07.286654   62139 kubeadm.go:928] updating node { 192.168.83.98 8443 v1.20.0 crio true true} ...
	I0416 01:00:07.286815   62139 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-800769 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.98
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-800769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 01:00:07.286916   62139 ssh_runner.go:195] Run: crio config
	I0416 01:00:07.338016   62139 cni.go:84] Creating CNI manager for ""
	I0416 01:00:07.338038   62139 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 01:00:07.338049   62139 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 01:00:07.338072   62139 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.98 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-800769 NodeName:old-k8s-version-800769 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.98"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.98 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0416 01:00:07.338207   62139 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.98
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-800769"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.98
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.98"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
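The generated kubeadm config above is a multi-document YAML bundle (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) written to /var/tmp/minikube/kubeadm.yaml.new before being copied into place. As an aside, a small standard-library-only sketch for listing the documents in such a bundle follows; the path and output format are assumptions for illustration and are not part of minikube.

// Illustrative sketch only: split a generated kubeadm.yaml into its YAML
// documents and report each document's apiVersion and kind.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml") // path taken from the log
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// kubeadm configs are multi-document YAML separated by "---" lines;
	// this naive split is enough for a bundle shaped like the one above.
	for i, doc := range strings.Split(string(data), "\n---\n") {
		kind, apiVersion := "", ""
		sc := bufio.NewScanner(strings.NewReader(doc))
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if strings.HasPrefix(line, "kind:") {
				kind = strings.TrimSpace(strings.TrimPrefix(line, "kind:"))
			}
			if strings.HasPrefix(line, "apiVersion:") {
				apiVersion = strings.TrimSpace(strings.TrimPrefix(line, "apiVersion:"))
			}
		}
		fmt.Printf("document %d: %s (%s)\n", i+1, kind, apiVersion)
	}
}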
	
	I0416 01:00:07.338273   62139 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0416 01:00:07.349347   62139 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 01:00:07.349432   62139 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0416 01:00:07.361389   62139 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0416 01:00:07.379714   62139 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 01:00:07.397953   62139 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0416 01:00:07.416901   62139 ssh_runner.go:195] Run: grep 192.168.83.98	control-plane.minikube.internal$ /etc/hosts
	I0416 01:00:07.420904   62139 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.98	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 01:00:07.436685   62139 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 01:00:07.567945   62139 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 01:00:07.587829   62139 certs.go:68] Setting up /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769 for IP: 192.168.83.98
	I0416 01:00:07.587858   62139 certs.go:194] generating shared ca certs ...
	I0416 01:00:07.587880   62139 certs.go:226] acquiring lock for ca certs: {Name:mkcfa1570e683d94647c63485e1bbb8cf0788316 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:00:07.588087   62139 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key
	I0416 01:00:07.588155   62139 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key
	I0416 01:00:07.588171   62139 certs.go:256] generating profile certs ...
	I0416 01:00:07.606683   62139 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/client.key
	I0416 01:00:07.606823   62139 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/apiserver.key.efc35655
	I0416 01:00:07.606872   62139 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/proxy-client.key
	I0416 01:00:07.607040   62139 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem (1338 bytes)
	W0416 01:00:07.607087   62139 certs.go:480] ignoring /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897_empty.pem, impossibly tiny 0 bytes
	I0416 01:00:07.607114   62139 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem (1679 bytes)
	I0416 01:00:07.607172   62139 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem (1082 bytes)
	I0416 01:00:07.607204   62139 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem (1123 bytes)
	I0416 01:00:07.607234   62139 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem (1675 bytes)
	I0416 01:00:07.607283   62139 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem (1708 bytes)
	I0416 01:00:07.608127   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 01:00:07.658868   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 01:00:07.703378   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 01:00:07.743203   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0416 01:00:07.787335   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0416 01:00:07.823630   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0416 01:00:07.854198   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 01:00:07.881813   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0416 01:00:07.909698   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 01:00:07.935341   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem --> /usr/share/ca-certificates/14897.pem (1338 bytes)
	I0416 01:00:07.963102   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /usr/share/ca-certificates/148972.pem (1708 bytes)
	I0416 01:00:07.989657   62139 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 01:00:08.009203   62139 ssh_runner.go:195] Run: openssl version
	I0416 01:00:08.015677   62139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 01:00:08.027077   62139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:00:08.032096   62139 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:00:08.032179   62139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:00:08.038672   62139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 01:00:08.054256   62139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14897.pem && ln -fs /usr/share/ca-certificates/14897.pem /etc/ssl/certs/14897.pem"
	I0416 01:00:08.065287   62139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14897.pem
	I0416 01:00:08.069846   62139 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 23:49 /usr/share/ca-certificates/14897.pem
	I0416 01:00:08.069907   62139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14897.pem
	I0416 01:00:08.075899   62139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14897.pem /etc/ssl/certs/51391683.0"
	I0416 01:00:08.087272   62139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148972.pem && ln -fs /usr/share/ca-certificates/148972.pem /etc/ssl/certs/148972.pem"
	I0416 01:00:08.098494   62139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148972.pem
	I0416 01:00:08.103168   62139 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 23:49 /usr/share/ca-certificates/148972.pem
	I0416 01:00:08.103246   62139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148972.pem
	I0416 01:00:08.109202   62139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148972.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 01:00:08.120143   62139 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 01:00:08.125027   62139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0416 01:00:08.131716   62139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0416 01:00:08.138024   62139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0416 01:00:08.144291   62139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0416 01:00:08.150741   62139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0416 01:00:08.156931   62139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0416 01:00:08.163147   62139 kubeadm.go:391] StartCluster: {Name:old-k8s-version-800769 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-800769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.98 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 01:00:08.163254   62139 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0416 01:00:08.163298   62139 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 01:00:08.201923   62139 cri.go:89] found id: ""
	I0416 01:00:08.202000   62139 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0416 01:00:08.212441   62139 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0416 01:00:08.212462   62139 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0416 01:00:08.212467   62139 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0416 01:00:08.212514   62139 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0416 01:00:08.222702   62139 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0416 01:00:08.223670   62139 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-800769" does not appear in /home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0416 01:00:08.224332   62139 kubeconfig.go:62] /home/jenkins/minikube-integration/18647-7542/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-800769" cluster setting kubeconfig missing "old-k8s-version-800769" context setting]
	I0416 01:00:08.225340   62139 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/kubeconfig: {Name:mkbb3b028de7d57df8335e83f6dfa1b0eacb2fb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:00:08.343775   62139 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0416 01:00:08.355942   62139 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.83.98
	I0416 01:00:08.355986   62139 kubeadm.go:1154] stopping kube-system containers ...
	I0416 01:00:08.356007   62139 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0416 01:00:08.356081   62139 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 01:00:08.398894   62139 cri.go:89] found id: ""
	I0416 01:00:08.398976   62139 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0416 01:00:08.416343   62139 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 01:00:08.426901   62139 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 01:00:08.426926   62139 kubeadm.go:156] found existing configuration files:
	
	I0416 01:00:08.426981   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 01:00:08.437870   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 01:00:08.437942   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 01:00:08.452256   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 01:00:08.466375   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 01:00:08.466447   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 01:00:08.477246   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 01:00:08.487547   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 01:00:08.487615   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 01:00:08.504171   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 01:00:08.515265   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 01:00:08.515332   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 01:00:08.525186   62139 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 01:00:08.535381   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:08.657456   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:09.504421   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:09.781478   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:09.950913   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:10.044772   62139 api_server.go:52] waiting for apiserver process to appear ...
	I0416 01:00:10.044871   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:10.545002   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:11.045664   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:11.545083   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:12.045593   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:08.967643   61500 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0416 01:00:08.986743   61500 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0416 01:00:09.011229   61500 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 01:00:09.022810   61500 system_pods.go:59] 8 kube-system pods found
	I0416 01:00:09.022858   61500 system_pods.go:61] "coredns-7db6d8ff4d-xxlkb" [b1ec79ef-e16c-4feb-94ec-5dc85645867f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 01:00:09.022869   61500 system_pods.go:61] "etcd-no-preload-572602" [f29f3efe-bee4-4d8c-9d49-68008ad50a9d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0416 01:00:09.022881   61500 system_pods.go:61] "kube-apiserver-no-preload-572602" [dd740f94-bfd5-4043-9522-5b8a932690cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0416 01:00:09.022893   61500 system_pods.go:61] "kube-controller-manager-no-preload-572602" [2778e1a7-a7e3-4ad6-a265-552e78b6b195] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0416 01:00:09.022901   61500 system_pods.go:61] "kube-proxy-v9fmp" [70ab6236-c758-48eb-85a7-8f7721730a20] Running
	I0416 01:00:09.022908   61500 system_pods.go:61] "kube-scheduler-no-preload-572602" [bb8650bb-657e-49f1-9cee-4437879be44d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0416 01:00:09.022919   61500 system_pods.go:61] "metrics-server-569cc877fc-llsfr" [ad421803-6236-44df-a15d-c890a3a10dff] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 01:00:09.022925   61500 system_pods.go:61] "storage-provisioner" [ec2dd6e2-33db-4888-8945-9879821c92fc] Running
	I0416 01:00:09.022934   61500 system_pods.go:74] duration metric: took 11.661356ms to wait for pod list to return data ...
	I0416 01:00:09.022950   61500 node_conditions.go:102] verifying NodePressure condition ...
	I0416 01:00:09.027411   61500 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 01:00:09.027445   61500 node_conditions.go:123] node cpu capacity is 2
	I0416 01:00:09.027459   61500 node_conditions.go:105] duration metric: took 4.503043ms to run NodePressure ...
	I0416 01:00:09.027480   61500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:09.307796   61500 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0416 01:00:09.313534   61500 kubeadm.go:733] kubelet initialised
	I0416 01:00:09.313567   61500 kubeadm.go:734] duration metric: took 5.734401ms waiting for restarted kubelet to initialise ...
	I0416 01:00:09.313580   61500 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:00:09.320900   61500 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-xxlkb" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:09.327569   61500 pod_ready.go:97] node "no-preload-572602" hosting pod "coredns-7db6d8ff4d-xxlkb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-572602" has status "Ready":"False"
	I0416 01:00:09.327606   61500 pod_ready.go:81] duration metric: took 6.67541ms for pod "coredns-7db6d8ff4d-xxlkb" in "kube-system" namespace to be "Ready" ...
	E0416 01:00:09.327621   61500 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-572602" hosting pod "coredns-7db6d8ff4d-xxlkb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-572602" has status "Ready":"False"
	I0416 01:00:09.327633   61500 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:09.333714   61500 pod_ready.go:97] node "no-preload-572602" hosting pod "etcd-no-preload-572602" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-572602" has status "Ready":"False"
	I0416 01:00:09.333746   61500 pod_ready.go:81] duration metric: took 6.094825ms for pod "etcd-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	E0416 01:00:09.333759   61500 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-572602" hosting pod "etcd-no-preload-572602" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-572602" has status "Ready":"False"
	I0416 01:00:09.333768   61500 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:09.338980   61500 pod_ready.go:97] node "no-preload-572602" hosting pod "kube-apiserver-no-preload-572602" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-572602" has status "Ready":"False"
	I0416 01:00:09.339006   61500 pod_ready.go:81] duration metric: took 5.230122ms for pod "kube-apiserver-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	E0416 01:00:09.339017   61500 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-572602" hosting pod "kube-apiserver-no-preload-572602" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-572602" has status "Ready":"False"
	I0416 01:00:09.339033   61500 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:09.415418   61500 pod_ready.go:97] node "no-preload-572602" hosting pod "kube-controller-manager-no-preload-572602" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-572602" has status "Ready":"False"
	I0416 01:00:09.415450   61500 pod_ready.go:81] duration metric: took 76.40508ms for pod "kube-controller-manager-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	E0416 01:00:09.415462   61500 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-572602" hosting pod "kube-controller-manager-no-preload-572602" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-572602" has status "Ready":"False"
	I0416 01:00:09.415470   61500 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-v9fmp" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:09.815907   61500 pod_ready.go:92] pod "kube-proxy-v9fmp" in "kube-system" namespace has status "Ready":"True"
	I0416 01:00:09.815945   61500 pod_ready.go:81] duration metric: took 400.462786ms for pod "kube-proxy-v9fmp" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:09.815959   61500 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:11.824269   61500 pod_ready.go:102] pod "kube-scheduler-no-preload-572602" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:08.434523   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:08.435039   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:08.435067   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:08.434988   63257 retry.go:31] will retry after 2.819136125s: waiting for machine to come up
	I0416 01:00:11.256238   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:11.256704   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:11.256722   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:11.256664   63257 retry.go:31] will retry after 3.074881299s: waiting for machine to come up
	I0416 01:00:12.545696   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:13.045935   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:13.545810   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:14.045682   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:14.545524   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:15.045110   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:15.545792   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:16.045843   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:16.545684   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:17.045401   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:14.322436   61500 pod_ready.go:102] pod "kube-scheduler-no-preload-572602" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:16.821648   61500 pod_ready.go:102] pod "kube-scheduler-no-preload-572602" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:14.335004   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:14.335391   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:14.335437   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:14.335343   63257 retry.go:31] will retry after 4.248377683s: waiting for machine to come up
	I0416 01:00:20.014452   61267 start.go:364] duration metric: took 53.932663013s to acquireMachinesLock for "default-k8s-diff-port-653942"
	I0416 01:00:20.014507   61267 start.go:96] Skipping create...Using existing machine configuration
	I0416 01:00:20.014515   61267 fix.go:54] fixHost starting: 
	I0416 01:00:20.014929   61267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:00:20.014964   61267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:00:20.033099   61267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42949
	I0416 01:00:20.033554   61267 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:00:20.034077   61267 main.go:141] libmachine: Using API Version  1
	I0416 01:00:20.034104   61267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:00:20.034458   61267 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:00:20.034665   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 01:00:20.034812   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetState
	I0416 01:00:20.036559   61267 fix.go:112] recreateIfNeeded on default-k8s-diff-port-653942: state=Stopped err=<nil>
	I0416 01:00:20.036588   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	W0416 01:00:20.036751   61267 fix.go:138] unexpected machine state, will restart: <nil>
	I0416 01:00:20.038774   61267 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-653942" ...
	I0416 01:00:18.588875   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.589320   62747 main.go:141] libmachine: (embed-certs-617092) Found IP for machine: 192.168.61.225
	I0416 01:00:18.589347   62747 main.go:141] libmachine: (embed-certs-617092) Reserving static IP address...
	I0416 01:00:18.589362   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has current primary IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.589699   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "embed-certs-617092", mac: "52:54:00:86:1b:62", ip: "192.168.61.225"} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:18.589728   62747 main.go:141] libmachine: (embed-certs-617092) Reserved static IP address: 192.168.61.225
	I0416 01:00:18.589752   62747 main.go:141] libmachine: (embed-certs-617092) DBG | skip adding static IP to network mk-embed-certs-617092 - found existing host DHCP lease matching {name: "embed-certs-617092", mac: "52:54:00:86:1b:62", ip: "192.168.61.225"}
	I0416 01:00:18.589771   62747 main.go:141] libmachine: (embed-certs-617092) DBG | Getting to WaitForSSH function...
	I0416 01:00:18.589808   62747 main.go:141] libmachine: (embed-certs-617092) Waiting for SSH to be available...
	I0416 01:00:18.591590   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.591858   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:18.591885   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.591995   62747 main.go:141] libmachine: (embed-certs-617092) DBG | Using SSH client type: external
	I0416 01:00:18.592027   62747 main.go:141] libmachine: (embed-certs-617092) DBG | Using SSH private key: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/embed-certs-617092/id_rsa (-rw-------)
	I0416 01:00:18.592058   62747 main.go:141] libmachine: (embed-certs-617092) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.225 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18647-7542/.minikube/machines/embed-certs-617092/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0416 01:00:18.592072   62747 main.go:141] libmachine: (embed-certs-617092) DBG | About to run SSH command:
	I0416 01:00:18.592084   62747 main.go:141] libmachine: (embed-certs-617092) DBG | exit 0
	I0416 01:00:18.717336   62747 main.go:141] libmachine: (embed-certs-617092) DBG | SSH cmd err, output: <nil>: 
	I0416 01:00:18.717759   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetConfigRaw
	I0416 01:00:18.718347   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetIP
	I0416 01:00:18.720640   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.721040   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:18.721086   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.721300   62747 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092/config.json ...
	I0416 01:00:18.721481   62747 machine.go:94] provisionDockerMachine start ...
	I0416 01:00:18.721501   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 01:00:18.721700   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:00:18.723610   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.723924   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:18.723946   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.724126   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:00:18.724345   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:18.724512   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:18.724616   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:00:18.724737   62747 main.go:141] libmachine: Using SSH client type: native
	I0416 01:00:18.725049   62747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.225 22 <nil> <nil>}
	I0416 01:00:18.725199   62747 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 01:00:18.834014   62747 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 01:00:18.834041   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetMachineName
	I0416 01:00:18.834257   62747 buildroot.go:166] provisioning hostname "embed-certs-617092"
	I0416 01:00:18.834280   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetMachineName
	I0416 01:00:18.834495   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:00:18.836959   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.837282   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:18.837333   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.837417   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:00:18.837588   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:18.837755   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:18.837962   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:00:18.838152   62747 main.go:141] libmachine: Using SSH client type: native
	I0416 01:00:18.838324   62747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.225 22 <nil> <nil>}
	I0416 01:00:18.838342   62747 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-617092 && echo "embed-certs-617092" | sudo tee /etc/hostname
	I0416 01:00:18.959828   62747 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-617092
	
	I0416 01:00:18.959865   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:00:18.962661   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.962997   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:18.963029   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.963174   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:00:18.963351   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:18.963488   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:18.963609   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:00:18.963747   62747 main.go:141] libmachine: Using SSH client type: native
	I0416 01:00:18.963949   62747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.225 22 <nil> <nil>}
	I0416 01:00:18.963967   62747 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-617092' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-617092/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-617092' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 01:00:19.079309   62747 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 01:00:19.079341   62747 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18647-7542/.minikube CaCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18647-7542/.minikube}
	I0416 01:00:19.079400   62747 buildroot.go:174] setting up certificates
	I0416 01:00:19.079409   62747 provision.go:84] configureAuth start
	I0416 01:00:19.079423   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetMachineName
	I0416 01:00:19.079723   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetIP
	I0416 01:00:19.082430   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.082809   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:19.082838   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.082994   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:00:19.085476   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.085802   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:19.085825   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.085952   62747 provision.go:143] copyHostCerts
	I0416 01:00:19.086006   62747 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem, removing ...
	I0416 01:00:19.086022   62747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem
	I0416 01:00:19.086077   62747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem (1123 bytes)
	I0416 01:00:19.086165   62747 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem, removing ...
	I0416 01:00:19.086174   62747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem
	I0416 01:00:19.086193   62747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem (1675 bytes)
	I0416 01:00:19.086244   62747 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem, removing ...
	I0416 01:00:19.086251   62747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem
	I0416 01:00:19.086270   62747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem (1082 bytes)
	I0416 01:00:19.086336   62747 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem org=jenkins.embed-certs-617092 san=[127.0.0.1 192.168.61.225 embed-certs-617092 localhost minikube]
	I0416 01:00:19.330622   62747 provision.go:177] copyRemoteCerts
	I0416 01:00:19.330687   62747 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 01:00:19.330712   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:00:19.333264   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.333618   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:19.333645   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.333798   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:00:19.333979   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:19.334122   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:00:19.334235   62747 sshutil.go:53] new ssh client: &{IP:192.168.61.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/embed-certs-617092/id_rsa Username:docker}
	I0416 01:00:19.415820   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0416 01:00:19.442985   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0416 01:00:19.468427   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0416 01:00:19.496640   62747 provision.go:87] duration metric: took 417.215523ms to configureAuth
	I0416 01:00:19.496676   62747 buildroot.go:189] setting minikube options for container-runtime
	I0416 01:00:19.496857   62747 config.go:182] Loaded profile config "embed-certs-617092": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 01:00:19.496929   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:00:19.499561   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.499933   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:19.499981   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.500132   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:00:19.500352   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:19.500529   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:19.500671   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:00:19.500823   62747 main.go:141] libmachine: Using SSH client type: native
	I0416 01:00:19.501026   62747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.225 22 <nil> <nil>}
	I0416 01:00:19.501046   62747 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0416 01:00:19.775400   62747 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0416 01:00:19.775434   62747 machine.go:97] duration metric: took 1.053938445s to provisionDockerMachine
	I0416 01:00:19.775448   62747 start.go:293] postStartSetup for "embed-certs-617092" (driver="kvm2")
	I0416 01:00:19.775462   62747 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 01:00:19.775484   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 01:00:19.775853   62747 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 01:00:19.775886   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:00:19.778961   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.779327   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:19.779356   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.779510   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:00:19.779723   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:19.779883   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:00:19.780008   62747 sshutil.go:53] new ssh client: &{IP:192.168.61.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/embed-certs-617092/id_rsa Username:docker}
	I0416 01:00:19.865236   62747 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 01:00:19.869769   62747 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 01:00:19.869800   62747 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/addons for local assets ...
	I0416 01:00:19.869865   62747 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/files for local assets ...
	I0416 01:00:19.870010   62747 filesync.go:149] local asset: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem -> 148972.pem in /etc/ssl/certs
	I0416 01:00:19.870111   62747 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 01:00:19.880477   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /etc/ssl/certs/148972.pem (1708 bytes)
	I0416 01:00:19.905555   62747 start.go:296] duration metric: took 130.091868ms for postStartSetup
	I0416 01:00:19.905603   62747 fix.go:56] duration metric: took 20.511199999s for fixHost
	I0416 01:00:19.905629   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:00:19.908252   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.908593   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:19.908631   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.908770   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:00:19.908972   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:19.909129   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:19.909284   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:00:19.909448   62747 main.go:141] libmachine: Using SSH client type: native
	I0416 01:00:19.909607   62747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.225 22 <nil> <nil>}
	I0416 01:00:19.909622   62747 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0416 01:00:20.014222   62747 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713229219.981820926
	
	I0416 01:00:20.014251   62747 fix.go:216] guest clock: 1713229219.981820926
	I0416 01:00:20.014262   62747 fix.go:229] Guest: 2024-04-16 01:00:19.981820926 +0000 UTC Remote: 2024-04-16 01:00:19.90560817 +0000 UTC m=+97.152894999 (delta=76.212756ms)
	I0416 01:00:20.014331   62747 fix.go:200] guest clock delta is within tolerance: 76.212756ms
	I0416 01:00:20.014339   62747 start.go:83] releasing machines lock for "embed-certs-617092", held for 20.619971021s
	I0416 01:00:20.014377   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 01:00:20.014676   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetIP
	I0416 01:00:20.017771   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:20.018204   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:20.018236   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:20.018446   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 01:00:20.018991   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 01:00:20.019172   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 01:00:20.019260   62747 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 01:00:20.019299   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:00:20.019439   62747 ssh_runner.go:195] Run: cat /version.json
	I0416 01:00:20.019466   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:00:20.022283   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:20.022554   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:20.022664   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:20.022688   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:20.022897   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:00:20.023088   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:20.023150   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:20.023177   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:20.023281   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:00:20.023431   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:00:20.023431   62747 sshutil.go:53] new ssh client: &{IP:192.168.61.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/embed-certs-617092/id_rsa Username:docker}
	I0416 01:00:20.023791   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:20.023942   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:00:20.024084   62747 sshutil.go:53] new ssh client: &{IP:192.168.61.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/embed-certs-617092/id_rsa Username:docker}
	I0416 01:00:20.138251   62747 ssh_runner.go:195] Run: systemctl --version
	I0416 01:00:20.145100   62747 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0416 01:00:20.299049   62747 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 01:00:20.307080   62747 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 01:00:20.307177   62747 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 01:00:20.326056   62747 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 01:00:20.326085   62747 start.go:494] detecting cgroup driver to use...
	I0416 01:00:20.326166   62747 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 01:00:20.343297   62747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 01:00:20.358136   62747 docker.go:217] disabling cri-docker service (if available) ...
	I0416 01:00:20.358201   62747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 01:00:20.372936   62747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 01:00:20.387473   62747 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 01:00:20.515721   62747 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 01:00:20.680319   62747 docker.go:233] disabling docker service ...
	I0416 01:00:20.680413   62747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 01:00:20.700816   62747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 01:00:20.724097   62747 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 01:00:20.885812   62747 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 01:00:21.037890   62747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0416 01:00:21.055670   62747 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 01:00:21.078466   62747 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0416 01:00:21.078533   62747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:21.090135   62747 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 01:00:21.090200   62747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:21.106122   62747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:21.123844   62747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:21.134923   62747 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 01:00:21.153565   62747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:21.164751   62747 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:21.184880   62747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:21.197711   62747 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 01:00:21.208615   62747 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0416 01:00:21.208669   62747 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0416 01:00:21.223906   62747 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 01:00:21.234873   62747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 01:00:21.405921   62747 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0416 01:00:21.564833   62747 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0416 01:00:21.564918   62747 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0416 01:00:21.570592   62747 start.go:562] Will wait 60s for crictl version
	I0416 01:00:21.570660   62747 ssh_runner.go:195] Run: which crictl
	I0416 01:00:21.575339   62747 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 01:00:21.617252   62747 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0416 01:00:21.617348   62747 ssh_runner.go:195] Run: crio --version
	I0416 01:00:21.648662   62747 ssh_runner.go:195] Run: crio --version
	I0416 01:00:21.683775   62747 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0416 01:00:17.544937   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:18.045282   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:18.545707   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:19.045821   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:19.545868   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:20.045069   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:20.545134   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:21.045607   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:21.545366   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:22.044998   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:20.040137   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .Start
	I0416 01:00:20.040355   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Ensuring networks are active...
	I0416 01:00:20.041103   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Ensuring network default is active
	I0416 01:00:20.041469   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Ensuring network mk-default-k8s-diff-port-653942 is active
	I0416 01:00:20.041869   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Getting domain xml...
	I0416 01:00:20.042474   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Creating domain...
	I0416 01:00:21.359375   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting to get IP...
	I0416 01:00:21.360333   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:21.360736   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:21.360807   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:21.360726   63461 retry.go:31] will retry after 290.970715ms: waiting for machine to come up
	I0416 01:00:21.653420   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:21.653883   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:21.653916   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:21.653841   63461 retry.go:31] will retry after 361.304618ms: waiting for machine to come up
	I0416 01:00:22.016540   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:22.017038   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:22.017071   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:22.016976   63461 retry.go:31] will retry after 411.249327ms: waiting for machine to come up
	I0416 01:00:18.322778   61500 pod_ready.go:92] pod "kube-scheduler-no-preload-572602" in "kube-system" namespace has status "Ready":"True"
	I0416 01:00:18.322799   61500 pod_ready.go:81] duration metric: took 8.506833323s for pod "kube-scheduler-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:18.322808   61500 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:20.328344   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:22.331157   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:21.685033   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetIP
	I0416 01:00:21.688407   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:21.688774   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:21.688809   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:21.689010   62747 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0416 01:00:21.693612   62747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 01:00:21.707524   62747 kubeadm.go:877] updating cluster {Name:embed-certs-617092 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.3 ClusterName:embed-certs-617092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.225 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 01:00:21.707657   62747 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 01:00:21.707699   62747 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 01:00:21.748697   62747 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0416 01:00:21.748785   62747 ssh_runner.go:195] Run: which lz4
	I0416 01:00:21.753521   62747 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0416 01:00:21.758125   62747 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0416 01:00:21.758158   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0416 01:00:22.545403   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:23.045303   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:23.544984   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:24.045882   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:24.545194   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:25.045010   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:25.545278   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:26.045702   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:26.545233   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:27.045814   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:22.429595   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:22.430124   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:22.430159   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:22.430087   63461 retry.go:31] will retry after 495.681984ms: waiting for machine to come up
	I0416 01:00:22.927476   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:22.927932   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:22.927959   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:22.927875   63461 retry.go:31] will retry after 506.264557ms: waiting for machine to come up
	I0416 01:00:23.435290   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:23.435742   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:23.435773   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:23.435689   63461 retry.go:31] will retry after 826.359716ms: waiting for machine to come up
	I0416 01:00:24.263672   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:24.264151   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:24.264183   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:24.264107   63461 retry.go:31] will retry after 873.35176ms: waiting for machine to come up
	I0416 01:00:25.138864   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:25.139318   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:25.139340   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:25.139308   63461 retry.go:31] will retry after 1.129546887s: waiting for machine to come up
	I0416 01:00:26.270364   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:26.270968   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:26.271000   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:26.270902   63461 retry.go:31] will retry after 1.441466368s: waiting for machine to come up
	I0416 01:00:24.830562   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:26.832057   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:23.353811   62747 crio.go:462] duration metric: took 1.600325005s to copy over tarball
	I0416 01:00:23.353885   62747 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0416 01:00:25.815443   62747 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.46152973s)
	I0416 01:00:25.815479   62747 crio.go:469] duration metric: took 2.461639439s to extract the tarball
	I0416 01:00:25.815489   62747 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0416 01:00:25.862653   62747 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 01:00:25.914416   62747 crio.go:514] all images are preloaded for cri-o runtime.
	I0416 01:00:25.914444   62747 cache_images.go:84] Images are preloaded, skipping loading
	I0416 01:00:25.914454   62747 kubeadm.go:928] updating node { 192.168.61.225 8443 v1.29.3 crio true true} ...
	I0416 01:00:25.914586   62747 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-617092 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.225
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:embed-certs-617092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 01:00:25.914680   62747 ssh_runner.go:195] Run: crio config
	I0416 01:00:25.970736   62747 cni.go:84] Creating CNI manager for ""
	I0416 01:00:25.970760   62747 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 01:00:25.970773   62747 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 01:00:25.970796   62747 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.225 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-617092 NodeName:embed-certs-617092 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.225"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.225 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 01:00:25.970949   62747 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.225
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-617092"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.225
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.225"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0416 01:00:25.971022   62747 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0416 01:00:25.985111   62747 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 01:00:25.985198   62747 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0416 01:00:25.996306   62747 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0416 01:00:26.013401   62747 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 01:00:26.030094   62747 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0416 01:00:26.048252   62747 ssh_runner.go:195] Run: grep 192.168.61.225	control-plane.minikube.internal$ /etc/hosts
	I0416 01:00:26.052717   62747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.225	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 01:00:26.069538   62747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 01:00:26.205867   62747 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 01:00:26.224210   62747 certs.go:68] Setting up /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092 for IP: 192.168.61.225
	I0416 01:00:26.224237   62747 certs.go:194] generating shared ca certs ...
	I0416 01:00:26.224259   62747 certs.go:226] acquiring lock for ca certs: {Name:mkcfa1570e683d94647c63485e1bbb8cf0788316 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:00:26.224459   62747 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key
	I0416 01:00:26.224520   62747 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key
	I0416 01:00:26.224532   62747 certs.go:256] generating profile certs ...
	I0416 01:00:26.224646   62747 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092/client.key
	I0416 01:00:26.224723   62747 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092/apiserver.key.383097d4
	I0416 01:00:26.224773   62747 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092/proxy-client.key
	I0416 01:00:26.224932   62747 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem (1338 bytes)
	W0416 01:00:26.224973   62747 certs.go:480] ignoring /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897_empty.pem, impossibly tiny 0 bytes
	I0416 01:00:26.224982   62747 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem (1679 bytes)
	I0416 01:00:26.225014   62747 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem (1082 bytes)
	I0416 01:00:26.225050   62747 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem (1123 bytes)
	I0416 01:00:26.225085   62747 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem (1675 bytes)
	I0416 01:00:26.225126   62747 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem (1708 bytes)
	I0416 01:00:26.225872   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 01:00:26.282272   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 01:00:26.329827   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 01:00:26.366744   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0416 01:00:26.405845   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0416 01:00:26.440535   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0416 01:00:26.465371   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 01:00:26.491633   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0416 01:00:26.518682   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 01:00:26.543992   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem --> /usr/share/ca-certificates/14897.pem (1338 bytes)
	I0416 01:00:26.573728   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /usr/share/ca-certificates/148972.pem (1708 bytes)
	I0416 01:00:26.602308   62747 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 01:00:26.622491   62747 ssh_runner.go:195] Run: openssl version
	I0416 01:00:26.628805   62747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 01:00:26.643163   62747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:00:26.648292   62747 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:00:26.648351   62747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:00:26.654890   62747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 01:00:26.668501   62747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14897.pem && ln -fs /usr/share/ca-certificates/14897.pem /etc/ssl/certs/14897.pem"
	I0416 01:00:26.682038   62747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14897.pem
	I0416 01:00:26.687327   62747 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 23:49 /usr/share/ca-certificates/14897.pem
	I0416 01:00:26.687388   62747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14897.pem
	I0416 01:00:26.693557   62747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14897.pem /etc/ssl/certs/51391683.0"
	I0416 01:00:26.706161   62747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148972.pem && ln -fs /usr/share/ca-certificates/148972.pem /etc/ssl/certs/148972.pem"
	I0416 01:00:26.718432   62747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148972.pem
	I0416 01:00:26.722989   62747 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 23:49 /usr/share/ca-certificates/148972.pem
	I0416 01:00:26.723050   62747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148972.pem
	I0416 01:00:26.729311   62747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148972.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 01:00:26.744138   62747 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 01:00:26.749490   62747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0416 01:00:26.756478   62747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0416 01:00:26.763326   62747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0416 01:00:26.770194   62747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0416 01:00:26.776641   62747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0416 01:00:26.783022   62747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0416 01:00:26.789543   62747 kubeadm.go:391] StartCluster: {Name:embed-certs-617092 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-617092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.225 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 01:00:26.789654   62747 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0416 01:00:26.789717   62747 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 01:00:26.831148   62747 cri.go:89] found id: ""
	I0416 01:00:26.831219   62747 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0416 01:00:26.844372   62747 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0416 01:00:26.844398   62747 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0416 01:00:26.844403   62747 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0416 01:00:26.844454   62747 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0416 01:00:26.858173   62747 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0416 01:00:26.859210   62747 kubeconfig.go:125] found "embed-certs-617092" server: "https://192.168.61.225:8443"
	I0416 01:00:26.861233   62747 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0416 01:00:26.874068   62747 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.225
	I0416 01:00:26.874105   62747 kubeadm.go:1154] stopping kube-system containers ...
	I0416 01:00:26.874119   62747 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0416 01:00:26.874177   62747 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 01:00:26.926456   62747 cri.go:89] found id: ""
	I0416 01:00:26.926537   62747 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0416 01:00:26.945874   62747 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 01:00:26.960207   62747 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 01:00:26.960229   62747 kubeadm.go:156] found existing configuration files:
	
	I0416 01:00:26.960282   62747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 01:00:26.971895   62747 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 01:00:26.971958   62747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 01:00:26.982956   62747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 01:00:26.993935   62747 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 01:00:26.994000   62747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 01:00:27.005216   62747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 01:00:27.015624   62747 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 01:00:27.015680   62747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 01:00:27.026513   62747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 01:00:27.037062   62747 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 01:00:27.037118   62747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 01:00:27.048173   62747 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 01:00:27.061987   62747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:27.190243   62747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:27.545025   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:28.045752   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:28.545833   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:29.045264   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:29.545316   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:30.045594   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:30.545046   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:31.045139   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:31.545251   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:32.045710   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:27.714372   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:27.714822   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:27.714854   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:27.714767   63461 retry.go:31] will retry after 1.810511131s: waiting for machine to come up
	I0416 01:00:29.527497   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:29.528041   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:29.528072   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:29.527983   63461 retry.go:31] will retry after 2.163921338s: waiting for machine to come up
	I0416 01:00:31.694203   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:31.694741   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:31.694769   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:31.694714   63461 retry.go:31] will retry after 2.245150923s: waiting for machine to come up
	I0416 01:00:29.332159   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:31.332218   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:28.252295   62747 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.062013928s)
	I0416 01:00:28.252331   62747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:28.468110   62747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:28.553370   62747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:28.676185   62747 api_server.go:52] waiting for apiserver process to appear ...
	I0416 01:00:28.676273   62747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:29.176826   62747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:29.676498   62747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:29.702138   62747 api_server.go:72] duration metric: took 1.025950998s to wait for apiserver process to appear ...
	I0416 01:00:29.702170   62747 api_server.go:88] waiting for apiserver healthz status ...
	I0416 01:00:29.702192   62747 api_server.go:253] Checking apiserver healthz at https://192.168.61.225:8443/healthz ...
	I0416 01:00:29.702822   62747 api_server.go:269] stopped: https://192.168.61.225:8443/healthz: Get "https://192.168.61.225:8443/healthz": dial tcp 192.168.61.225:8443: connect: connection refused
	I0416 01:00:30.203298   62747 api_server.go:253] Checking apiserver healthz at https://192.168.61.225:8443/healthz ...
	I0416 01:00:32.951714   62747 api_server.go:279] https://192.168.61.225:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0416 01:00:32.951754   62747 api_server.go:103] status: https://192.168.61.225:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0416 01:00:32.951779   62747 api_server.go:253] Checking apiserver healthz at https://192.168.61.225:8443/healthz ...
	I0416 01:00:33.003631   62747 api_server.go:279] https://192.168.61.225:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0416 01:00:33.003672   62747 api_server.go:103] status: https://192.168.61.225:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0416 01:00:33.202825   62747 api_server.go:253] Checking apiserver healthz at https://192.168.61.225:8443/healthz ...
	I0416 01:00:33.208168   62747 api_server.go:279] https://192.168.61.225:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0416 01:00:33.208201   62747 api_server.go:103] status: https://192.168.61.225:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0416 01:00:33.702532   62747 api_server.go:253] Checking apiserver healthz at https://192.168.61.225:8443/healthz ...
	I0416 01:00:33.712501   62747 api_server.go:279] https://192.168.61.225:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0416 01:00:33.712542   62747 api_server.go:103] status: https://192.168.61.225:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0416 01:00:34.203157   62747 api_server.go:253] Checking apiserver healthz at https://192.168.61.225:8443/healthz ...
	I0416 01:00:34.210567   62747 api_server.go:279] https://192.168.61.225:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0416 01:00:34.210597   62747 api_server.go:103] status: https://192.168.61.225:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0416 01:00:34.702568   62747 api_server.go:253] Checking apiserver healthz at https://192.168.61.225:8443/healthz ...
	I0416 01:00:34.711690   62747 api_server.go:279] https://192.168.61.225:8443/healthz returned 200:
	ok
	I0416 01:00:34.723252   62747 api_server.go:141] control plane version: v1.29.3
	I0416 01:00:34.723279   62747 api_server.go:131] duration metric: took 5.021102658s to wait for apiserver health ...
	I0416 01:00:34.723287   62747 cni.go:84] Creating CNI manager for ""
	I0416 01:00:34.723293   62747 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 01:00:34.724989   62747 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0416 01:00:32.545963   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:33.045020   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:33.545657   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:34.045706   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:34.544972   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:35.045252   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:35.545087   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:36.045080   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:36.545787   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:37.045046   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:33.942412   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:33.942923   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:33.942952   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:33.942870   63461 retry.go:31] will retry after 3.750613392s: waiting for machine to come up
	I0416 01:00:33.829307   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:35.830613   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:34.726400   62747 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0416 01:00:34.746294   62747 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0416 01:00:34.767028   62747 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 01:00:34.778610   62747 system_pods.go:59] 8 kube-system pods found
	I0416 01:00:34.778653   62747 system_pods.go:61] "coredns-76f75df574-dxzhk" [a71b29ec-8602-47d6-825c-a1a54a1758d0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 01:00:34.778664   62747 system_pods.go:61] "etcd-embed-certs-617092" [8966501b-6a06-4e0b-acb6-77df5f53cd3d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0416 01:00:34.778674   62747 system_pods.go:61] "kube-apiserver-embed-certs-617092" [7ad29687-3964-4a5b-8939-bcf3dc71d578] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0416 01:00:34.778685   62747 system_pods.go:61] "kube-controller-manager-embed-certs-617092" [78b21361-f302-43f3-8356-ea15fad4edb3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0416 01:00:34.778695   62747 system_pods.go:61] "kube-proxy-xtdf4" [4e8fe1da-9a02-428e-94f1-595f2e9170e0] Running
	I0416 01:00:34.778703   62747 system_pods.go:61] "kube-scheduler-embed-certs-617092" [c03d87b4-26d3-4bff-8f53-8844260f1ed8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0416 01:00:34.778720   62747 system_pods.go:61] "metrics-server-57f55c9bc5-knnvn" [4607d12d-25db-4637-be17-e2665970c0a4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 01:00:34.778729   62747 system_pods.go:61] "storage-provisioner" [41362b6c-fde7-45fa-b6cf-1d7acef3d4ce] Running
	I0416 01:00:34.778741   62747 system_pods.go:74] duration metric: took 11.690083ms to wait for pod list to return data ...
	I0416 01:00:34.778755   62747 node_conditions.go:102] verifying NodePressure condition ...
	I0416 01:00:34.782283   62747 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 01:00:34.782319   62747 node_conditions.go:123] node cpu capacity is 2
	I0416 01:00:34.782329   62747 node_conditions.go:105] duration metric: took 3.566074ms to run NodePressure ...
	I0416 01:00:34.782344   62747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:35.056194   62747 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0416 01:00:35.068546   62747 kubeadm.go:733] kubelet initialised
	I0416 01:00:35.068571   62747 kubeadm.go:734] duration metric: took 12.345347ms waiting for restarted kubelet to initialise ...
	I0416 01:00:35.068581   62747 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:00:35.075013   62747 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-dxzhk" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:37.081976   62747 pod_ready.go:102] pod "coredns-76f75df574-dxzhk" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:37.697323   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:37.697830   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has current primary IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:37.697857   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Found IP for machine: 192.168.50.216
	I0416 01:00:37.697873   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Reserving static IP address...
	I0416 01:00:37.698323   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Reserved static IP address: 192.168.50.216
	I0416 01:00:37.698345   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for SSH to be available...
	I0416 01:00:37.698372   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-653942", mac: "52:54:00:4b:a2:47", ip: "192.168.50.216"} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:37.698418   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | skip adding static IP to network mk-default-k8s-diff-port-653942 - found existing host DHCP lease matching {name: "default-k8s-diff-port-653942", mac: "52:54:00:4b:a2:47", ip: "192.168.50.216"}
	I0416 01:00:37.698450   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | Getting to WaitForSSH function...
	I0416 01:00:37.700942   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:37.701312   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:37.701346   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:37.701520   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | Using SSH client type: external
	I0416 01:00:37.701567   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | Using SSH private key: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/default-k8s-diff-port-653942/id_rsa (-rw-------)
	I0416 01:00:37.701621   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.216 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18647-7542/.minikube/machines/default-k8s-diff-port-653942/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0416 01:00:37.701676   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | About to run SSH command:
	I0416 01:00:37.701712   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | exit 0
	I0416 01:00:37.829860   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | SSH cmd err, output: <nil>: 
	I0416 01:00:37.830254   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetConfigRaw
	I0416 01:00:37.830931   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetIP
	I0416 01:00:37.833361   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:37.833755   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:37.833788   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:37.834026   61267 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/config.json ...
	I0416 01:00:37.834198   61267 machine.go:94] provisionDockerMachine start ...
	I0416 01:00:37.834214   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 01:00:37.834426   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:00:37.836809   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:37.837221   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:37.837251   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:37.837377   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:00:37.837588   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:37.837737   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:37.837869   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:00:37.838023   61267 main.go:141] libmachine: Using SSH client type: native
	I0416 01:00:37.838208   61267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.216 22 <nil> <nil>}
	I0416 01:00:37.838219   61267 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 01:00:37.950999   61267 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 01:00:37.951031   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetMachineName
	I0416 01:00:37.951271   61267 buildroot.go:166] provisioning hostname "default-k8s-diff-port-653942"
	I0416 01:00:37.951303   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetMachineName
	I0416 01:00:37.951483   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:00:37.954395   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:37.954730   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:37.954755   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:37.954949   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:00:37.955165   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:37.955344   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:37.955549   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:00:37.955756   61267 main.go:141] libmachine: Using SSH client type: native
	I0416 01:00:37.955980   61267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.216 22 <nil> <nil>}
	I0416 01:00:37.956001   61267 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-653942 && echo "default-k8s-diff-port-653942" | sudo tee /etc/hostname
	I0416 01:00:38.085650   61267 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-653942
	
	I0416 01:00:38.085682   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:00:38.088689   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.089031   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:38.089060   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.089297   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:00:38.089474   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:38.089623   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:38.089780   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:00:38.089948   61267 main.go:141] libmachine: Using SSH client type: native
	I0416 01:00:38.090127   61267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.216 22 <nil> <nil>}
	I0416 01:00:38.090146   61267 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-653942' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-653942/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-653942' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 01:00:38.214653   61267 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 01:00:38.214734   61267 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18647-7542/.minikube CaCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18647-7542/.minikube}
	I0416 01:00:38.214760   61267 buildroot.go:174] setting up certificates
	I0416 01:00:38.214773   61267 provision.go:84] configureAuth start
	I0416 01:00:38.214785   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetMachineName
	I0416 01:00:38.215043   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetIP
	I0416 01:00:38.217744   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.218145   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:38.218174   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.218336   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:00:38.220861   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.221187   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:38.221216   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.221343   61267 provision.go:143] copyHostCerts
	I0416 01:00:38.221405   61267 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem, removing ...
	I0416 01:00:38.221426   61267 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem
	I0416 01:00:38.221492   61267 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem (1082 bytes)
	I0416 01:00:38.221638   61267 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem, removing ...
	I0416 01:00:38.221649   61267 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem
	I0416 01:00:38.221685   61267 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem (1123 bytes)
	I0416 01:00:38.221777   61267 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem, removing ...
	I0416 01:00:38.221787   61267 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem
	I0416 01:00:38.221815   61267 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem (1675 bytes)
	I0416 01:00:38.221887   61267 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-653942 san=[127.0.0.1 192.168.50.216 default-k8s-diff-port-653942 localhost minikube]
	I0416 01:00:38.266327   61267 provision.go:177] copyRemoteCerts
	I0416 01:00:38.266390   61267 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 01:00:38.266422   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:00:38.269080   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.269546   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:38.269583   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.269901   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:00:38.270115   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:38.270259   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:00:38.270444   61267 sshutil.go:53] new ssh client: &{IP:192.168.50.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/default-k8s-diff-port-653942/id_rsa Username:docker}
	I0416 01:00:38.352861   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0416 01:00:38.380995   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0416 01:00:38.405746   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0416 01:00:38.431467   61267 provision.go:87] duration metric: took 216.680985ms to configureAuth
	I0416 01:00:38.431502   61267 buildroot.go:189] setting minikube options for container-runtime
	I0416 01:00:38.431674   61267 config.go:182] Loaded profile config "default-k8s-diff-port-653942": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 01:00:38.431740   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:00:38.434444   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.434867   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:38.434909   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.435032   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:00:38.435245   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:38.435380   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:38.435568   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:00:38.435744   61267 main.go:141] libmachine: Using SSH client type: native
	I0416 01:00:38.435948   61267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.216 22 <nil> <nil>}
	I0416 01:00:38.435974   61267 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0416 01:00:38.729392   61267 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0416 01:00:38.729421   61267 machine.go:97] duration metric: took 895.211347ms to provisionDockerMachine
	I0416 01:00:38.729432   61267 start.go:293] postStartSetup for "default-k8s-diff-port-653942" (driver="kvm2")
	I0416 01:00:38.729442   61267 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 01:00:38.729463   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 01:00:38.729802   61267 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 01:00:38.729826   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:00:38.732755   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.733135   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:38.733181   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.733326   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:00:38.733490   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:38.733649   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:00:38.733784   61267 sshutil.go:53] new ssh client: &{IP:192.168.50.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/default-k8s-diff-port-653942/id_rsa Username:docker}
	I0416 01:00:38.819006   61267 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 01:00:38.823781   61267 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 01:00:38.823804   61267 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/addons for local assets ...
	I0416 01:00:38.823870   61267 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/files for local assets ...
	I0416 01:00:38.823967   61267 filesync.go:149] local asset: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem -> 148972.pem in /etc/ssl/certs
	I0416 01:00:38.824077   61267 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 01:00:38.833958   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /etc/ssl/certs/148972.pem (1708 bytes)
	I0416 01:00:38.859934   61267 start.go:296] duration metric: took 130.488205ms for postStartSetup
	I0416 01:00:38.859973   61267 fix.go:56] duration metric: took 18.845458863s for fixHost
	I0416 01:00:38.859992   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:00:38.862557   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.862889   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:38.862927   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.863016   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:00:38.863236   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:38.863426   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:38.863609   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:00:38.863786   61267 main.go:141] libmachine: Using SSH client type: native
	I0416 01:00:38.863951   61267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.216 22 <nil> <nil>}
	I0416 01:00:38.863961   61267 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 01:00:38.970405   61267 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713229238.936521840
	
	I0416 01:00:38.970431   61267 fix.go:216] guest clock: 1713229238.936521840
	I0416 01:00:38.970440   61267 fix.go:229] Guest: 2024-04-16 01:00:38.93652184 +0000 UTC Remote: 2024-04-16 01:00:38.859976379 +0000 UTC m=+356.490123424 (delta=76.545461ms)
	I0416 01:00:38.970489   61267 fix.go:200] guest clock delta is within tolerance: 76.545461ms
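The fix.go lines above run `date +%s.%N` on the guest (the %!s(MISSING) noise is just Go's fmt placeholder artifact in the log), compare it against the host clock, and accept the 76.545461ms delta as within tolerance. A minimal Go sketch of that comparison, reusing the values from the log; the parsing helper and the one-second tolerance are assumptions for illustration, not minikube's own code:

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "time"
    )

    // parseEpoch converts "seconds.nanoseconds" output from `date +%s.%N` into a
    // time.Time. float64 parsing loses sub-microsecond precision, which is fine
    // for a sketch like this.
    func parseEpoch(s string) (time.Time, error) {
        f, err := strconv.ParseFloat(s, 64)
        if err != nil {
            return time.Time{}, err
        }
        sec := int64(f)
        nsec := int64((f - float64(sec)) * 1e9)
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, _ := parseEpoch("1713229238.936521840") // guest clock value from the log above
        remote := time.Date(2024, 4, 16, 1, 0, 38, 859976379, time.UTC) // host-side timestamp from the log

        delta := guest.Sub(remote)
        const tolerance = time.Second // assumed threshold, for illustration only
        if math.Abs(float64(delta)) <= float64(tolerance) {
            fmt.Printf("guest clock delta %v is within tolerance\n", delta)
        } else {
            fmt.Printf("guest clock delta %v exceeds tolerance, would resync the guest clock\n", delta)
        }
    }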
	I0416 01:00:38.970496   61267 start.go:83] releasing machines lock for "default-k8s-diff-port-653942", held for 18.956013216s
	I0416 01:00:38.970522   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 01:00:38.970806   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetIP
	I0416 01:00:38.973132   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.973440   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:38.973455   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.973646   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 01:00:38.974142   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 01:00:38.974332   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 01:00:38.974388   61267 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 01:00:38.974432   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:00:38.974532   61267 ssh_runner.go:195] Run: cat /version.json
	I0416 01:00:38.974556   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:00:38.977284   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.977459   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.977624   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:38.977653   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.977746   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:38.977774   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.977800   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:00:38.978002   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:00:38.978017   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:38.978163   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:38.978169   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:00:38.978296   61267 sshutil.go:53] new ssh client: &{IP:192.168.50.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/default-k8s-diff-port-653942/id_rsa Username:docker}
	I0416 01:00:38.978314   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:00:38.978440   61267 sshutil.go:53] new ssh client: &{IP:192.168.50.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/default-k8s-diff-port-653942/id_rsa Username:docker}
	I0416 01:00:39.090827   61267 ssh_runner.go:195] Run: systemctl --version
	I0416 01:00:39.097716   61267 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0416 01:00:39.249324   61267 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 01:00:39.256333   61267 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 01:00:39.256402   61267 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 01:00:39.272367   61267 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 01:00:39.272395   61267 start.go:494] detecting cgroup driver to use...
	I0416 01:00:39.272446   61267 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 01:00:39.291713   61267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 01:00:39.305645   61267 docker.go:217] disabling cri-docker service (if available) ...
	I0416 01:00:39.305708   61267 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 01:00:39.320731   61267 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 01:00:39.336917   61267 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 01:00:39.450840   61267 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 01:00:39.596905   61267 docker.go:233] disabling docker service ...
	I0416 01:00:39.596972   61267 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 01:00:39.612926   61267 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 01:00:39.627583   61267 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 01:00:39.778135   61267 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 01:00:39.900216   61267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0416 01:00:39.914697   61267 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 01:00:39.935875   61267 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0416 01:00:39.935930   61267 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:39.946510   61267 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 01:00:39.946569   61267 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:39.956794   61267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:39.966968   61267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:39.977207   61267 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 01:00:39.988817   61267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:40.001088   61267 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:40.018950   61267 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
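The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place to pin the pause image, switch the cgroup manager to cgroupfs, set conmon_cgroup, and add the unprivileged-port sysctl. A minimal Go sketch of the same kind of in-place regex rewrite, covering just the first two edits; this is illustrative only, not minikube's crio.go:

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        const conf = "/etc/crio/crio.conf.d/02-crio.conf" // path taken from the log above
        data, err := os.ReadFile(conf)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
        out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
        // Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
        out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
        if err := os.WriteFile(conf, out, 0o644); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }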
	I0416 01:00:40.030395   61267 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 01:00:40.039956   61267 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0416 01:00:40.040013   61267 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0416 01:00:40.053877   61267 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 01:00:40.065292   61267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 01:00:40.221527   61267 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0416 01:00:40.382800   61267 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0416 01:00:40.382880   61267 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0416 01:00:40.387842   61267 start.go:562] Will wait 60s for crictl version
	I0416 01:00:40.387897   61267 ssh_runner.go:195] Run: which crictl
	I0416 01:00:40.393774   61267 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 01:00:40.435784   61267 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0416 01:00:40.435864   61267 ssh_runner.go:195] Run: crio --version
	I0416 01:00:40.468702   61267 ssh_runner.go:195] Run: crio --version
	I0416 01:00:40.501355   61267 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
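After restarting CRI-O, the log waits up to 60s for /var/run/crio/crio.sock to appear and then shells out to crictl for the runtime version. A minimal Go sketch of that wait-then-query pattern; the polling interval and helper name are assumptions, not minikube's start.go:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "time"
    )

    // waitForSocket polls until the socket path exists or the timeout elapses.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil // socket file is present
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
    }

    func main() {
        const sock = "/var/run/crio/crio.sock" // socket path from the log above
        if err := waitForSocket(sock, 60*time.Second); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // Same query the log runs once the socket is up.
        out, err := exec.Command("sudo", "/usr/bin/crictl", "version").CombinedOutput()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Print(string(out))
    }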
	I0416 01:00:37.545192   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:38.045346   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:38.545599   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:39.045109   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:39.545360   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:40.045058   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:40.545745   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:41.045943   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:41.545900   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:42.045807   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:40.502716   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetIP
	I0416 01:00:40.505958   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:40.506353   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:40.506384   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:40.506597   61267 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0416 01:00:40.511238   61267 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 01:00:40.525378   61267 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-653942 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-653942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.216 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 01:00:40.525519   61267 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 01:00:40.525586   61267 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 01:00:40.570378   61267 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0416 01:00:40.570451   61267 ssh_runner.go:195] Run: which lz4
	I0416 01:00:40.575413   61267 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0416 01:00:40.580583   61267 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0416 01:00:40.580640   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0416 01:00:42.194745   61267 crio.go:462] duration metric: took 1.619375861s to copy over tarball
	I0416 01:00:42.194821   61267 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0416 01:00:37.830710   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:39.831822   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:42.330821   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:39.086761   62747 pod_ready.go:102] pod "coredns-76f75df574-dxzhk" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:40.082847   62747 pod_ready.go:92] pod "coredns-76f75df574-dxzhk" in "kube-system" namespace has status "Ready":"True"
	I0416 01:00:40.082868   62747 pod_ready.go:81] duration metric: took 5.007825454s for pod "coredns-76f75df574-dxzhk" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:40.082877   62747 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:42.092402   62747 pod_ready.go:92] pod "etcd-embed-certs-617092" in "kube-system" namespace has status "Ready":"True"
	I0416 01:00:42.092425   62747 pod_ready.go:81] duration metric: took 2.009541778s for pod "etcd-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:42.092438   62747 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:42.545278   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:43.045894   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:43.545886   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:44.044964   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:44.544997   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:45.045340   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:45.545257   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:46.045108   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:46.544994   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:47.045987   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:44.671272   61267 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.476407392s)
	I0416 01:00:44.671304   61267 crio.go:469] duration metric: took 2.476532286s to extract the tarball
	I0416 01:00:44.671315   61267 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0416 01:00:44.709451   61267 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 01:00:44.754382   61267 crio.go:514] all images are preloaded for cri-o runtime.
	I0416 01:00:44.754412   61267 cache_images.go:84] Images are preloaded, skipping loading
	I0416 01:00:44.754424   61267 kubeadm.go:928] updating node { 192.168.50.216 8444 v1.29.3 crio true true} ...
	I0416 01:00:44.754543   61267 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-653942 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.216
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-653942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 01:00:44.754613   61267 ssh_runner.go:195] Run: crio config
	I0416 01:00:44.806896   61267 cni.go:84] Creating CNI manager for ""
	I0416 01:00:44.806918   61267 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 01:00:44.806926   61267 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 01:00:44.806957   61267 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.216 APIServerPort:8444 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-653942 NodeName:default-k8s-diff-port-653942 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.216"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.216 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 01:00:44.807089   61267 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.216
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-653942"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.216
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.216"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0416 01:00:44.807144   61267 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0416 01:00:44.821347   61267 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 01:00:44.821425   61267 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0416 01:00:44.835415   61267 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0416 01:00:44.855797   61267 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 01:00:44.873694   61267 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0416 01:00:44.892535   61267 ssh_runner.go:195] Run: grep 192.168.50.216	control-plane.minikube.internal$ /etc/hosts
	I0416 01:00:44.896538   61267 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.216	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 01:00:44.909516   61267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 01:00:45.024588   61267 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 01:00:45.055414   61267 certs.go:68] Setting up /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942 for IP: 192.168.50.216
	I0416 01:00:45.055440   61267 certs.go:194] generating shared ca certs ...
	I0416 01:00:45.055460   61267 certs.go:226] acquiring lock for ca certs: {Name:mkcfa1570e683d94647c63485e1bbb8cf0788316 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:00:45.055622   61267 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key
	I0416 01:00:45.055680   61267 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key
	I0416 01:00:45.055695   61267 certs.go:256] generating profile certs ...
	I0416 01:00:45.055815   61267 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/client.key
	I0416 01:00:45.055905   61267 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/apiserver.key.6620f6bf
	I0416 01:00:45.055975   61267 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/proxy-client.key
	I0416 01:00:45.056139   61267 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem (1338 bytes)
	W0416 01:00:45.056185   61267 certs.go:480] ignoring /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897_empty.pem, impossibly tiny 0 bytes
	I0416 01:00:45.056195   61267 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem (1679 bytes)
	I0416 01:00:45.056234   61267 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem (1082 bytes)
	I0416 01:00:45.056268   61267 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem (1123 bytes)
	I0416 01:00:45.056295   61267 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem (1675 bytes)
	I0416 01:00:45.056355   61267 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem (1708 bytes)
	I0416 01:00:45.057033   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 01:00:45.091704   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 01:00:45.154257   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 01:00:45.181077   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0416 01:00:45.222401   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0416 01:00:45.248568   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0416 01:00:45.277927   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 01:00:45.310417   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0416 01:00:45.341109   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 01:00:45.367056   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem --> /usr/share/ca-certificates/14897.pem (1338 bytes)
	I0416 01:00:45.395117   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /usr/share/ca-certificates/148972.pem (1708 bytes)
	I0416 01:00:45.421921   61267 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 01:00:45.440978   61267 ssh_runner.go:195] Run: openssl version
	I0416 01:00:45.447132   61267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148972.pem && ln -fs /usr/share/ca-certificates/148972.pem /etc/ssl/certs/148972.pem"
	I0416 01:00:45.460008   61267 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148972.pem
	I0416 01:00:45.464820   61267 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 23:49 /usr/share/ca-certificates/148972.pem
	I0416 01:00:45.464884   61267 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148972.pem
	I0416 01:00:45.471232   61267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148972.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 01:00:45.482567   61267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 01:00:45.493541   61267 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:00:45.498792   61267 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:00:45.498849   61267 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:00:45.505511   61267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 01:00:45.517533   61267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14897.pem && ln -fs /usr/share/ca-certificates/14897.pem /etc/ssl/certs/14897.pem"
	I0416 01:00:45.529908   61267 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14897.pem
	I0416 01:00:45.535120   61267 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 23:49 /usr/share/ca-certificates/14897.pem
	I0416 01:00:45.535181   61267 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14897.pem
	I0416 01:00:45.541232   61267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14897.pem /etc/ssl/certs/51391683.0"
	I0416 01:00:45.552946   61267 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 01:00:45.559947   61267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0416 01:00:45.567567   61267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0416 01:00:45.575204   61267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0416 01:00:45.582057   61267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0416 01:00:45.588418   61267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0416 01:00:45.595517   61267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
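Each `openssl x509 -noout -checkend 86400` call above simply asks whether the certificate will still be valid 24 hours from now. A minimal Go sketch of the same check against one of the certs named in the log; the helper name is an assumption, not minikube's certs.go:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // validFor reports whether the PEM certificate at certPath is still valid
    // at least d from now, mirroring `openssl x509 -checkend`.
    func validFor(certPath string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(certPath)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", certPath)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
        // Path taken from the log above; the other checked certs work the same way.
        ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("valid for at least 24h:", ok)
    }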
	I0416 01:00:45.602108   61267 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-653942 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-653942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.216 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 01:00:45.602213   61267 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0416 01:00:45.602256   61267 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 01:00:45.639538   61267 cri.go:89] found id: ""
	I0416 01:00:45.639621   61267 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0416 01:00:45.651216   61267 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0416 01:00:45.651245   61267 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0416 01:00:45.651252   61267 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0416 01:00:45.651307   61267 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0416 01:00:45.662522   61267 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0416 01:00:45.663697   61267 kubeconfig.go:125] found "default-k8s-diff-port-653942" server: "https://192.168.50.216:8444"
	I0416 01:00:45.666034   61267 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0416 01:00:45.675864   61267 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.216
	I0416 01:00:45.675900   61267 kubeadm.go:1154] stopping kube-system containers ...
	I0416 01:00:45.675927   61267 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0416 01:00:45.675992   61267 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 01:00:45.718679   61267 cri.go:89] found id: ""
	I0416 01:00:45.718744   61267 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0416 01:00:45.737326   61267 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 01:00:45.748122   61267 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 01:00:45.748146   61267 kubeadm.go:156] found existing configuration files:
	
	I0416 01:00:45.748200   61267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0416 01:00:45.758556   61267 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 01:00:45.758618   61267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 01:00:45.769601   61267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0416 01:00:45.779361   61267 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 01:00:45.779424   61267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 01:00:45.789283   61267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0416 01:00:45.798712   61267 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 01:00:45.798805   61267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 01:00:45.808489   61267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0416 01:00:45.817400   61267 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 01:00:45.817469   61267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 01:00:45.827902   61267 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 01:00:45.838031   61267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:45.962948   61267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:46.862340   61267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:47.092144   61267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:47.170078   61267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:47.284634   61267 api_server.go:52] waiting for apiserver process to appear ...
	I0416 01:00:47.284719   61267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:44.830534   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:47.474148   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:44.100441   62747 pod_ready.go:102] pod "kube-apiserver-embed-certs-617092" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:47.472666   62747 pod_ready.go:102] pod "kube-apiserver-embed-certs-617092" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:47.599694   62747 pod_ready.go:92] pod "kube-apiserver-embed-certs-617092" in "kube-system" namespace has status "Ready":"True"
	I0416 01:00:47.599722   62747 pod_ready.go:81] duration metric: took 5.507276982s for pod "kube-apiserver-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:47.599734   62747 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:47.604479   62747 pod_ready.go:92] pod "kube-controller-manager-embed-certs-617092" in "kube-system" namespace has status "Ready":"True"
	I0416 01:00:47.604496   62747 pod_ready.go:81] duration metric: took 4.755735ms for pod "kube-controller-manager-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:47.604504   62747 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-xtdf4" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:47.608936   62747 pod_ready.go:92] pod "kube-proxy-xtdf4" in "kube-system" namespace has status "Ready":"True"
	I0416 01:00:47.608951   62747 pod_ready.go:81] duration metric: took 4.441482ms for pod "kube-proxy-xtdf4" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:47.608959   62747 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:47.613108   62747 pod_ready.go:92] pod "kube-scheduler-embed-certs-617092" in "kube-system" namespace has status "Ready":"True"
	I0416 01:00:47.613123   62747 pod_ready.go:81] duration metric: took 4.157722ms for pod "kube-scheduler-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:47.613130   62747 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace to be "Ready" ...
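The pod_ready.go lines interleaved through this log poll each kube-system pod until its PodReady condition reports True, giving up after the 4m0s budget shown above. A minimal client-go sketch of that polling loop; the kubeconfig path is a placeholder and this is not minikube's own helper:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's PodReady condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed kubeconfig path
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(4 * time.Minute) // same budget the log uses
        for time.Now().Before(deadline) {
            pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(),
                "kube-scheduler-embed-certs-617092", metav1.GetOptions{}) // pod name from the log above
            if err == nil && podReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod to be Ready")
    }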
	I0416 01:00:47.545567   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:48.045898   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:48.545631   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:49.045678   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:49.545274   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:50.045281   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:50.545926   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:51.045076   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:51.545303   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:52.045271   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:47.785698   61267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:48.284828   61267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:48.315894   61267 api_server.go:72] duration metric: took 1.031258915s to wait for apiserver process to appear ...
	I0416 01:00:48.315925   61267 api_server.go:88] waiting for apiserver healthz status ...
	I0416 01:00:48.315950   61267 api_server.go:253] Checking apiserver healthz at https://192.168.50.216:8444/healthz ...
	I0416 01:00:51.781922   61267 api_server.go:279] https://192.168.50.216:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0416 01:00:51.781957   61267 api_server.go:103] status: https://192.168.50.216:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0416 01:00:51.781976   61267 api_server.go:253] Checking apiserver healthz at https://192.168.50.216:8444/healthz ...
	I0416 01:00:51.830460   61267 api_server.go:279] https://192.168.50.216:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0416 01:00:51.830491   61267 api_server.go:103] status: https://192.168.50.216:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0416 01:00:51.830505   61267 api_server.go:253] Checking apiserver healthz at https://192.168.50.216:8444/healthz ...
	I0416 01:00:51.858205   61267 api_server.go:279] https://192.168.50.216:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0416 01:00:51.858240   61267 api_server.go:103] status: https://192.168.50.216:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0416 01:00:52.316376   61267 api_server.go:253] Checking apiserver healthz at https://192.168.50.216:8444/healthz ...
	I0416 01:00:52.332667   61267 api_server.go:279] https://192.168.50.216:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0416 01:00:52.332700   61267 api_server.go:103] status: https://192.168.50.216:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0416 01:00:49.829236   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:52.329805   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:49.620626   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:51.620730   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:52.816565   61267 api_server.go:253] Checking apiserver healthz at https://192.168.50.216:8444/healthz ...
	I0416 01:00:52.827158   61267 api_server.go:279] https://192.168.50.216:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0416 01:00:52.827191   61267 api_server.go:103] status: https://192.168.50.216:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0416 01:00:53.316864   61267 api_server.go:253] Checking apiserver healthz at https://192.168.50.216:8444/healthz ...
	I0416 01:00:53.321112   61267 api_server.go:279] https://192.168.50.216:8444/healthz returned 200:
	ok
	I0416 01:00:53.329289   61267 api_server.go:141] control plane version: v1.29.3
	I0416 01:00:53.329320   61267 api_server.go:131] duration metric: took 5.013387579s to wait for apiserver health ...
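	The 403 -> 500 -> 200 progression above is the normal startup sequence for a restarted apiserver: /healthz first returns 403 because the anonymous probe is rejected until the rbac/bootstrap-roles post-start hook has installed the default RBAC policy, then 500 while individual post-start hooks are still pending, and finally 200 ("ok") once every check passes. A minimal manual reproduction of the same probe, assuming anonymous authentication is enabled on this apiserver (which the system:anonymous 403 above implies), is:

	    curl -sk https://192.168.50.216:8444/healthz             # 403/500 during startup, then "ok"
	    curl -sk "https://192.168.50.216:8444/healthz?verbose"   # per-check [+]/[-] breakdown like the output above

	Here -k skips TLS verification and -s suppresses progress output; both are standard curl flags.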
	I0416 01:00:53.329331   61267 cni.go:84] Creating CNI manager for ""
	I0416 01:00:53.329340   61267 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 01:00:53.331125   61267 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0416 01:00:52.545407   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:53.044961   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:53.545290   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:54.044994   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:54.545292   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:55.045285   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:55.545909   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:56.045029   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:56.545343   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:57.044988   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:53.332626   61267 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0416 01:00:53.366364   61267 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
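	The 496-byte conflist pushed above is not echoed in the log. For orientation only, a bridge CNI configuration of the general shape minikube writes for a CRI-O cluster looks like the sketch below; the plugin list, subnet, and file contents are illustrative assumptions, not the actual bytes that were copied:

	    sudo mkdir -p /etc/cni/net.d
	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF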
	I0416 01:00:53.401881   61267 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 01:00:53.413478   61267 system_pods.go:59] 8 kube-system pods found
	I0416 01:00:53.413512   61267 system_pods.go:61] "coredns-76f75df574-cvlpq" [c200d470-26dd-40ea-a79b-29d9104122bb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 01:00:53.413527   61267 system_pods.go:61] "etcd-default-k8s-diff-port-653942" [24e85fc2-fb57-4ef6-9817-846207109e61] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0416 01:00:53.413537   61267 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-653942" [bd473e94-72a6-4391-b787-49e16e8a213f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0416 01:00:53.413547   61267 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-653942" [31ed7183-a12b-422c-9e67-bba91147347a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0416 01:00:53.413555   61267 system_pods.go:61] "kube-proxy-6q9k7" [ba6d9cf9-37a5-4e01-9489-ce7395fd2a38] Running
	I0416 01:00:53.413563   61267 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-653942" [4b481275-4ded-4251-963f-910954f10d15] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0416 01:00:53.413579   61267 system_pods.go:61] "metrics-server-57f55c9bc5-9cnv2" [24905ded-5bf8-4b34-8069-2e65c5ad8f8d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 01:00:53.413592   61267 system_pods.go:61] "storage-provisioner" [16ba28d0-2031-4c21-9c22-1b9289517449] Running
	I0416 01:00:53.413601   61267 system_pods.go:74] duration metric: took 11.695334ms to wait for pod list to return data ...
	I0416 01:00:53.413613   61267 node_conditions.go:102] verifying NodePressure condition ...
	I0416 01:00:53.417579   61267 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 01:00:53.417609   61267 node_conditions.go:123] node cpu capacity is 2
	I0416 01:00:53.417623   61267 node_conditions.go:105] duration metric: took 4.002735ms to run NodePressure ...
	I0416 01:00:53.417642   61267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:53.688389   61267 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0416 01:00:53.692755   61267 kubeadm.go:733] kubelet initialised
	I0416 01:00:53.692777   61267 kubeadm.go:734] duration metric: took 4.359298ms waiting for restarted kubelet to initialise ...
	I0416 01:00:53.692784   61267 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:00:53.698521   61267 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-cvlpq" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:53.704496   61267 pod_ready.go:97] node "default-k8s-diff-port-653942" hosting pod "coredns-76f75df574-cvlpq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653942" has status "Ready":"False"
	I0416 01:00:53.704532   61267 pod_ready.go:81] duration metric: took 5.98382ms for pod "coredns-76f75df574-cvlpq" in "kube-system" namespace to be "Ready" ...
	E0416 01:00:53.704543   61267 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-653942" hosting pod "coredns-76f75df574-cvlpq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653942" has status "Ready":"False"
	I0416 01:00:53.704550   61267 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:53.713110   61267 pod_ready.go:97] node "default-k8s-diff-port-653942" hosting pod "etcd-default-k8s-diff-port-653942" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653942" has status "Ready":"False"
	I0416 01:00:53.713144   61267 pod_ready.go:81] duration metric: took 8.58568ms for pod "etcd-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	E0416 01:00:53.713188   61267 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-653942" hosting pod "etcd-default-k8s-diff-port-653942" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653942" has status "Ready":"False"
	I0416 01:00:53.713201   61267 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:53.718190   61267 pod_ready.go:97] node "default-k8s-diff-port-653942" hosting pod "kube-apiserver-default-k8s-diff-port-653942" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653942" has status "Ready":"False"
	I0416 01:00:53.718210   61267 pod_ready.go:81] duration metric: took 4.997527ms for pod "kube-apiserver-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	E0416 01:00:53.718219   61267 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-653942" hosting pod "kube-apiserver-default-k8s-diff-port-653942" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653942" has status "Ready":"False"
	I0416 01:00:53.718224   61267 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:53.805697   61267 pod_ready.go:97] node "default-k8s-diff-port-653942" hosting pod "kube-controller-manager-default-k8s-diff-port-653942" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653942" has status "Ready":"False"
	I0416 01:00:53.805727   61267 pod_ready.go:81] duration metric: took 87.493805ms for pod "kube-controller-manager-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	E0416 01:00:53.805738   61267 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-653942" hosting pod "kube-controller-manager-default-k8s-diff-port-653942" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653942" has status "Ready":"False"
	I0416 01:00:53.805743   61267 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6q9k7" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:54.205884   61267 pod_ready.go:92] pod "kube-proxy-6q9k7" in "kube-system" namespace has status "Ready":"True"
	I0416 01:00:54.205911   61267 pod_ready.go:81] duration metric: took 400.161115ms for pod "kube-proxy-6q9k7" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:54.205921   61267 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:56.213276   61267 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:54.829391   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:57.330218   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:54.119995   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:56.121220   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:57.545333   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:58.045305   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:58.545871   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:59.045432   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:59.545000   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:00.045001   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:00.545855   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:01.045812   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:01.545477   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:02.045635   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:58.215064   61267 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:00.215192   61267 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:59.330599   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:01.831017   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:58.620594   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:01.120516   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:02.545690   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:03.045754   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:03.544965   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:04.045062   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:04.545196   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:05.045986   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:05.545246   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:06.045853   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:06.545863   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:07.045209   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:02.712971   61267 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:04.713437   61267 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:07.212886   61267 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:04.328673   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:06.329726   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:03.124343   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:05.619912   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:07.622044   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:07.544952   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:08.045290   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:08.545296   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:09.045795   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:09.545932   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:10.045124   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:10.045209   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:10.087200   62139 cri.go:89] found id: ""
	I0416 01:01:10.087229   62139 logs.go:276] 0 containers: []
	W0416 01:01:10.087237   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:10.087243   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:10.087300   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:10.126194   62139 cri.go:89] found id: ""
	I0416 01:01:10.126218   62139 logs.go:276] 0 containers: []
	W0416 01:01:10.126225   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:10.126230   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:10.126275   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:10.165238   62139 cri.go:89] found id: ""
	I0416 01:01:10.165271   62139 logs.go:276] 0 containers: []
	W0416 01:01:10.165282   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:10.165290   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:10.165357   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:10.202896   62139 cri.go:89] found id: ""
	I0416 01:01:10.202934   62139 logs.go:276] 0 containers: []
	W0416 01:01:10.202945   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:10.202952   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:10.203015   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:10.243576   62139 cri.go:89] found id: ""
	I0416 01:01:10.243605   62139 logs.go:276] 0 containers: []
	W0416 01:01:10.243613   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:10.243619   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:10.243667   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:10.278637   62139 cri.go:89] found id: ""
	I0416 01:01:10.278661   62139 logs.go:276] 0 containers: []
	W0416 01:01:10.278669   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:10.278674   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:10.278726   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:10.316811   62139 cri.go:89] found id: ""
	I0416 01:01:10.316844   62139 logs.go:276] 0 containers: []
	W0416 01:01:10.316852   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:10.316857   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:10.316914   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:10.359934   62139 cri.go:89] found id: ""
	I0416 01:01:10.359960   62139 logs.go:276] 0 containers: []
	W0416 01:01:10.359967   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:10.359975   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:10.359987   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:10.413082   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:10.413119   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:10.428605   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:10.428632   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:10.552536   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:10.552561   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:10.552578   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:10.615054   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:10.615091   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
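	Each failed "describe nodes" attempt in this block follows directly from the crictl listings above it: no kube-apiserver container exists yet, so nothing is listening on the apiserver port and kubectl run from inside the node gets "connection refused" on localhost:8443. Two quick checks that make that causal chain visible, assuming the same in-guest paths that appear in the log, are:

	    sudo crictl ps -a --quiet --name=kube-apiserver      # empty output: the apiserver container has not started
	    sudo grep 'server:' /var/lib/minikube/kubeconfig     # the endpoint kubectl is trying, e.g. https://localhost:8443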
	I0416 01:01:08.213557   61267 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"True"
	I0416 01:01:08.213584   61267 pod_ready.go:81] duration metric: took 14.007657025s for pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:01:08.213594   61267 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace to be "Ready" ...
	I0416 01:01:10.224984   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:08.831515   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:11.330529   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:10.122213   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:12.621939   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:13.160749   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:13.178449   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:13.178505   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:13.224192   62139 cri.go:89] found id: ""
	I0416 01:01:13.224215   62139 logs.go:276] 0 containers: []
	W0416 01:01:13.224222   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:13.224228   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:13.224287   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:13.261441   62139 cri.go:89] found id: ""
	I0416 01:01:13.261469   62139 logs.go:276] 0 containers: []
	W0416 01:01:13.261476   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:13.261481   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:13.261545   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:13.296602   62139 cri.go:89] found id: ""
	I0416 01:01:13.296636   62139 logs.go:276] 0 containers: []
	W0416 01:01:13.296647   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:13.296654   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:13.296720   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:13.333944   62139 cri.go:89] found id: ""
	I0416 01:01:13.333968   62139 logs.go:276] 0 containers: []
	W0416 01:01:13.333977   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:13.333984   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:13.334049   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:13.372919   62139 cri.go:89] found id: ""
	I0416 01:01:13.372944   62139 logs.go:276] 0 containers: []
	W0416 01:01:13.372957   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:13.372965   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:13.373022   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:13.413257   62139 cri.go:89] found id: ""
	I0416 01:01:13.413287   62139 logs.go:276] 0 containers: []
	W0416 01:01:13.413299   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:13.413306   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:13.413373   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:13.451705   62139 cri.go:89] found id: ""
	I0416 01:01:13.451737   62139 logs.go:276] 0 containers: []
	W0416 01:01:13.451748   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:13.451755   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:13.451836   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:13.492549   62139 cri.go:89] found id: ""
	I0416 01:01:13.492576   62139 logs.go:276] 0 containers: []
	W0416 01:01:13.492586   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:13.492597   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:13.492613   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:13.547267   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:13.547303   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:13.568975   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:13.569002   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:13.674444   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:13.674469   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:13.674482   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:13.745111   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:13.745145   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:16.286955   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:16.301151   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:16.301257   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:16.337516   62139 cri.go:89] found id: ""
	I0416 01:01:16.337544   62139 logs.go:276] 0 containers: []
	W0416 01:01:16.337554   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:16.337561   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:16.337623   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:16.372674   62139 cri.go:89] found id: ""
	I0416 01:01:16.372702   62139 logs.go:276] 0 containers: []
	W0416 01:01:16.372712   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:16.372720   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:16.372783   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:16.411181   62139 cri.go:89] found id: ""
	I0416 01:01:16.411208   62139 logs.go:276] 0 containers: []
	W0416 01:01:16.411224   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:16.411230   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:16.411283   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:16.449063   62139 cri.go:89] found id: ""
	I0416 01:01:16.449102   62139 logs.go:276] 0 containers: []
	W0416 01:01:16.449109   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:16.449114   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:16.449183   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:16.491877   62139 cri.go:89] found id: ""
	I0416 01:01:16.491909   62139 logs.go:276] 0 containers: []
	W0416 01:01:16.491918   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:16.491924   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:16.491981   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:16.532522   62139 cri.go:89] found id: ""
	I0416 01:01:16.532553   62139 logs.go:276] 0 containers: []
	W0416 01:01:16.532564   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:16.532572   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:16.532633   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:16.572194   62139 cri.go:89] found id: ""
	I0416 01:01:16.572222   62139 logs.go:276] 0 containers: []
	W0416 01:01:16.572233   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:16.572240   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:16.572302   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:16.614671   62139 cri.go:89] found id: ""
	I0416 01:01:16.614697   62139 logs.go:276] 0 containers: []
	W0416 01:01:16.614704   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:16.614712   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:16.614726   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:16.632146   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:16.632179   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:16.707597   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:16.707621   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:16.707633   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:16.783604   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:16.783640   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:16.828937   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:16.828977   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:12.721088   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:15.220256   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:17.222263   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:13.830983   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:16.329120   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:15.119386   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:17.120038   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:19.385008   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:19.400949   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:19.401035   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:19.463792   62139 cri.go:89] found id: ""
	I0416 01:01:19.463825   62139 logs.go:276] 0 containers: []
	W0416 01:01:19.463836   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:19.463843   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:19.463910   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:19.523289   62139 cri.go:89] found id: ""
	I0416 01:01:19.523322   62139 logs.go:276] 0 containers: []
	W0416 01:01:19.523332   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:19.523340   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:19.523392   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:19.558891   62139 cri.go:89] found id: ""
	I0416 01:01:19.558928   62139 logs.go:276] 0 containers: []
	W0416 01:01:19.558939   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:19.558946   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:19.559009   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:19.597876   62139 cri.go:89] found id: ""
	I0416 01:01:19.597905   62139 logs.go:276] 0 containers: []
	W0416 01:01:19.597917   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:19.597925   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:19.597980   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:19.637536   62139 cri.go:89] found id: ""
	I0416 01:01:19.637563   62139 logs.go:276] 0 containers: []
	W0416 01:01:19.637571   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:19.637576   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:19.637623   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:19.674414   62139 cri.go:89] found id: ""
	I0416 01:01:19.674447   62139 logs.go:276] 0 containers: []
	W0416 01:01:19.674458   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:19.674465   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:19.674525   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:19.709717   62139 cri.go:89] found id: ""
	I0416 01:01:19.709751   62139 logs.go:276] 0 containers: []
	W0416 01:01:19.709761   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:19.709769   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:19.709837   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:19.747458   62139 cri.go:89] found id: ""
	I0416 01:01:19.747482   62139 logs.go:276] 0 containers: []
	W0416 01:01:19.747489   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:19.747505   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:19.747523   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:19.834811   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:19.834846   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:19.876398   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:19.876428   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:19.931596   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:19.931632   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:19.947074   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:19.947103   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:20.023434   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:19.720883   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:21.721969   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:18.829276   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:20.829405   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:19.120254   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:21.120520   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:22.524036   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:22.539399   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:22.539488   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:22.574696   62139 cri.go:89] found id: ""
	I0416 01:01:22.574723   62139 logs.go:276] 0 containers: []
	W0416 01:01:22.574733   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:22.574741   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:22.574805   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:22.617474   62139 cri.go:89] found id: ""
	I0416 01:01:22.617503   62139 logs.go:276] 0 containers: []
	W0416 01:01:22.617514   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:22.617521   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:22.617579   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:22.657744   62139 cri.go:89] found id: ""
	I0416 01:01:22.657773   62139 logs.go:276] 0 containers: []
	W0416 01:01:22.657781   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:22.657786   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:22.657842   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:22.695513   62139 cri.go:89] found id: ""
	I0416 01:01:22.695544   62139 logs.go:276] 0 containers: []
	W0416 01:01:22.695552   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:22.695557   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:22.695606   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:22.732943   62139 cri.go:89] found id: ""
	I0416 01:01:22.732973   62139 logs.go:276] 0 containers: []
	W0416 01:01:22.732983   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:22.732990   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:22.733051   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:22.768735   62139 cri.go:89] found id: ""
	I0416 01:01:22.768767   62139 logs.go:276] 0 containers: []
	W0416 01:01:22.768775   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:22.768782   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:22.768842   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:22.804330   62139 cri.go:89] found id: ""
	I0416 01:01:22.804352   62139 logs.go:276] 0 containers: []
	W0416 01:01:22.804361   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:22.804367   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:22.804425   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:22.842165   62139 cri.go:89] found id: ""
	I0416 01:01:22.842192   62139 logs.go:276] 0 containers: []
	W0416 01:01:22.842199   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:22.842207   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:22.842219   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:22.921859   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:22.921880   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:22.921893   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:23.003432   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:23.003468   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:23.045446   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:23.045476   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:23.097327   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:23.097358   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:25.612297   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:25.627489   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:25.627565   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:25.664040   62139 cri.go:89] found id: ""
	I0416 01:01:25.664072   62139 logs.go:276] 0 containers: []
	W0416 01:01:25.664083   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:25.664091   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:25.664149   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:25.701004   62139 cri.go:89] found id: ""
	I0416 01:01:25.701029   62139 logs.go:276] 0 containers: []
	W0416 01:01:25.701036   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:25.701042   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:25.701087   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:25.740108   62139 cri.go:89] found id: ""
	I0416 01:01:25.740136   62139 logs.go:276] 0 containers: []
	W0416 01:01:25.740144   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:25.740150   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:25.740194   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:25.778413   62139 cri.go:89] found id: ""
	I0416 01:01:25.778447   62139 logs.go:276] 0 containers: []
	W0416 01:01:25.778458   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:25.778465   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:25.778530   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:25.815188   62139 cri.go:89] found id: ""
	I0416 01:01:25.815215   62139 logs.go:276] 0 containers: []
	W0416 01:01:25.815223   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:25.815230   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:25.815277   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:25.856370   62139 cri.go:89] found id: ""
	I0416 01:01:25.856402   62139 logs.go:276] 0 containers: []
	W0416 01:01:25.856410   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:25.856416   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:25.856476   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:25.895363   62139 cri.go:89] found id: ""
	I0416 01:01:25.895388   62139 logs.go:276] 0 containers: []
	W0416 01:01:25.895396   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:25.895402   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:25.895455   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:25.931854   62139 cri.go:89] found id: ""
	I0416 01:01:25.931881   62139 logs.go:276] 0 containers: []
	W0416 01:01:25.931889   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:25.931897   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:25.931923   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:26.008395   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:26.008419   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:26.008436   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:26.087946   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:26.087983   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:26.134693   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:26.134725   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:26.189618   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:26.189652   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:24.220798   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:26.221193   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:22.833917   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:25.331147   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:27.331702   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:23.620819   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:25.621119   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:28.705010   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:28.719575   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:28.719644   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:28.759011   62139 cri.go:89] found id: ""
	I0416 01:01:28.759037   62139 logs.go:276] 0 containers: []
	W0416 01:01:28.759044   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:28.759050   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:28.759112   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:28.794640   62139 cri.go:89] found id: ""
	I0416 01:01:28.794675   62139 logs.go:276] 0 containers: []
	W0416 01:01:28.794687   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:28.794695   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:28.794807   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:28.835634   62139 cri.go:89] found id: ""
	I0416 01:01:28.835663   62139 logs.go:276] 0 containers: []
	W0416 01:01:28.835674   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:28.835681   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:28.835747   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:28.875384   62139 cri.go:89] found id: ""
	I0416 01:01:28.875408   62139 logs.go:276] 0 containers: []
	W0416 01:01:28.875426   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:28.875433   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:28.875484   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:28.921202   62139 cri.go:89] found id: ""
	I0416 01:01:28.921234   62139 logs.go:276] 0 containers: []
	W0416 01:01:28.921244   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:28.921252   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:28.921314   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:28.958791   62139 cri.go:89] found id: ""
	I0416 01:01:28.958820   62139 logs.go:276] 0 containers: []
	W0416 01:01:28.958828   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:28.958834   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:28.958923   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:28.996136   62139 cri.go:89] found id: ""
	I0416 01:01:28.996168   62139 logs.go:276] 0 containers: []
	W0416 01:01:28.996179   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:28.996185   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:28.996259   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:29.033912   62139 cri.go:89] found id: ""
	I0416 01:01:29.033939   62139 logs.go:276] 0 containers: []
	W0416 01:01:29.033946   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:29.033954   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:29.033969   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:29.114162   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:29.114209   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:29.153934   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:29.153965   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:29.207548   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:29.207584   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:29.222158   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:29.222184   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:29.297414   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:31.798026   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:31.812740   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:31.812815   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:31.855058   62139 cri.go:89] found id: ""
	I0416 01:01:31.855087   62139 logs.go:276] 0 containers: []
	W0416 01:01:31.855098   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:31.855105   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:31.855172   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:31.897128   62139 cri.go:89] found id: ""
	I0416 01:01:31.897170   62139 logs.go:276] 0 containers: []
	W0416 01:01:31.897192   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:31.897200   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:31.897259   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:31.934497   62139 cri.go:89] found id: ""
	I0416 01:01:31.934520   62139 logs.go:276] 0 containers: []
	W0416 01:01:31.934532   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:31.934541   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:31.934588   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:31.974020   62139 cri.go:89] found id: ""
	I0416 01:01:31.974051   62139 logs.go:276] 0 containers: []
	W0416 01:01:31.974062   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:31.974093   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:31.974163   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:32.015433   62139 cri.go:89] found id: ""
	I0416 01:01:32.015460   62139 logs.go:276] 0 containers: []
	W0416 01:01:32.015471   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:32.015477   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:32.015540   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:32.058286   62139 cri.go:89] found id: ""
	I0416 01:01:32.058336   62139 logs.go:276] 0 containers: []
	W0416 01:01:32.058345   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:32.058351   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:32.058408   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:28.720596   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:30.720732   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:29.828996   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:31.830765   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:28.121038   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:30.619604   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:32.620210   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:32.100331   62139 cri.go:89] found id: ""
	I0416 01:01:32.102041   62139 logs.go:276] 0 containers: []
	W0416 01:01:32.102054   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:32.102061   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:32.102115   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:32.141420   62139 cri.go:89] found id: ""
	I0416 01:01:32.141446   62139 logs.go:276] 0 containers: []
	W0416 01:01:32.141454   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:32.141462   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:32.141473   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:32.195323   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:32.195364   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:32.210180   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:32.210206   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:32.282548   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:32.282570   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:32.282585   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:32.360627   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:32.360663   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:34.901239   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:34.917097   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:34.917205   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:34.959297   62139 cri.go:89] found id: ""
	I0416 01:01:34.959327   62139 logs.go:276] 0 containers: []
	W0416 01:01:34.959337   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:34.959344   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:34.959422   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:35.000927   62139 cri.go:89] found id: ""
	I0416 01:01:35.000974   62139 logs.go:276] 0 containers: []
	W0416 01:01:35.000984   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:35.001000   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:35.001064   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:35.038049   62139 cri.go:89] found id: ""
	I0416 01:01:35.038073   62139 logs.go:276] 0 containers: []
	W0416 01:01:35.038082   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:35.038090   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:35.038143   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:35.075396   62139 cri.go:89] found id: ""
	I0416 01:01:35.075467   62139 logs.go:276] 0 containers: []
	W0416 01:01:35.075481   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:35.075490   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:35.075591   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:35.114297   62139 cri.go:89] found id: ""
	I0416 01:01:35.114325   62139 logs.go:276] 0 containers: []
	W0416 01:01:35.114335   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:35.114343   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:35.114405   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:35.152075   62139 cri.go:89] found id: ""
	I0416 01:01:35.152099   62139 logs.go:276] 0 containers: []
	W0416 01:01:35.152106   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:35.152112   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:35.152161   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:35.187945   62139 cri.go:89] found id: ""
	I0416 01:01:35.187974   62139 logs.go:276] 0 containers: []
	W0416 01:01:35.187984   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:35.187991   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:35.188057   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:35.225225   62139 cri.go:89] found id: ""
	I0416 01:01:35.225253   62139 logs.go:276] 0 containers: []
	W0416 01:01:35.225262   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:35.225272   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:35.225287   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:35.279584   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:35.279628   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:35.293416   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:35.293456   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:35.370122   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:35.370147   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:35.370159   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:35.451482   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:35.451517   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:32.723226   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:35.221390   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:34.329009   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:36.329761   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:34.620492   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:36.620527   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:37.994358   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:38.008209   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:38.008277   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:38.047905   62139 cri.go:89] found id: ""
	I0416 01:01:38.047943   62139 logs.go:276] 0 containers: []
	W0416 01:01:38.047955   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:38.047962   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:38.048016   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:38.085749   62139 cri.go:89] found id: ""
	I0416 01:01:38.085780   62139 logs.go:276] 0 containers: []
	W0416 01:01:38.085790   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:38.085797   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:38.085864   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:38.122396   62139 cri.go:89] found id: ""
	I0416 01:01:38.122419   62139 logs.go:276] 0 containers: []
	W0416 01:01:38.122427   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:38.122432   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:38.122479   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:38.159284   62139 cri.go:89] found id: ""
	I0416 01:01:38.159313   62139 logs.go:276] 0 containers: []
	W0416 01:01:38.159322   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:38.159329   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:38.159390   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:38.193245   62139 cri.go:89] found id: ""
	I0416 01:01:38.193280   62139 logs.go:276] 0 containers: []
	W0416 01:01:38.193291   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:38.193298   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:38.193362   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:38.229147   62139 cri.go:89] found id: ""
	I0416 01:01:38.229179   62139 logs.go:276] 0 containers: []
	W0416 01:01:38.229188   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:38.229194   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:38.229251   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:38.267285   62139 cri.go:89] found id: ""
	I0416 01:01:38.267309   62139 logs.go:276] 0 containers: []
	W0416 01:01:38.267317   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:38.267321   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:38.267389   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:38.305181   62139 cri.go:89] found id: ""
	I0416 01:01:38.305207   62139 logs.go:276] 0 containers: []
	W0416 01:01:38.305215   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:38.305222   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:38.305237   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:38.321714   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:38.321742   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:38.398352   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:38.398372   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:38.398382   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:38.474095   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:38.474129   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:38.520540   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:38.520581   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:41.072083   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:41.086767   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:41.086860   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:41.125119   62139 cri.go:89] found id: ""
	I0416 01:01:41.125149   62139 logs.go:276] 0 containers: []
	W0416 01:01:41.125175   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:41.125182   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:41.125253   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:41.159885   62139 cri.go:89] found id: ""
	I0416 01:01:41.159915   62139 logs.go:276] 0 containers: []
	W0416 01:01:41.159925   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:41.159931   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:41.160012   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:41.196334   62139 cri.go:89] found id: ""
	I0416 01:01:41.196366   62139 logs.go:276] 0 containers: []
	W0416 01:01:41.196377   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:41.196385   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:41.196447   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:41.234254   62139 cri.go:89] found id: ""
	I0416 01:01:41.234282   62139 logs.go:276] 0 containers: []
	W0416 01:01:41.234300   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:41.234319   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:41.234413   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:41.271499   62139 cri.go:89] found id: ""
	I0416 01:01:41.271523   62139 logs.go:276] 0 containers: []
	W0416 01:01:41.271531   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:41.271536   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:41.271604   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:41.311064   62139 cri.go:89] found id: ""
	I0416 01:01:41.311096   62139 logs.go:276] 0 containers: []
	W0416 01:01:41.311107   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:41.311114   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:41.311179   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:41.349012   62139 cri.go:89] found id: ""
	I0416 01:01:41.349043   62139 logs.go:276] 0 containers: []
	W0416 01:01:41.349053   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:41.349060   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:41.349117   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:41.385258   62139 cri.go:89] found id: ""
	I0416 01:01:41.385298   62139 logs.go:276] 0 containers: []
	W0416 01:01:41.385305   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:41.385315   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:41.385330   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:41.470086   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:41.470130   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:41.513835   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:41.513870   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:41.565980   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:41.566013   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:41.582647   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:41.582678   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:41.658928   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:37.724628   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:40.222025   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:38.329899   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:40.330143   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:39.120850   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:41.121383   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:44.159107   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:44.173015   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:44.173088   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:44.214310   62139 cri.go:89] found id: ""
	I0416 01:01:44.214345   62139 logs.go:276] 0 containers: []
	W0416 01:01:44.214363   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:44.214374   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:44.214462   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:44.256476   62139 cri.go:89] found id: ""
	I0416 01:01:44.256503   62139 logs.go:276] 0 containers: []
	W0416 01:01:44.256511   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:44.256516   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:44.256577   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:44.298047   62139 cri.go:89] found id: ""
	I0416 01:01:44.298079   62139 logs.go:276] 0 containers: []
	W0416 01:01:44.298089   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:44.298097   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:44.298158   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:44.339165   62139 cri.go:89] found id: ""
	I0416 01:01:44.339196   62139 logs.go:276] 0 containers: []
	W0416 01:01:44.339206   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:44.339213   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:44.339280   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:44.378078   62139 cri.go:89] found id: ""
	I0416 01:01:44.378108   62139 logs.go:276] 0 containers: []
	W0416 01:01:44.378116   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:44.378122   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:44.378170   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:44.421494   62139 cri.go:89] found id: ""
	I0416 01:01:44.421525   62139 logs.go:276] 0 containers: []
	W0416 01:01:44.421536   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:44.421543   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:44.421609   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:44.459919   62139 cri.go:89] found id: ""
	I0416 01:01:44.459948   62139 logs.go:276] 0 containers: []
	W0416 01:01:44.459958   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:44.459965   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:44.460025   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:44.499448   62139 cri.go:89] found id: ""
	I0416 01:01:44.499479   62139 logs.go:276] 0 containers: []
	W0416 01:01:44.499489   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:44.499500   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:44.499516   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:44.555122   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:44.555159   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:44.572048   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:44.572075   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:44.646252   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:44.646283   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:44.646299   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:44.730593   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:44.730620   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:42.720855   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:44.723141   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:46.723452   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:42.831045   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:45.329039   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:47.331355   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:43.619897   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:45.620068   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:47.620162   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:47.276658   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:47.291354   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:47.291431   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:47.334998   62139 cri.go:89] found id: ""
	I0416 01:01:47.335036   62139 logs.go:276] 0 containers: []
	W0416 01:01:47.335055   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:47.335062   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:47.335121   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:47.376546   62139 cri.go:89] found id: ""
	I0416 01:01:47.376575   62139 logs.go:276] 0 containers: []
	W0416 01:01:47.376582   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:47.376587   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:47.376647   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:47.418609   62139 cri.go:89] found id: ""
	I0416 01:01:47.418642   62139 logs.go:276] 0 containers: []
	W0416 01:01:47.418654   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:47.418661   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:47.418721   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:47.459432   62139 cri.go:89] found id: ""
	I0416 01:01:47.459458   62139 logs.go:276] 0 containers: []
	W0416 01:01:47.459465   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:47.459470   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:47.459518   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:47.497776   62139 cri.go:89] found id: ""
	I0416 01:01:47.497800   62139 logs.go:276] 0 containers: []
	W0416 01:01:47.497808   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:47.497813   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:47.497866   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:47.536803   62139 cri.go:89] found id: ""
	I0416 01:01:47.536835   62139 logs.go:276] 0 containers: []
	W0416 01:01:47.536842   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:47.536849   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:47.536916   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:47.575883   62139 cri.go:89] found id: ""
	I0416 01:01:47.575916   62139 logs.go:276] 0 containers: []
	W0416 01:01:47.575923   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:47.575931   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:47.575976   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:47.627676   62139 cri.go:89] found id: ""
	I0416 01:01:47.627697   62139 logs.go:276] 0 containers: []
	W0416 01:01:47.627703   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:47.627711   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:47.627725   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:47.669714   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:47.669745   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:47.721349   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:47.721389   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:47.735833   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:47.735859   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:47.806890   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:47.806913   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:47.806925   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:50.386960   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:50.400832   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:50.400901   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:50.443042   62139 cri.go:89] found id: ""
	I0416 01:01:50.443076   62139 logs.go:276] 0 containers: []
	W0416 01:01:50.443086   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:50.443094   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:50.443157   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:50.480495   62139 cri.go:89] found id: ""
	I0416 01:01:50.480526   62139 logs.go:276] 0 containers: []
	W0416 01:01:50.480536   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:50.480544   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:50.480602   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:50.516578   62139 cri.go:89] found id: ""
	I0416 01:01:50.516605   62139 logs.go:276] 0 containers: []
	W0416 01:01:50.516613   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:50.516618   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:50.516676   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:50.555302   62139 cri.go:89] found id: ""
	I0416 01:01:50.555330   62139 logs.go:276] 0 containers: []
	W0416 01:01:50.555337   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:50.555344   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:50.555388   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:50.594647   62139 cri.go:89] found id: ""
	I0416 01:01:50.594674   62139 logs.go:276] 0 containers: []
	W0416 01:01:50.594682   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:50.594688   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:50.594737   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:50.633401   62139 cri.go:89] found id: ""
	I0416 01:01:50.633428   62139 logs.go:276] 0 containers: []
	W0416 01:01:50.633436   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:50.633442   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:50.633501   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:50.673714   62139 cri.go:89] found id: ""
	I0416 01:01:50.673744   62139 logs.go:276] 0 containers: []
	W0416 01:01:50.673755   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:50.673763   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:50.673811   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:50.710103   62139 cri.go:89] found id: ""
	I0416 01:01:50.710127   62139 logs.go:276] 0 containers: []
	W0416 01:01:50.710134   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:50.710142   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:50.710153   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:50.765121   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:50.765168   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:50.780407   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:50.780436   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:50.855602   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:50.855635   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:50.855663   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:50.937249   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:50.937283   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:49.220483   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:51.724129   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:49.829742   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:52.330579   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:49.621383   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:52.120841   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:53.481261   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:53.495872   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:53.495931   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:53.532710   62139 cri.go:89] found id: ""
	I0416 01:01:53.532738   62139 logs.go:276] 0 containers: []
	W0416 01:01:53.532748   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:53.532756   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:53.532815   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:53.568734   62139 cri.go:89] found id: ""
	I0416 01:01:53.568763   62139 logs.go:276] 0 containers: []
	W0416 01:01:53.568770   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:53.568776   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:53.568841   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:53.608937   62139 cri.go:89] found id: ""
	I0416 01:01:53.608965   62139 logs.go:276] 0 containers: []
	W0416 01:01:53.608976   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:53.608984   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:53.609042   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:53.646538   62139 cri.go:89] found id: ""
	I0416 01:01:53.646573   62139 logs.go:276] 0 containers: []
	W0416 01:01:53.646585   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:53.646592   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:53.646657   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:53.687761   62139 cri.go:89] found id: ""
	I0416 01:01:53.687792   62139 logs.go:276] 0 containers: []
	W0416 01:01:53.687801   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:53.687809   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:53.687872   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:53.726126   62139 cri.go:89] found id: ""
	I0416 01:01:53.726161   62139 logs.go:276] 0 containers: []
	W0416 01:01:53.726169   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:53.726174   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:53.726224   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:53.762583   62139 cri.go:89] found id: ""
	I0416 01:01:53.762609   62139 logs.go:276] 0 containers: []
	W0416 01:01:53.762618   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:53.762625   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:53.762695   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:53.803685   62139 cri.go:89] found id: ""
	I0416 01:01:53.803715   62139 logs.go:276] 0 containers: []
	W0416 01:01:53.803726   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:53.803737   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:53.803751   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:53.862215   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:53.862255   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:53.877713   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:53.877743   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:53.953394   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:53.953422   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:53.953438   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:54.044657   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:54.044698   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:56.602100   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:56.616548   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:56.616632   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:56.653765   62139 cri.go:89] found id: ""
	I0416 01:01:56.653794   62139 logs.go:276] 0 containers: []
	W0416 01:01:56.653810   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:56.653817   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:56.653879   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:56.691394   62139 cri.go:89] found id: ""
	I0416 01:01:56.691416   62139 logs.go:276] 0 containers: []
	W0416 01:01:56.691422   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:56.691428   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:56.691475   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:56.728995   62139 cri.go:89] found id: ""
	I0416 01:01:56.729017   62139 logs.go:276] 0 containers: []
	W0416 01:01:56.729024   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:56.729029   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:56.729078   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:56.769119   62139 cri.go:89] found id: ""
	I0416 01:01:56.769184   62139 logs.go:276] 0 containers: []
	W0416 01:01:56.769196   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:56.769204   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:56.769270   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:56.810562   62139 cri.go:89] found id: ""
	I0416 01:01:56.810589   62139 logs.go:276] 0 containers: []
	W0416 01:01:56.810597   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:56.810608   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:56.810669   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:56.849367   62139 cri.go:89] found id: ""
	I0416 01:01:56.849392   62139 logs.go:276] 0 containers: []
	W0416 01:01:56.849399   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:56.849405   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:56.849464   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:56.887330   62139 cri.go:89] found id: ""
	I0416 01:01:56.887359   62139 logs.go:276] 0 containers: []
	W0416 01:01:56.887370   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:56.887378   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:56.887461   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:56.926636   62139 cri.go:89] found id: ""
	I0416 01:01:56.926664   62139 logs.go:276] 0 containers: []
	W0416 01:01:56.926672   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:56.926682   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:56.926697   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:56.981836   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:56.981875   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:56.996385   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:56.996411   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:57.071026   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:57.071054   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:57.071070   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:54.219668   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:56.221212   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:54.829549   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:56.831452   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:54.619864   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:56.620968   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:57.155430   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:57.155466   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:59.701547   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:59.714465   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:59.714526   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:59.759791   62139 cri.go:89] found id: ""
	I0416 01:01:59.759830   62139 logs.go:276] 0 containers: []
	W0416 01:01:59.759841   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:59.759849   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:59.759914   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:59.813303   62139 cri.go:89] found id: ""
	I0416 01:01:59.813334   62139 logs.go:276] 0 containers: []
	W0416 01:01:59.813343   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:59.813353   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:59.813406   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:59.872291   62139 cri.go:89] found id: ""
	I0416 01:01:59.872328   62139 logs.go:276] 0 containers: []
	W0416 01:01:59.872338   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:59.872347   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:59.872423   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:59.910397   62139 cri.go:89] found id: ""
	I0416 01:01:59.910425   62139 logs.go:276] 0 containers: []
	W0416 01:01:59.910437   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:59.910444   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:59.910512   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:59.953656   62139 cri.go:89] found id: ""
	I0416 01:01:59.953685   62139 logs.go:276] 0 containers: []
	W0416 01:01:59.953695   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:59.953703   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:59.953779   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:59.993193   62139 cri.go:89] found id: ""
	I0416 01:01:59.993220   62139 logs.go:276] 0 containers: []
	W0416 01:01:59.993229   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:59.993239   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:59.993298   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:00.030205   62139 cri.go:89] found id: ""
	I0416 01:02:00.030229   62139 logs.go:276] 0 containers: []
	W0416 01:02:00.030237   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:00.030242   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:00.030302   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:00.068160   62139 cri.go:89] found id: ""
	I0416 01:02:00.068189   62139 logs.go:276] 0 containers: []
	W0416 01:02:00.068199   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:00.068211   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:00.068226   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:00.149383   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:00.149416   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:00.188000   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:00.188025   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:00.240522   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:00.240550   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:00.254189   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:00.254215   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:00.331483   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:58.721272   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:01.220698   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:59.329440   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:01.830408   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:59.122269   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:01.619839   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:02.832656   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:02.846826   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:02.846907   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:02.883397   62139 cri.go:89] found id: ""
	I0416 01:02:02.883428   62139 logs.go:276] 0 containers: []
	W0416 01:02:02.883439   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:02.883446   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:02.883499   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:02.923686   62139 cri.go:89] found id: ""
	I0416 01:02:02.923708   62139 logs.go:276] 0 containers: []
	W0416 01:02:02.923715   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:02.923719   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:02.923770   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:02.964155   62139 cri.go:89] found id: ""
	I0416 01:02:02.964180   62139 logs.go:276] 0 containers: []
	W0416 01:02:02.964188   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:02.964193   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:02.964247   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:03.005357   62139 cri.go:89] found id: ""
	I0416 01:02:03.005386   62139 logs.go:276] 0 containers: []
	W0416 01:02:03.005396   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:03.005403   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:03.005464   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:03.047221   62139 cri.go:89] found id: ""
	I0416 01:02:03.047246   62139 logs.go:276] 0 containers: []
	W0416 01:02:03.047257   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:03.047264   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:03.047326   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:03.088737   62139 cri.go:89] found id: ""
	I0416 01:02:03.088767   62139 logs.go:276] 0 containers: []
	W0416 01:02:03.088776   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:03.088784   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:03.088846   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:03.129756   62139 cri.go:89] found id: ""
	I0416 01:02:03.129778   62139 logs.go:276] 0 containers: []
	W0416 01:02:03.129785   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:03.129790   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:03.129837   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:03.169422   62139 cri.go:89] found id: ""
	I0416 01:02:03.169447   62139 logs.go:276] 0 containers: []
	W0416 01:02:03.169459   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:03.169468   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:03.169478   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:03.246485   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:03.246503   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:03.246514   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:03.326498   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:03.326533   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:03.372788   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:03.372817   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:03.428561   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:03.428603   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:05.944274   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:05.957744   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:05.957813   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:05.993348   62139 cri.go:89] found id: ""
	I0416 01:02:05.993400   62139 logs.go:276] 0 containers: []
	W0416 01:02:05.993411   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:05.993430   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:05.993497   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:06.034811   62139 cri.go:89] found id: ""
	I0416 01:02:06.034848   62139 logs.go:276] 0 containers: []
	W0416 01:02:06.034859   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:06.034866   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:06.034953   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:06.079047   62139 cri.go:89] found id: ""
	I0416 01:02:06.079070   62139 logs.go:276] 0 containers: []
	W0416 01:02:06.079078   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:06.079082   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:06.079127   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:06.122494   62139 cri.go:89] found id: ""
	I0416 01:02:06.122513   62139 logs.go:276] 0 containers: []
	W0416 01:02:06.122520   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:06.122525   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:06.122589   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:06.163436   62139 cri.go:89] found id: ""
	I0416 01:02:06.163461   62139 logs.go:276] 0 containers: []
	W0416 01:02:06.163468   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:06.163473   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:06.163534   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:06.205036   62139 cri.go:89] found id: ""
	I0416 01:02:06.205064   62139 logs.go:276] 0 containers: []
	W0416 01:02:06.205072   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:06.205077   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:06.205134   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:06.242056   62139 cri.go:89] found id: ""
	I0416 01:02:06.242084   62139 logs.go:276] 0 containers: []
	W0416 01:02:06.242094   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:06.242107   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:06.242166   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:06.278604   62139 cri.go:89] found id: ""
	I0416 01:02:06.278636   62139 logs.go:276] 0 containers: []
	W0416 01:02:06.278646   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:06.278656   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:06.278671   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:06.334631   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:06.334658   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:06.348199   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:06.348227   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:06.424774   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:06.424793   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:06.424804   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:06.503509   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:06.503542   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:03.221238   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:05.721006   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:04.329267   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:06.329476   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:03.620957   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:06.121348   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:09.046665   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:09.061072   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:09.061173   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:09.097482   62139 cri.go:89] found id: ""
	I0416 01:02:09.097514   62139 logs.go:276] 0 containers: []
	W0416 01:02:09.097524   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:09.097543   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:09.097613   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:09.135124   62139 cri.go:89] found id: ""
	I0416 01:02:09.135157   62139 logs.go:276] 0 containers: []
	W0416 01:02:09.135168   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:09.135175   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:09.135236   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:09.173887   62139 cri.go:89] found id: ""
	I0416 01:02:09.173912   62139 logs.go:276] 0 containers: []
	W0416 01:02:09.173920   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:09.173925   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:09.173983   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:09.209658   62139 cri.go:89] found id: ""
	I0416 01:02:09.209683   62139 logs.go:276] 0 containers: []
	W0416 01:02:09.209691   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:09.209702   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:09.209763   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:09.249149   62139 cri.go:89] found id: ""
	I0416 01:02:09.249200   62139 logs.go:276] 0 containers: []
	W0416 01:02:09.249209   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:09.249214   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:09.249292   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:09.291447   62139 cri.go:89] found id: ""
	I0416 01:02:09.291477   62139 logs.go:276] 0 containers: []
	W0416 01:02:09.291487   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:09.291494   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:09.291553   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:09.329248   62139 cri.go:89] found id: ""
	I0416 01:02:09.329271   62139 logs.go:276] 0 containers: []
	W0416 01:02:09.329281   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:09.329288   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:09.329345   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:09.365585   62139 cri.go:89] found id: ""
	I0416 01:02:09.365613   62139 logs.go:276] 0 containers: []
	W0416 01:02:09.365622   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:09.365632   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:09.365645   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:09.418998   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:09.419031   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:09.433531   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:09.433558   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:09.508543   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:09.508573   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:09.508588   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:09.593889   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:09.593930   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:08.220704   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:10.221232   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:12.224680   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:08.330281   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:10.828856   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:08.619632   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:10.619780   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:12.621319   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:12.139020   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:12.154268   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:12.154349   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:12.192717   62139 cri.go:89] found id: ""
	I0416 01:02:12.192746   62139 logs.go:276] 0 containers: []
	W0416 01:02:12.192758   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:12.192765   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:12.192832   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:12.230633   62139 cri.go:89] found id: ""
	I0416 01:02:12.230662   62139 logs.go:276] 0 containers: []
	W0416 01:02:12.230674   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:12.230681   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:12.230729   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:12.271108   62139 cri.go:89] found id: ""
	I0416 01:02:12.271150   62139 logs.go:276] 0 containers: []
	W0416 01:02:12.271161   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:12.271168   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:12.271233   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:12.310161   62139 cri.go:89] found id: ""
	I0416 01:02:12.310186   62139 logs.go:276] 0 containers: []
	W0416 01:02:12.310194   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:12.310201   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:12.310272   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:12.349638   62139 cri.go:89] found id: ""
	I0416 01:02:12.349668   62139 logs.go:276] 0 containers: []
	W0416 01:02:12.349678   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:12.349686   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:12.349766   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:12.391565   62139 cri.go:89] found id: ""
	I0416 01:02:12.391597   62139 logs.go:276] 0 containers: []
	W0416 01:02:12.391607   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:12.391620   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:12.391681   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:12.429142   62139 cri.go:89] found id: ""
	I0416 01:02:12.429186   62139 logs.go:276] 0 containers: []
	W0416 01:02:12.429195   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:12.429200   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:12.429249   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:12.466209   62139 cri.go:89] found id: ""
	I0416 01:02:12.466238   62139 logs.go:276] 0 containers: []
	W0416 01:02:12.466249   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:12.466260   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:12.466277   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:12.551333   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:12.551355   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:12.551367   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:12.634465   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:12.634496   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:12.675198   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:12.675231   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:12.728933   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:12.728962   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:15.243521   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:15.258589   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:15.258657   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:15.301901   62139 cri.go:89] found id: ""
	I0416 01:02:15.301931   62139 logs.go:276] 0 containers: []
	W0416 01:02:15.301943   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:15.301951   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:15.302006   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:15.345932   62139 cri.go:89] found id: ""
	I0416 01:02:15.346011   62139 logs.go:276] 0 containers: []
	W0416 01:02:15.346032   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:15.346043   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:15.346113   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:15.387957   62139 cri.go:89] found id: ""
	I0416 01:02:15.387983   62139 logs.go:276] 0 containers: []
	W0416 01:02:15.387991   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:15.387996   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:15.388044   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:15.424887   62139 cri.go:89] found id: ""
	I0416 01:02:15.424916   62139 logs.go:276] 0 containers: []
	W0416 01:02:15.424927   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:15.424934   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:15.424996   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:15.460088   62139 cri.go:89] found id: ""
	I0416 01:02:15.460113   62139 logs.go:276] 0 containers: []
	W0416 01:02:15.460120   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:15.460125   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:15.460172   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:15.495567   62139 cri.go:89] found id: ""
	I0416 01:02:15.495597   62139 logs.go:276] 0 containers: []
	W0416 01:02:15.495607   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:15.495615   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:15.495692   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:15.533901   62139 cri.go:89] found id: ""
	I0416 01:02:15.533931   62139 logs.go:276] 0 containers: []
	W0416 01:02:15.533940   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:15.533946   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:15.533996   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:15.576665   62139 cri.go:89] found id: ""
	I0416 01:02:15.576692   62139 logs.go:276] 0 containers: []
	W0416 01:02:15.576702   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:15.576712   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:15.576728   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:15.626933   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:15.626961   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:15.681627   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:15.681656   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:15.695572   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:15.695608   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:15.768910   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:15.768934   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:15.768945   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:14.720472   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:16.722418   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:12.830086   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:14.830540   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:17.329838   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:15.120394   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:17.120523   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:18.349776   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:18.363499   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:18.363568   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:18.404210   62139 cri.go:89] found id: ""
	I0416 01:02:18.404234   62139 logs.go:276] 0 containers: []
	W0416 01:02:18.404241   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:18.404246   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:18.404304   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:18.444610   62139 cri.go:89] found id: ""
	I0416 01:02:18.444641   62139 logs.go:276] 0 containers: []
	W0416 01:02:18.444651   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:18.444658   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:18.444722   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:18.483134   62139 cri.go:89] found id: ""
	I0416 01:02:18.483160   62139 logs.go:276] 0 containers: []
	W0416 01:02:18.483168   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:18.483173   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:18.483220   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:18.522120   62139 cri.go:89] found id: ""
	I0416 01:02:18.522144   62139 logs.go:276] 0 containers: []
	W0416 01:02:18.522156   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:18.522161   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:18.522205   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:18.566293   62139 cri.go:89] found id: ""
	I0416 01:02:18.566319   62139 logs.go:276] 0 containers: []
	W0416 01:02:18.566327   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:18.566332   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:18.566391   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:18.604000   62139 cri.go:89] found id: ""
	I0416 01:02:18.604028   62139 logs.go:276] 0 containers: []
	W0416 01:02:18.604036   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:18.604042   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:18.604089   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:18.641967   62139 cri.go:89] found id: ""
	I0416 01:02:18.641999   62139 logs.go:276] 0 containers: []
	W0416 01:02:18.642009   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:18.642016   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:18.642080   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:18.683494   62139 cri.go:89] found id: ""
	I0416 01:02:18.683533   62139 logs.go:276] 0 containers: []
	W0416 01:02:18.683544   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:18.683555   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:18.683570   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:18.761674   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:18.761699   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:18.761714   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:18.849959   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:18.849995   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:18.895534   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:18.895570   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:18.949287   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:18.949320   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:21.464393   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:21.479019   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:21.479087   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:21.516262   62139 cri.go:89] found id: ""
	I0416 01:02:21.516303   62139 logs.go:276] 0 containers: []
	W0416 01:02:21.516313   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:21.516323   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:21.516385   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:21.554279   62139 cri.go:89] found id: ""
	I0416 01:02:21.554315   62139 logs.go:276] 0 containers: []
	W0416 01:02:21.554327   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:21.554334   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:21.554393   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:21.590889   62139 cri.go:89] found id: ""
	I0416 01:02:21.590918   62139 logs.go:276] 0 containers: []
	W0416 01:02:21.590928   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:21.590935   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:21.590996   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:21.629925   62139 cri.go:89] found id: ""
	I0416 01:02:21.629955   62139 logs.go:276] 0 containers: []
	W0416 01:02:21.629965   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:21.629972   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:21.630032   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:21.667947   62139 cri.go:89] found id: ""
	I0416 01:02:21.667975   62139 logs.go:276] 0 containers: []
	W0416 01:02:21.667983   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:21.667988   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:21.668045   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:21.706275   62139 cri.go:89] found id: ""
	I0416 01:02:21.706308   62139 logs.go:276] 0 containers: []
	W0416 01:02:21.706318   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:21.706326   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:21.706392   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:21.748077   62139 cri.go:89] found id: ""
	I0416 01:02:21.748106   62139 logs.go:276] 0 containers: []
	W0416 01:02:21.748117   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:21.748123   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:21.748170   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:21.785441   62139 cri.go:89] found id: ""
	I0416 01:02:21.785467   62139 logs.go:276] 0 containers: []
	W0416 01:02:21.785477   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:21.785488   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:21.785510   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:21.824702   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:21.824735   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:21.882780   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:21.882810   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:21.897211   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:21.897236   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:21.971882   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:21.971903   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:21.971915   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:19.220913   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:21.721219   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:19.330086   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:21.836759   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:19.620521   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:21.621229   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:24.550749   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:24.564951   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:24.565024   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:24.605025   62139 cri.go:89] found id: ""
	I0416 01:02:24.605055   62139 logs.go:276] 0 containers: []
	W0416 01:02:24.605063   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:24.605068   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:24.605142   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:24.640727   62139 cri.go:89] found id: ""
	I0416 01:02:24.640757   62139 logs.go:276] 0 containers: []
	W0416 01:02:24.640764   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:24.640769   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:24.640822   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:24.678031   62139 cri.go:89] found id: ""
	I0416 01:02:24.678060   62139 logs.go:276] 0 containers: []
	W0416 01:02:24.678068   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:24.678074   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:24.678125   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:24.714854   62139 cri.go:89] found id: ""
	I0416 01:02:24.714896   62139 logs.go:276] 0 containers: []
	W0416 01:02:24.714907   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:24.714914   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:24.714981   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:24.752129   62139 cri.go:89] found id: ""
	I0416 01:02:24.752158   62139 logs.go:276] 0 containers: []
	W0416 01:02:24.752168   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:24.752177   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:24.752243   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:24.788507   62139 cri.go:89] found id: ""
	I0416 01:02:24.788541   62139 logs.go:276] 0 containers: []
	W0416 01:02:24.788551   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:24.788557   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:24.788617   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:24.828379   62139 cri.go:89] found id: ""
	I0416 01:02:24.828409   62139 logs.go:276] 0 containers: []
	W0416 01:02:24.828419   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:24.828427   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:24.828486   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:24.865676   62139 cri.go:89] found id: ""
	I0416 01:02:24.865707   62139 logs.go:276] 0 containers: []
	W0416 01:02:24.865717   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:24.865725   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:24.865736   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:24.941057   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:24.941079   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:24.941091   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:25.025937   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:25.025979   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:25.065828   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:25.065871   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:25.128004   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:25.128039   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:24.221435   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:26.720181   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:24.329677   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:26.329901   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:24.119781   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:26.120316   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:27.643201   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:27.658601   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:27.658660   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:27.700627   62139 cri.go:89] found id: ""
	I0416 01:02:27.700650   62139 logs.go:276] 0 containers: []
	W0416 01:02:27.700657   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:27.700662   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:27.700718   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:27.734929   62139 cri.go:89] found id: ""
	I0416 01:02:27.734957   62139 logs.go:276] 0 containers: []
	W0416 01:02:27.734966   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:27.734975   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:27.735046   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:27.772412   62139 cri.go:89] found id: ""
	I0416 01:02:27.772440   62139 logs.go:276] 0 containers: []
	W0416 01:02:27.772448   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:27.772454   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:27.772514   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:27.809436   62139 cri.go:89] found id: ""
	I0416 01:02:27.809459   62139 logs.go:276] 0 containers: []
	W0416 01:02:27.809466   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:27.809471   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:27.809518   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:27.845717   62139 cri.go:89] found id: ""
	I0416 01:02:27.845746   62139 logs.go:276] 0 containers: []
	W0416 01:02:27.845756   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:27.845764   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:27.845825   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:27.887224   62139 cri.go:89] found id: ""
	I0416 01:02:27.887250   62139 logs.go:276] 0 containers: []
	W0416 01:02:27.887260   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:27.887267   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:27.887334   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:27.920945   62139 cri.go:89] found id: ""
	I0416 01:02:27.920974   62139 logs.go:276] 0 containers: []
	W0416 01:02:27.920984   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:27.920992   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:27.921066   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:27.960933   62139 cri.go:89] found id: ""
	I0416 01:02:27.960959   62139 logs.go:276] 0 containers: []
	W0416 01:02:27.960966   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:27.960974   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:27.960985   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:28.013003   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:28.013033   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:28.026599   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:28.026626   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:28.117200   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:28.117226   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:28.117240   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:28.198003   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:28.198036   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:30.741379   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:30.757102   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:30.757199   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:30.798038   62139 cri.go:89] found id: ""
	I0416 01:02:30.798068   62139 logs.go:276] 0 containers: []
	W0416 01:02:30.798075   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:30.798080   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:30.798137   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:30.844840   62139 cri.go:89] found id: ""
	I0416 01:02:30.844862   62139 logs.go:276] 0 containers: []
	W0416 01:02:30.844871   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:30.844877   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:30.844944   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:30.883816   62139 cri.go:89] found id: ""
	I0416 01:02:30.883841   62139 logs.go:276] 0 containers: []
	W0416 01:02:30.883849   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:30.883855   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:30.883903   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:30.919353   62139 cri.go:89] found id: ""
	I0416 01:02:30.919380   62139 logs.go:276] 0 containers: []
	W0416 01:02:30.919389   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:30.919396   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:30.919457   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:30.957036   62139 cri.go:89] found id: ""
	I0416 01:02:30.957061   62139 logs.go:276] 0 containers: []
	W0416 01:02:30.957069   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:30.957084   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:30.957143   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:30.993179   62139 cri.go:89] found id: ""
	I0416 01:02:30.993211   62139 logs.go:276] 0 containers: []
	W0416 01:02:30.993220   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:30.993228   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:30.993315   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:31.032634   62139 cri.go:89] found id: ""
	I0416 01:02:31.032661   62139 logs.go:276] 0 containers: []
	W0416 01:02:31.032670   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:31.032684   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:31.032753   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:31.069345   62139 cri.go:89] found id: ""
	I0416 01:02:31.069373   62139 logs.go:276] 0 containers: []
	W0416 01:02:31.069382   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:31.069392   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:31.069408   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:31.123989   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:31.124017   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:31.140998   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:31.141032   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:31.217496   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:31.218063   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:31.218098   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:31.296811   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:31.296858   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:28.720502   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:30.720709   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:28.329978   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:30.829406   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:28.121200   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:30.620659   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:33.842516   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:33.872440   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:33.872518   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:33.909287   62139 cri.go:89] found id: ""
	I0416 01:02:33.909314   62139 logs.go:276] 0 containers: []
	W0416 01:02:33.909324   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:33.909329   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:33.909388   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:33.947531   62139 cri.go:89] found id: ""
	I0416 01:02:33.947566   62139 logs.go:276] 0 containers: []
	W0416 01:02:33.947576   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:33.947584   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:33.947642   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:33.990084   62139 cri.go:89] found id: ""
	I0416 01:02:33.990118   62139 logs.go:276] 0 containers: []
	W0416 01:02:33.990129   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:33.990136   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:33.990200   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:34.024121   62139 cri.go:89] found id: ""
	I0416 01:02:34.024151   62139 logs.go:276] 0 containers: []
	W0416 01:02:34.024159   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:34.024165   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:34.024218   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:34.061075   62139 cri.go:89] found id: ""
	I0416 01:02:34.061104   62139 logs.go:276] 0 containers: []
	W0416 01:02:34.061111   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:34.061116   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:34.061179   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:34.097887   62139 cri.go:89] found id: ""
	I0416 01:02:34.097928   62139 logs.go:276] 0 containers: []
	W0416 01:02:34.097938   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:34.097946   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:34.098007   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:34.135541   62139 cri.go:89] found id: ""
	I0416 01:02:34.135567   62139 logs.go:276] 0 containers: []
	W0416 01:02:34.135577   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:34.135585   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:34.135637   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:34.170884   62139 cri.go:89] found id: ""
	I0416 01:02:34.170910   62139 logs.go:276] 0 containers: []
	W0416 01:02:34.170920   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:34.170931   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:34.170946   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:34.223465   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:34.223494   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:34.238898   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:34.238929   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:34.316916   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:34.316946   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:34.316962   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:34.401564   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:34.401600   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:36.945789   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:36.959707   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:36.959774   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:36.994463   62139 cri.go:89] found id: ""
	I0416 01:02:36.994497   62139 logs.go:276] 0 containers: []
	W0416 01:02:36.994508   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:36.994515   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:36.994579   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:37.028847   62139 cri.go:89] found id: ""
	I0416 01:02:37.028877   62139 logs.go:276] 0 containers: []
	W0416 01:02:37.028887   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:37.028893   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:37.028954   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:37.061841   62139 cri.go:89] found id: ""
	I0416 01:02:37.061872   62139 logs.go:276] 0 containers: []
	W0416 01:02:37.061882   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:37.061889   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:37.061954   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:37.098460   62139 cri.go:89] found id: ""
	I0416 01:02:37.098485   62139 logs.go:276] 0 containers: []
	W0416 01:02:37.098495   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:37.098502   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:37.098569   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:33.220794   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:35.221650   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:37.222563   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:32.829517   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:34.829762   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:36.831773   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:33.121842   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:35.620647   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:37.620795   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:37.133016   62139 cri.go:89] found id: ""
	I0416 01:02:37.133044   62139 logs.go:276] 0 containers: []
	W0416 01:02:37.133053   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:37.133059   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:37.133122   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:37.170252   62139 cri.go:89] found id: ""
	I0416 01:02:37.170276   62139 logs.go:276] 0 containers: []
	W0416 01:02:37.170286   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:37.170293   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:37.170354   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:37.206114   62139 cri.go:89] found id: ""
	I0416 01:02:37.206141   62139 logs.go:276] 0 containers: []
	W0416 01:02:37.206148   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:37.206153   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:37.206208   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:37.241353   62139 cri.go:89] found id: ""
	I0416 01:02:37.241383   62139 logs.go:276] 0 containers: []
	W0416 01:02:37.241395   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:37.241405   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:37.241429   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:37.293452   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:37.293483   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:37.309885   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:37.309926   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:37.385455   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:37.385481   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:37.385496   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:37.463064   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:37.463101   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:40.008717   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:40.022249   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:40.022327   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:40.064444   62139 cri.go:89] found id: ""
	I0416 01:02:40.064479   62139 logs.go:276] 0 containers: []
	W0416 01:02:40.064490   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:40.064497   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:40.064545   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:40.100326   62139 cri.go:89] found id: ""
	I0416 01:02:40.100353   62139 logs.go:276] 0 containers: []
	W0416 01:02:40.100361   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:40.100366   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:40.100413   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:40.138818   62139 cri.go:89] found id: ""
	I0416 01:02:40.138857   62139 logs.go:276] 0 containers: []
	W0416 01:02:40.138869   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:40.138878   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:40.138928   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:40.184203   62139 cri.go:89] found id: ""
	I0416 01:02:40.184234   62139 logs.go:276] 0 containers: []
	W0416 01:02:40.184244   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:40.184252   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:40.184311   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:40.221968   62139 cri.go:89] found id: ""
	I0416 01:02:40.221991   62139 logs.go:276] 0 containers: []
	W0416 01:02:40.221998   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:40.222007   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:40.222088   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:40.265621   62139 cri.go:89] found id: ""
	I0416 01:02:40.265643   62139 logs.go:276] 0 containers: []
	W0416 01:02:40.265650   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:40.265657   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:40.265723   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:40.314121   62139 cri.go:89] found id: ""
	I0416 01:02:40.314152   62139 logs.go:276] 0 containers: []
	W0416 01:02:40.314163   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:40.314170   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:40.314229   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:40.359788   62139 cri.go:89] found id: ""
	I0416 01:02:40.359825   62139 logs.go:276] 0 containers: []
	W0416 01:02:40.359836   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:40.359849   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:40.359863   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:40.431678   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:40.431718   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:40.449847   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:40.449877   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:40.524271   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:40.524297   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:40.524309   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:40.601398   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:40.601433   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:39.720606   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:41.721437   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:39.330974   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:41.830050   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:40.120785   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:42.123996   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:43.145431   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:43.160269   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:43.160338   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:43.196603   62139 cri.go:89] found id: ""
	I0416 01:02:43.196637   62139 logs.go:276] 0 containers: []
	W0416 01:02:43.196648   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:43.196655   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:43.196716   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:43.235863   62139 cri.go:89] found id: ""
	I0416 01:02:43.235893   62139 logs.go:276] 0 containers: []
	W0416 01:02:43.235905   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:43.235911   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:43.235971   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:43.271408   62139 cri.go:89] found id: ""
	I0416 01:02:43.271437   62139 logs.go:276] 0 containers: []
	W0416 01:02:43.271444   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:43.271450   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:43.271512   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:43.310931   62139 cri.go:89] found id: ""
	I0416 01:02:43.310958   62139 logs.go:276] 0 containers: []
	W0416 01:02:43.310965   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:43.310971   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:43.311032   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:43.347472   62139 cri.go:89] found id: ""
	I0416 01:02:43.347502   62139 logs.go:276] 0 containers: []
	W0416 01:02:43.347512   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:43.347520   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:43.347581   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:43.387326   62139 cri.go:89] found id: ""
	I0416 01:02:43.387361   62139 logs.go:276] 0 containers: []
	W0416 01:02:43.387372   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:43.387429   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:43.387506   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:43.425099   62139 cri.go:89] found id: ""
	I0416 01:02:43.425122   62139 logs.go:276] 0 containers: []
	W0416 01:02:43.425130   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:43.425141   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:43.425208   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:43.461364   62139 cri.go:89] found id: ""
	I0416 01:02:43.461397   62139 logs.go:276] 0 containers: []
	W0416 01:02:43.461408   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:43.461419   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:43.461434   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:43.514520   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:43.514556   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:43.528740   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:43.528777   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:43.599010   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:43.599035   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:43.599051   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:43.682913   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:43.682959   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:46.231398   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:46.260247   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:46.260338   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:46.304498   62139 cri.go:89] found id: ""
	I0416 01:02:46.304521   62139 logs.go:276] 0 containers: []
	W0416 01:02:46.304528   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:46.304534   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:46.304600   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:46.364055   62139 cri.go:89] found id: ""
	I0416 01:02:46.364081   62139 logs.go:276] 0 containers: []
	W0416 01:02:46.364090   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:46.364098   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:46.364167   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:46.412395   62139 cri.go:89] found id: ""
	I0416 01:02:46.412437   62139 logs.go:276] 0 containers: []
	W0416 01:02:46.412475   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:46.412510   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:46.412584   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:46.453669   62139 cri.go:89] found id: ""
	I0416 01:02:46.453698   62139 logs.go:276] 0 containers: []
	W0416 01:02:46.453709   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:46.453716   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:46.453766   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:46.490667   62139 cri.go:89] found id: ""
	I0416 01:02:46.490699   62139 logs.go:276] 0 containers: []
	W0416 01:02:46.490709   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:46.490715   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:46.490766   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:46.529405   62139 cri.go:89] found id: ""
	I0416 01:02:46.529443   62139 logs.go:276] 0 containers: []
	W0416 01:02:46.529460   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:46.529467   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:46.529527   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:46.565359   62139 cri.go:89] found id: ""
	I0416 01:02:46.565384   62139 logs.go:276] 0 containers: []
	W0416 01:02:46.565391   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:46.565396   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:46.565451   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:46.609381   62139 cri.go:89] found id: ""
	I0416 01:02:46.609406   62139 logs.go:276] 0 containers: []
	W0416 01:02:46.609413   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:46.609421   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:46.609432   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:46.663080   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:46.663112   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:46.677303   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:46.677338   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:46.750134   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:46.750163   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:46.750175   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:46.829395   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:46.829434   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:43.721477   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:46.220462   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:43.831829   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:46.329333   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:44.619712   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:46.621271   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:49.374356   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:49.390674   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:49.390753   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:49.427968   62139 cri.go:89] found id: ""
	I0416 01:02:49.427993   62139 logs.go:276] 0 containers: []
	W0416 01:02:49.428000   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:49.428005   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:49.428058   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:49.461821   62139 cri.go:89] found id: ""
	I0416 01:02:49.461850   62139 logs.go:276] 0 containers: []
	W0416 01:02:49.461857   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:49.461863   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:49.461918   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:49.496305   62139 cri.go:89] found id: ""
	I0416 01:02:49.496356   62139 logs.go:276] 0 containers: []
	W0416 01:02:49.496364   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:49.496369   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:49.496429   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:49.536096   62139 cri.go:89] found id: ""
	I0416 01:02:49.536122   62139 logs.go:276] 0 containers: []
	W0416 01:02:49.536129   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:49.536134   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:49.536194   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:49.572078   62139 cri.go:89] found id: ""
	I0416 01:02:49.572106   62139 logs.go:276] 0 containers: []
	W0416 01:02:49.572115   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:49.572122   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:49.572181   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:49.607803   62139 cri.go:89] found id: ""
	I0416 01:02:49.607835   62139 logs.go:276] 0 containers: []
	W0416 01:02:49.607847   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:49.607861   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:49.607915   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:49.651245   62139 cri.go:89] found id: ""
	I0416 01:02:49.651272   62139 logs.go:276] 0 containers: []
	W0416 01:02:49.651280   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:49.651285   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:49.651332   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:49.693587   62139 cri.go:89] found id: ""
	I0416 01:02:49.693612   62139 logs.go:276] 0 containers: []
	W0416 01:02:49.693622   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:49.693632   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:49.693646   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:49.750003   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:49.750032   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:49.764447   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:49.764472   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:49.844739   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:49.844764   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:49.844780   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:49.924260   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:49.924294   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:48.220753   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:50.220986   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:48.330946   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:50.829409   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:49.120516   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:51.619516   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:52.467399   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:52.481656   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:52.481729   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:52.518506   62139 cri.go:89] found id: ""
	I0416 01:02:52.518531   62139 logs.go:276] 0 containers: []
	W0416 01:02:52.518537   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:52.518544   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:52.518599   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:52.554799   62139 cri.go:89] found id: ""
	I0416 01:02:52.554820   62139 logs.go:276] 0 containers: []
	W0416 01:02:52.554827   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:52.554832   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:52.554888   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:52.597236   62139 cri.go:89] found id: ""
	I0416 01:02:52.597265   62139 logs.go:276] 0 containers: []
	W0416 01:02:52.597272   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:52.597278   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:52.597335   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:52.635544   62139 cri.go:89] found id: ""
	I0416 01:02:52.635567   62139 logs.go:276] 0 containers: []
	W0416 01:02:52.635578   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:52.635585   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:52.635639   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:52.672715   62139 cri.go:89] found id: ""
	I0416 01:02:52.672739   62139 logs.go:276] 0 containers: []
	W0416 01:02:52.672746   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:52.672751   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:52.672808   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:52.711600   62139 cri.go:89] found id: ""
	I0416 01:02:52.711631   62139 logs.go:276] 0 containers: []
	W0416 01:02:52.711640   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:52.711648   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:52.711718   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:52.750372   62139 cri.go:89] found id: ""
	I0416 01:02:52.750405   62139 logs.go:276] 0 containers: []
	W0416 01:02:52.750416   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:52.750423   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:52.750486   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:52.786651   62139 cri.go:89] found id: ""
	I0416 01:02:52.786678   62139 logs.go:276] 0 containers: []
	W0416 01:02:52.786688   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:52.786698   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:52.786712   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:52.840262   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:52.840296   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:52.854734   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:52.854762   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:52.931182   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:52.931211   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:52.931226   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:53.007023   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:53.007061   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:55.548305   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:55.562483   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:55.562562   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:55.599480   62139 cri.go:89] found id: ""
	I0416 01:02:55.599504   62139 logs.go:276] 0 containers: []
	W0416 01:02:55.599511   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:55.599517   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:55.599573   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:55.636832   62139 cri.go:89] found id: ""
	I0416 01:02:55.636862   62139 logs.go:276] 0 containers: []
	W0416 01:02:55.636873   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:55.636879   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:55.636940   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:55.676211   62139 cri.go:89] found id: ""
	I0416 01:02:55.676240   62139 logs.go:276] 0 containers: []
	W0416 01:02:55.676250   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:55.676256   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:55.676318   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:55.713498   62139 cri.go:89] found id: ""
	I0416 01:02:55.713527   62139 logs.go:276] 0 containers: []
	W0416 01:02:55.713537   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:55.713544   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:55.713604   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:55.754239   62139 cri.go:89] found id: ""
	I0416 01:02:55.754276   62139 logs.go:276] 0 containers: []
	W0416 01:02:55.754284   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:55.754301   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:55.754355   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:55.792073   62139 cri.go:89] found id: ""
	I0416 01:02:55.792106   62139 logs.go:276] 0 containers: []
	W0416 01:02:55.792117   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:55.792125   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:55.792191   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:55.829635   62139 cri.go:89] found id: ""
	I0416 01:02:55.829665   62139 logs.go:276] 0 containers: []
	W0416 01:02:55.829676   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:55.829683   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:55.829742   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:55.876417   62139 cri.go:89] found id: ""
	I0416 01:02:55.876443   62139 logs.go:276] 0 containers: []
	W0416 01:02:55.876450   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:55.876458   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:55.876471   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:55.926670   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:55.926707   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:55.941660   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:55.941696   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:56.018776   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:56.018806   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:56.018820   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:56.097335   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:56.097378   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:52.720703   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:55.221614   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:52.830970   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:55.329886   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:53.620969   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:56.122135   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:58.642188   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:58.655537   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:58.655605   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:58.692091   62139 cri.go:89] found id: ""
	I0416 01:02:58.692116   62139 logs.go:276] 0 containers: []
	W0416 01:02:58.692124   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:58.692129   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:58.692191   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:58.729434   62139 cri.go:89] found id: ""
	I0416 01:02:58.729461   62139 logs.go:276] 0 containers: []
	W0416 01:02:58.729472   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:58.729491   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:58.729568   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:58.765879   62139 cri.go:89] found id: ""
	I0416 01:02:58.765907   62139 logs.go:276] 0 containers: []
	W0416 01:02:58.765916   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:58.765924   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:58.765987   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:58.802285   62139 cri.go:89] found id: ""
	I0416 01:02:58.802323   62139 logs.go:276] 0 containers: []
	W0416 01:02:58.802334   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:58.802342   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:58.802399   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:58.841357   62139 cri.go:89] found id: ""
	I0416 01:02:58.841385   62139 logs.go:276] 0 containers: []
	W0416 01:02:58.841396   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:58.841403   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:58.841464   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:58.876982   62139 cri.go:89] found id: ""
	I0416 01:02:58.877022   62139 logs.go:276] 0 containers: []
	W0416 01:02:58.877032   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:58.877040   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:58.877108   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:58.915563   62139 cri.go:89] found id: ""
	I0416 01:02:58.915596   62139 logs.go:276] 0 containers: []
	W0416 01:02:58.915607   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:58.915614   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:58.915683   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:58.951268   62139 cri.go:89] found id: ""
	I0416 01:02:58.951303   62139 logs.go:276] 0 containers: []
	W0416 01:02:58.951313   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:58.951324   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:58.951341   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:59.004673   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:59.004710   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:59.019393   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:59.019423   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:59.091587   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:59.091612   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:59.091632   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:59.169623   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:59.169655   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:01.710597   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:01.724394   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:01.724463   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:01.761577   62139 cri.go:89] found id: ""
	I0416 01:03:01.761605   62139 logs.go:276] 0 containers: []
	W0416 01:03:01.761616   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:01.761624   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:01.761684   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:01.797467   62139 cri.go:89] found id: ""
	I0416 01:03:01.797498   62139 logs.go:276] 0 containers: []
	W0416 01:03:01.797508   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:01.797515   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:01.797582   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:01.839910   62139 cri.go:89] found id: ""
	I0416 01:03:01.839940   62139 logs.go:276] 0 containers: []
	W0416 01:03:01.839950   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:01.839958   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:01.840019   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:01.879572   62139 cri.go:89] found id: ""
	I0416 01:03:01.879599   62139 logs.go:276] 0 containers: []
	W0416 01:03:01.879611   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:01.879617   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:01.879664   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:01.920190   62139 cri.go:89] found id: ""
	I0416 01:03:01.920222   62139 logs.go:276] 0 containers: []
	W0416 01:03:01.920234   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:01.920242   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:01.920300   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:01.957389   62139 cri.go:89] found id: ""
	I0416 01:03:01.957418   62139 logs.go:276] 0 containers: []
	W0416 01:03:01.957428   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:01.957436   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:01.957507   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:01.998730   62139 cri.go:89] found id: ""
	I0416 01:03:01.998754   62139 logs.go:276] 0 containers: []
	W0416 01:03:01.998762   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:01.998767   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:01.998812   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:02.036062   62139 cri.go:89] found id: ""
	I0416 01:03:02.036094   62139 logs.go:276] 0 containers: []
	W0416 01:03:02.036103   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:02.036112   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:02.036125   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:02.089109   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:02.089149   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:57.720792   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:00.219899   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:02.220048   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:57.832016   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:00.328867   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:02.330238   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:58.620416   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:01.121496   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:02.103312   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:02.103342   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:02.174034   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:02.174056   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:02.174069   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:02.249526   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:02.249555   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:04.795314   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:04.808294   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:04.808367   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:04.848795   62139 cri.go:89] found id: ""
	I0416 01:03:04.848825   62139 logs.go:276] 0 containers: []
	W0416 01:03:04.848849   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:04.848857   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:04.848928   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:04.886442   62139 cri.go:89] found id: ""
	I0416 01:03:04.886477   62139 logs.go:276] 0 containers: []
	W0416 01:03:04.886488   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:04.886502   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:04.886572   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:04.929183   62139 cri.go:89] found id: ""
	I0416 01:03:04.929215   62139 logs.go:276] 0 containers: []
	W0416 01:03:04.929226   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:04.929234   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:04.929297   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:04.965134   62139 cri.go:89] found id: ""
	I0416 01:03:04.965172   62139 logs.go:276] 0 containers: []
	W0416 01:03:04.965184   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:04.965191   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:04.965247   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:05.001346   62139 cri.go:89] found id: ""
	I0416 01:03:05.001373   62139 logs.go:276] 0 containers: []
	W0416 01:03:05.001381   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:05.001387   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:05.001434   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:05.039181   62139 cri.go:89] found id: ""
	I0416 01:03:05.039210   62139 logs.go:276] 0 containers: []
	W0416 01:03:05.039219   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:05.039224   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:05.039289   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:05.073451   62139 cri.go:89] found id: ""
	I0416 01:03:05.073479   62139 logs.go:276] 0 containers: []
	W0416 01:03:05.073487   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:05.073494   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:05.073555   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:05.108466   62139 cri.go:89] found id: ""
	I0416 01:03:05.108495   62139 logs.go:276] 0 containers: []
	W0416 01:03:05.108510   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:05.108520   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:05.108537   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:05.162725   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:05.162765   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:05.178152   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:05.178183   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:05.255122   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:05.255147   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:05.255161   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:05.331274   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:05.331309   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:04.220320   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:06.220475   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:04.331381   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:06.830143   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:03.620275   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:06.121293   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:07.882980   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:07.896311   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:07.896372   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:07.934632   62139 cri.go:89] found id: ""
	I0416 01:03:07.934661   62139 logs.go:276] 0 containers: []
	W0416 01:03:07.934671   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:07.934677   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:07.934745   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:07.971463   62139 cri.go:89] found id: ""
	I0416 01:03:07.971495   62139 logs.go:276] 0 containers: []
	W0416 01:03:07.971511   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:07.971518   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:07.971581   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:08.006808   62139 cri.go:89] found id: ""
	I0416 01:03:08.006839   62139 logs.go:276] 0 containers: []
	W0416 01:03:08.006847   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:08.006852   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:08.006912   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:08.043051   62139 cri.go:89] found id: ""
	I0416 01:03:08.043082   62139 logs.go:276] 0 containers: []
	W0416 01:03:08.043089   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:08.043095   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:08.043155   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:08.078602   62139 cri.go:89] found id: ""
	I0416 01:03:08.078638   62139 logs.go:276] 0 containers: []
	W0416 01:03:08.078647   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:08.078655   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:08.078724   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:08.115264   62139 cri.go:89] found id: ""
	I0416 01:03:08.115293   62139 logs.go:276] 0 containers: []
	W0416 01:03:08.115303   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:08.115311   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:08.115378   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:08.152782   62139 cri.go:89] found id: ""
	I0416 01:03:08.152814   62139 logs.go:276] 0 containers: []
	W0416 01:03:08.152821   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:08.152826   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:08.152875   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:08.193484   62139 cri.go:89] found id: ""
	I0416 01:03:08.193506   62139 logs.go:276] 0 containers: []
	W0416 01:03:08.193513   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:08.193522   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:08.193532   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:08.248796   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:08.248831   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:08.266054   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:08.266083   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:08.343470   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:08.343501   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:08.343515   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:08.430335   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:08.430383   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:10.972540   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:10.986911   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:10.986984   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:11.024905   62139 cri.go:89] found id: ""
	I0416 01:03:11.024939   62139 logs.go:276] 0 containers: []
	W0416 01:03:11.024951   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:11.024958   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:11.025011   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:11.058629   62139 cri.go:89] found id: ""
	I0416 01:03:11.058654   62139 logs.go:276] 0 containers: []
	W0416 01:03:11.058662   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:11.058667   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:11.058721   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:11.093277   62139 cri.go:89] found id: ""
	I0416 01:03:11.093308   62139 logs.go:276] 0 containers: []
	W0416 01:03:11.093317   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:11.093325   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:11.093386   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:11.131883   62139 cri.go:89] found id: ""
	I0416 01:03:11.131912   62139 logs.go:276] 0 containers: []
	W0416 01:03:11.131924   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:11.131934   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:11.132004   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:11.175142   62139 cri.go:89] found id: ""
	I0416 01:03:11.175169   62139 logs.go:276] 0 containers: []
	W0416 01:03:11.175179   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:11.175186   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:11.175236   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:11.209985   62139 cri.go:89] found id: ""
	I0416 01:03:11.210020   62139 logs.go:276] 0 containers: []
	W0416 01:03:11.210031   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:11.210039   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:11.210110   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:11.246086   62139 cri.go:89] found id: ""
	I0416 01:03:11.246119   62139 logs.go:276] 0 containers: []
	W0416 01:03:11.246129   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:11.246137   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:11.246199   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:11.286979   62139 cri.go:89] found id: ""
	I0416 01:03:11.287007   62139 logs.go:276] 0 containers: []
	W0416 01:03:11.287019   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:11.287037   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:11.287051   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:11.364522   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:11.364557   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:11.410343   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:11.410375   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:11.459671   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:11.459703   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:11.476163   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:11.476193   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:11.549544   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:08.220881   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:10.720607   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:09.329882   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:11.330570   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:08.620817   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:11.120789   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:14.050433   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:14.065375   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:14.065431   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:14.105548   62139 cri.go:89] found id: ""
	I0416 01:03:14.105571   62139 logs.go:276] 0 containers: []
	W0416 01:03:14.105579   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:14.105583   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:14.105644   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:14.146891   62139 cri.go:89] found id: ""
	I0416 01:03:14.146915   62139 logs.go:276] 0 containers: []
	W0416 01:03:14.146922   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:14.146927   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:14.146972   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:14.183905   62139 cri.go:89] found id: ""
	I0416 01:03:14.183937   62139 logs.go:276] 0 containers: []
	W0416 01:03:14.183948   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:14.183954   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:14.184002   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:14.219878   62139 cri.go:89] found id: ""
	I0416 01:03:14.219905   62139 logs.go:276] 0 containers: []
	W0416 01:03:14.219915   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:14.219922   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:14.219978   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:14.256284   62139 cri.go:89] found id: ""
	I0416 01:03:14.256310   62139 logs.go:276] 0 containers: []
	W0416 01:03:14.256317   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:14.256323   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:14.256381   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:14.295932   62139 cri.go:89] found id: ""
	I0416 01:03:14.295958   62139 logs.go:276] 0 containers: []
	W0416 01:03:14.295966   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:14.295971   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:14.296025   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:14.333202   62139 cri.go:89] found id: ""
	I0416 01:03:14.333226   62139 logs.go:276] 0 containers: []
	W0416 01:03:14.333235   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:14.333242   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:14.333302   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:14.370034   62139 cri.go:89] found id: ""
	I0416 01:03:14.370059   62139 logs.go:276] 0 containers: []
	W0416 01:03:14.370066   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:14.370074   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:14.370092   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:14.424626   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:14.424669   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:14.441842   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:14.441872   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:14.515899   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:14.515926   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:14.515944   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:14.599956   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:14.599991   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:12.720896   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:15.220260   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:13.829944   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:16.328971   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:13.621084   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:16.120767   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:17.157610   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:17.171737   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:17.171800   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:17.214327   62139 cri.go:89] found id: ""
	I0416 01:03:17.214354   62139 logs.go:276] 0 containers: []
	W0416 01:03:17.214364   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:17.214371   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:17.214433   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:17.255896   62139 cri.go:89] found id: ""
	I0416 01:03:17.255924   62139 logs.go:276] 0 containers: []
	W0416 01:03:17.255939   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:17.255946   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:17.256005   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:17.298470   62139 cri.go:89] found id: ""
	I0416 01:03:17.298498   62139 logs.go:276] 0 containers: []
	W0416 01:03:17.298512   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:17.298520   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:17.298580   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:17.338810   62139 cri.go:89] found id: ""
	I0416 01:03:17.338834   62139 logs.go:276] 0 containers: []
	W0416 01:03:17.338842   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:17.338847   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:17.338899   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:17.375980   62139 cri.go:89] found id: ""
	I0416 01:03:17.376012   62139 logs.go:276] 0 containers: []
	W0416 01:03:17.376019   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:17.376024   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:17.376076   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:17.411374   62139 cri.go:89] found id: ""
	I0416 01:03:17.411400   62139 logs.go:276] 0 containers: []
	W0416 01:03:17.411408   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:17.411413   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:17.411463   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:17.452916   62139 cri.go:89] found id: ""
	I0416 01:03:17.452951   62139 logs.go:276] 0 containers: []
	W0416 01:03:17.452962   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:17.452969   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:17.453037   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:17.492459   62139 cri.go:89] found id: ""
	I0416 01:03:17.492489   62139 logs.go:276] 0 containers: []
	W0416 01:03:17.492500   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:17.492512   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:17.492527   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:17.541780   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:17.541814   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:17.558831   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:17.558867   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:17.635332   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:17.635351   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:17.635362   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:17.715778   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:17.715809   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:20.260621   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:20.274721   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:20.274791   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:20.311965   62139 cri.go:89] found id: ""
	I0416 01:03:20.311991   62139 logs.go:276] 0 containers: []
	W0416 01:03:20.312002   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:20.312009   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:20.312069   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:20.350316   62139 cri.go:89] found id: ""
	I0416 01:03:20.350346   62139 logs.go:276] 0 containers: []
	W0416 01:03:20.350356   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:20.350363   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:20.350414   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:20.404666   62139 cri.go:89] found id: ""
	I0416 01:03:20.404692   62139 logs.go:276] 0 containers: []
	W0416 01:03:20.404700   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:20.404705   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:20.404753   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:20.441223   62139 cri.go:89] found id: ""
	I0416 01:03:20.441254   62139 logs.go:276] 0 containers: []
	W0416 01:03:20.441267   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:20.441275   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:20.441340   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:20.480535   62139 cri.go:89] found id: ""
	I0416 01:03:20.480596   62139 logs.go:276] 0 containers: []
	W0416 01:03:20.480606   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:20.480613   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:20.480680   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:20.517520   62139 cri.go:89] found id: ""
	I0416 01:03:20.517543   62139 logs.go:276] 0 containers: []
	W0416 01:03:20.517550   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:20.517556   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:20.517614   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:20.556067   62139 cri.go:89] found id: ""
	I0416 01:03:20.556097   62139 logs.go:276] 0 containers: []
	W0416 01:03:20.556107   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:20.556114   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:20.556177   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:20.594901   62139 cri.go:89] found id: ""
	I0416 01:03:20.594932   62139 logs.go:276] 0 containers: []
	W0416 01:03:20.594939   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:20.594947   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:20.594958   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:20.673759   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:20.673795   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:20.721407   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:20.721443   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:20.772957   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:20.772989   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:20.787902   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:20.787932   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:20.863445   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:17.721415   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:20.221042   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:18.329421   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:20.329949   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:22.330009   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:18.122678   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:20.621127   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:22.621692   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:23.363637   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:23.377916   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:23.377991   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:23.415642   62139 cri.go:89] found id: ""
	I0416 01:03:23.415671   62139 logs.go:276] 0 containers: []
	W0416 01:03:23.415679   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:23.415685   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:23.415732   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:23.452788   62139 cri.go:89] found id: ""
	I0416 01:03:23.452812   62139 logs.go:276] 0 containers: []
	W0416 01:03:23.452819   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:23.452829   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:23.452878   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:23.488758   62139 cri.go:89] found id: ""
	I0416 01:03:23.488785   62139 logs.go:276] 0 containers: []
	W0416 01:03:23.488794   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:23.488801   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:23.488862   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:23.526542   62139 cri.go:89] found id: ""
	I0416 01:03:23.526574   62139 logs.go:276] 0 containers: []
	W0416 01:03:23.526584   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:23.526592   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:23.526661   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:23.562481   62139 cri.go:89] found id: ""
	I0416 01:03:23.562505   62139 logs.go:276] 0 containers: []
	W0416 01:03:23.562512   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:23.562518   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:23.562579   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:23.599119   62139 cri.go:89] found id: ""
	I0416 01:03:23.599145   62139 logs.go:276] 0 containers: []
	W0416 01:03:23.599155   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:23.599162   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:23.599241   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:23.642445   62139 cri.go:89] found id: ""
	I0416 01:03:23.642474   62139 logs.go:276] 0 containers: []
	W0416 01:03:23.642485   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:23.642492   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:23.642557   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:23.678091   62139 cri.go:89] found id: ""
	I0416 01:03:23.678113   62139 logs.go:276] 0 containers: []
	W0416 01:03:23.678121   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:23.678129   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:23.678140   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:23.731668   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:23.731703   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:23.746413   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:23.746444   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:23.821885   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:23.821908   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:23.821923   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:23.901836   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:23.901872   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:26.444935   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:26.459240   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:26.459308   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:26.499208   62139 cri.go:89] found id: ""
	I0416 01:03:26.499237   62139 logs.go:276] 0 containers: []
	W0416 01:03:26.499249   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:26.499256   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:26.499318   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:26.536220   62139 cri.go:89] found id: ""
	I0416 01:03:26.536258   62139 logs.go:276] 0 containers: []
	W0416 01:03:26.536270   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:26.536277   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:26.536342   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:26.576217   62139 cri.go:89] found id: ""
	I0416 01:03:26.576241   62139 logs.go:276] 0 containers: []
	W0416 01:03:26.576249   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:26.576254   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:26.576314   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:26.612343   62139 cri.go:89] found id: ""
	I0416 01:03:26.612369   62139 logs.go:276] 0 containers: []
	W0416 01:03:26.612378   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:26.612385   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:26.612448   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:26.651323   62139 cri.go:89] found id: ""
	I0416 01:03:26.651353   62139 logs.go:276] 0 containers: []
	W0416 01:03:26.651365   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:26.651384   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:26.651453   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:26.688844   62139 cri.go:89] found id: ""
	I0416 01:03:26.688874   62139 logs.go:276] 0 containers: []
	W0416 01:03:26.688885   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:26.688891   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:26.688969   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:26.724362   62139 cri.go:89] found id: ""
	I0416 01:03:26.724387   62139 logs.go:276] 0 containers: []
	W0416 01:03:26.724395   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:26.724401   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:26.724455   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:26.767766   62139 cri.go:89] found id: ""
	I0416 01:03:26.767795   62139 logs.go:276] 0 containers: []
	W0416 01:03:26.767806   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:26.767816   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:26.767837   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:26.788269   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:26.788297   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:26.884802   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:26.884822   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:26.884834   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:26.964007   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:26.964044   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:27.003719   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:27.003745   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:22.720420   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:24.720865   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:26.721369   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:24.828766   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:26.830222   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:25.119674   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:27.620689   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:29.563218   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:29.579014   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:29.579078   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:29.620739   62139 cri.go:89] found id: ""
	I0416 01:03:29.620769   62139 logs.go:276] 0 containers: []
	W0416 01:03:29.620780   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:29.620787   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:29.620850   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:29.658165   62139 cri.go:89] found id: ""
	I0416 01:03:29.658192   62139 logs.go:276] 0 containers: []
	W0416 01:03:29.658199   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:29.658205   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:29.658252   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:29.693893   62139 cri.go:89] found id: ""
	I0416 01:03:29.693921   62139 logs.go:276] 0 containers: []
	W0416 01:03:29.693929   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:29.693935   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:29.693985   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:29.737808   62139 cri.go:89] found id: ""
	I0416 01:03:29.737836   62139 logs.go:276] 0 containers: []
	W0416 01:03:29.737846   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:29.737851   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:29.737910   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:29.777382   62139 cri.go:89] found id: ""
	I0416 01:03:29.777408   62139 logs.go:276] 0 containers: []
	W0416 01:03:29.777416   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:29.777422   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:29.777473   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:29.815633   62139 cri.go:89] found id: ""
	I0416 01:03:29.815659   62139 logs.go:276] 0 containers: []
	W0416 01:03:29.815668   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:29.815682   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:29.815743   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:29.858790   62139 cri.go:89] found id: ""
	I0416 01:03:29.858820   62139 logs.go:276] 0 containers: []
	W0416 01:03:29.858831   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:29.858839   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:29.858899   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:29.897085   62139 cri.go:89] found id: ""
	I0416 01:03:29.897120   62139 logs.go:276] 0 containers: []
	W0416 01:03:29.897131   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:29.897142   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:29.897169   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:29.951231   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:29.951266   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:29.965539   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:29.965565   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:30.045138   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:30.045170   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:30.045186   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:30.120575   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:30.120606   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:29.220073   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:31.221145   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:29.328625   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:31.329903   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:29.621401   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:32.120604   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:32.662210   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:32.675833   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:32.675903   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:32.712104   62139 cri.go:89] found id: ""
	I0416 01:03:32.712129   62139 logs.go:276] 0 containers: []
	W0416 01:03:32.712136   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:32.712141   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:32.712198   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:32.749617   62139 cri.go:89] found id: ""
	I0416 01:03:32.749644   62139 logs.go:276] 0 containers: []
	W0416 01:03:32.749652   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:32.749658   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:32.749723   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:32.785069   62139 cri.go:89] found id: ""
	I0416 01:03:32.785100   62139 logs.go:276] 0 containers: []
	W0416 01:03:32.785110   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:32.785116   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:32.785191   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:32.825871   62139 cri.go:89] found id: ""
	I0416 01:03:32.825912   62139 logs.go:276] 0 containers: []
	W0416 01:03:32.825922   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:32.825928   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:32.826008   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:32.868294   62139 cri.go:89] found id: ""
	I0416 01:03:32.868321   62139 logs.go:276] 0 containers: []
	W0416 01:03:32.868328   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:32.868334   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:32.868401   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:32.907764   62139 cri.go:89] found id: ""
	I0416 01:03:32.907789   62139 logs.go:276] 0 containers: []
	W0416 01:03:32.907796   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:32.907802   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:32.907870   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:32.946112   62139 cri.go:89] found id: ""
	I0416 01:03:32.946137   62139 logs.go:276] 0 containers: []
	W0416 01:03:32.946144   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:32.946155   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:32.946215   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:32.985343   62139 cri.go:89] found id: ""
	I0416 01:03:32.985374   62139 logs.go:276] 0 containers: []
	W0416 01:03:32.985385   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:32.985395   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:32.985415   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:33.063117   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:33.063154   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:33.113739   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:33.113773   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:33.163466   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:33.163508   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:33.178368   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:33.178397   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:33.259509   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:35.760004   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:35.774161   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:35.774237   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:35.812551   62139 cri.go:89] found id: ""
	I0416 01:03:35.812580   62139 logs.go:276] 0 containers: []
	W0416 01:03:35.812589   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:35.812594   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:35.812642   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:35.853134   62139 cri.go:89] found id: ""
	I0416 01:03:35.853177   62139 logs.go:276] 0 containers: []
	W0416 01:03:35.853187   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:35.853195   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:35.853255   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:35.894210   62139 cri.go:89] found id: ""
	I0416 01:03:35.894246   62139 logs.go:276] 0 containers: []
	W0416 01:03:35.894254   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:35.894259   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:35.894330   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:35.928986   62139 cri.go:89] found id: ""
	I0416 01:03:35.929010   62139 logs.go:276] 0 containers: []
	W0416 01:03:35.929019   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:35.929027   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:35.929090   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:35.970688   62139 cri.go:89] found id: ""
	I0416 01:03:35.970712   62139 logs.go:276] 0 containers: []
	W0416 01:03:35.970719   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:35.970725   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:35.970783   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:36.005744   62139 cri.go:89] found id: ""
	I0416 01:03:36.005771   62139 logs.go:276] 0 containers: []
	W0416 01:03:36.005778   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:36.005783   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:36.005829   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:36.044932   62139 cri.go:89] found id: ""
	I0416 01:03:36.044966   62139 logs.go:276] 0 containers: []
	W0416 01:03:36.044977   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:36.044984   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:36.045051   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:36.080488   62139 cri.go:89] found id: ""
	I0416 01:03:36.080516   62139 logs.go:276] 0 containers: []
	W0416 01:03:36.080527   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:36.080538   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:36.080552   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:36.132956   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:36.133000   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:36.147070   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:36.147097   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:36.226640   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:36.226670   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:36.226684   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:36.307205   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:36.307249   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:33.221952   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:35.720745   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:33.828768   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:35.830452   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:34.120695   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:36.619511   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:38.849685   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:38.863817   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:38.863897   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:38.902418   62139 cri.go:89] found id: ""
	I0416 01:03:38.902445   62139 logs.go:276] 0 containers: []
	W0416 01:03:38.902455   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:38.902462   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:38.902533   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:38.937811   62139 cri.go:89] found id: ""
	I0416 01:03:38.937838   62139 logs.go:276] 0 containers: []
	W0416 01:03:38.937845   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:38.937850   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:38.937900   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:38.972380   62139 cri.go:89] found id: ""
	I0416 01:03:38.972403   62139 logs.go:276] 0 containers: []
	W0416 01:03:38.972411   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:38.972416   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:38.972466   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:39.007572   62139 cri.go:89] found id: ""
	I0416 01:03:39.007595   62139 logs.go:276] 0 containers: []
	W0416 01:03:39.007603   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:39.007608   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:39.007651   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:39.049355   62139 cri.go:89] found id: ""
	I0416 01:03:39.049382   62139 logs.go:276] 0 containers: []
	W0416 01:03:39.049391   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:39.049398   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:39.049459   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:39.084535   62139 cri.go:89] found id: ""
	I0416 01:03:39.084565   62139 logs.go:276] 0 containers: []
	W0416 01:03:39.084574   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:39.084581   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:39.084645   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:39.125027   62139 cri.go:89] found id: ""
	I0416 01:03:39.125055   62139 logs.go:276] 0 containers: []
	W0416 01:03:39.125073   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:39.125080   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:39.125136   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:39.164506   62139 cri.go:89] found id: ""
	I0416 01:03:39.164537   62139 logs.go:276] 0 containers: []
	W0416 01:03:39.164547   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:39.164557   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:39.164573   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:39.203447   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:39.203483   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:39.259087   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:39.259122   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:39.273611   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:39.273637   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:39.352372   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:39.352392   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:39.352407   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:41.938575   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:41.952937   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:41.953019   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:41.990771   62139 cri.go:89] found id: ""
	I0416 01:03:41.990802   62139 logs.go:276] 0 containers: []
	W0416 01:03:41.990811   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:41.990819   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:41.990881   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:42.027338   62139 cri.go:89] found id: ""
	I0416 01:03:42.027367   62139 logs.go:276] 0 containers: []
	W0416 01:03:42.027374   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:42.027379   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:42.027431   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:42.068348   62139 cri.go:89] found id: ""
	I0416 01:03:42.068377   62139 logs.go:276] 0 containers: []
	W0416 01:03:42.068387   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:42.068394   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:42.068457   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:38.220198   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:40.220481   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:42.221383   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:38.330729   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:40.831615   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:38.620021   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:40.620641   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:42.620702   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:42.108157   62139 cri.go:89] found id: ""
	I0416 01:03:42.108181   62139 logs.go:276] 0 containers: []
	W0416 01:03:42.108187   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:42.108193   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:42.108244   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:42.149749   62139 cri.go:89] found id: ""
	I0416 01:03:42.149770   62139 logs.go:276] 0 containers: []
	W0416 01:03:42.149777   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:42.149784   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:42.149848   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:42.185322   62139 cri.go:89] found id: ""
	I0416 01:03:42.185349   62139 logs.go:276] 0 containers: []
	W0416 01:03:42.185360   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:42.185368   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:42.185435   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:42.224334   62139 cri.go:89] found id: ""
	I0416 01:03:42.224359   62139 logs.go:276] 0 containers: []
	W0416 01:03:42.224370   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:42.224376   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:42.224435   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:42.263466   62139 cri.go:89] found id: ""
	I0416 01:03:42.263494   62139 logs.go:276] 0 containers: []
	W0416 01:03:42.263502   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:42.263509   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:42.263522   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:42.315106   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:42.315139   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:42.329394   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:42.329425   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:42.405267   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:42.405305   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:42.405321   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:42.486126   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:42.486168   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:45.027718   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:45.042387   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:45.042453   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:45.080790   62139 cri.go:89] found id: ""
	I0416 01:03:45.080814   62139 logs.go:276] 0 containers: []
	W0416 01:03:45.080823   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:45.080829   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:45.080875   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:45.121278   62139 cri.go:89] found id: ""
	I0416 01:03:45.121306   62139 logs.go:276] 0 containers: []
	W0416 01:03:45.121317   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:45.121324   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:45.121383   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:45.158076   62139 cri.go:89] found id: ""
	I0416 01:03:45.158099   62139 logs.go:276] 0 containers: []
	W0416 01:03:45.158107   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:45.158116   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:45.158162   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:45.195577   62139 cri.go:89] found id: ""
	I0416 01:03:45.195608   62139 logs.go:276] 0 containers: []
	W0416 01:03:45.195619   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:45.195627   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:45.195685   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:45.239230   62139 cri.go:89] found id: ""
	I0416 01:03:45.239257   62139 logs.go:276] 0 containers: []
	W0416 01:03:45.239267   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:45.239275   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:45.239326   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:45.279193   62139 cri.go:89] found id: ""
	I0416 01:03:45.279220   62139 logs.go:276] 0 containers: []
	W0416 01:03:45.279227   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:45.279232   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:45.279280   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:45.314876   62139 cri.go:89] found id: ""
	I0416 01:03:45.314908   62139 logs.go:276] 0 containers: []
	W0416 01:03:45.314916   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:45.314922   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:45.314970   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:45.351699   62139 cri.go:89] found id: ""
	I0416 01:03:45.351723   62139 logs.go:276] 0 containers: []
	W0416 01:03:45.351730   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:45.351738   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:45.351750   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:45.392681   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:45.392708   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:45.446564   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:45.446605   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:45.460541   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:45.460564   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:45.535287   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:45.535319   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:45.535334   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:44.720088   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:46.721511   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:43.329413   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:45.330644   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:45.123357   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:47.621806   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:48.117476   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:48.133341   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:48.133402   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:48.171230   62139 cri.go:89] found id: ""
	I0416 01:03:48.171263   62139 logs.go:276] 0 containers: []
	W0416 01:03:48.171273   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:48.171280   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:48.171337   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:48.206188   62139 cri.go:89] found id: ""
	I0416 01:03:48.206218   62139 logs.go:276] 0 containers: []
	W0416 01:03:48.206229   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:48.206236   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:48.206294   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:48.242349   62139 cri.go:89] found id: ""
	I0416 01:03:48.242377   62139 logs.go:276] 0 containers: []
	W0416 01:03:48.242384   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:48.242389   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:48.242437   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:48.278324   62139 cri.go:89] found id: ""
	I0416 01:03:48.278347   62139 logs.go:276] 0 containers: []
	W0416 01:03:48.278355   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:48.278360   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:48.278406   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:48.315727   62139 cri.go:89] found id: ""
	I0416 01:03:48.315753   62139 logs.go:276] 0 containers: []
	W0416 01:03:48.315763   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:48.315770   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:48.315828   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:48.354146   62139 cri.go:89] found id: ""
	I0416 01:03:48.354169   62139 logs.go:276] 0 containers: []
	W0416 01:03:48.354176   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:48.354182   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:48.354242   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:48.393951   62139 cri.go:89] found id: ""
	I0416 01:03:48.393989   62139 logs.go:276] 0 containers: []
	W0416 01:03:48.394000   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:48.394007   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:48.394081   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:48.431849   62139 cri.go:89] found id: ""
	I0416 01:03:48.431887   62139 logs.go:276] 0 containers: []
	W0416 01:03:48.431895   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:48.431903   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:48.431917   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:48.446210   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:48.446242   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:48.517459   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:48.517485   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:48.517500   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:48.596320   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:48.596356   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:48.639700   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:48.639733   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:51.197396   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:51.211803   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:51.211889   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:51.250768   62139 cri.go:89] found id: ""
	I0416 01:03:51.250793   62139 logs.go:276] 0 containers: []
	W0416 01:03:51.250802   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:51.250810   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:51.250872   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:51.291389   62139 cri.go:89] found id: ""
	I0416 01:03:51.291415   62139 logs.go:276] 0 containers: []
	W0416 01:03:51.291421   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:51.291429   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:51.291478   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:51.332466   62139 cri.go:89] found id: ""
	I0416 01:03:51.332490   62139 logs.go:276] 0 containers: []
	W0416 01:03:51.332499   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:51.332504   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:51.332549   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:51.367731   62139 cri.go:89] found id: ""
	I0416 01:03:51.367759   62139 logs.go:276] 0 containers: []
	W0416 01:03:51.367767   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:51.367773   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:51.367829   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:51.400567   62139 cri.go:89] found id: ""
	I0416 01:03:51.400599   62139 logs.go:276] 0 containers: []
	W0416 01:03:51.400609   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:51.400616   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:51.400679   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:51.433561   62139 cri.go:89] found id: ""
	I0416 01:03:51.433590   62139 logs.go:276] 0 containers: []
	W0416 01:03:51.433598   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:51.433608   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:51.433666   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:51.469136   62139 cri.go:89] found id: ""
	I0416 01:03:51.469179   62139 logs.go:276] 0 containers: []
	W0416 01:03:51.469189   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:51.469196   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:51.469255   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:51.504410   62139 cri.go:89] found id: ""
	I0416 01:03:51.504442   62139 logs.go:276] 0 containers: []
	W0416 01:03:51.504452   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:51.504462   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:51.504480   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:51.557420   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:51.557449   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:51.571481   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:51.571506   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:51.648722   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:51.648744   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:51.648755   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:51.728945   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:51.728978   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:49.221614   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:51.721798   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:47.829985   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:50.329419   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:52.329909   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:49.622776   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:52.120080   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:54.272503   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:54.286573   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:54.286646   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:54.321084   62139 cri.go:89] found id: ""
	I0416 01:03:54.321115   62139 logs.go:276] 0 containers: []
	W0416 01:03:54.321125   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:54.321133   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:54.321208   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:54.366333   62139 cri.go:89] found id: ""
	I0416 01:03:54.366364   62139 logs.go:276] 0 containers: []
	W0416 01:03:54.366374   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:54.366380   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:54.366437   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:54.406267   62139 cri.go:89] found id: ""
	I0416 01:03:54.406317   62139 logs.go:276] 0 containers: []
	W0416 01:03:54.406328   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:54.406336   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:54.406405   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:54.446853   62139 cri.go:89] found id: ""
	I0416 01:03:54.446883   62139 logs.go:276] 0 containers: []
	W0416 01:03:54.446894   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:54.446901   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:54.446956   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:54.487658   62139 cri.go:89] found id: ""
	I0416 01:03:54.487683   62139 logs.go:276] 0 containers: []
	W0416 01:03:54.487690   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:54.487696   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:54.487753   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:54.530189   62139 cri.go:89] found id: ""
	I0416 01:03:54.530216   62139 logs.go:276] 0 containers: []
	W0416 01:03:54.530226   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:54.530232   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:54.530289   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:54.571317   62139 cri.go:89] found id: ""
	I0416 01:03:54.571341   62139 logs.go:276] 0 containers: []
	W0416 01:03:54.571349   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:54.571354   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:54.571416   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:54.612432   62139 cri.go:89] found id: ""
	I0416 01:03:54.612458   62139 logs.go:276] 0 containers: []
	W0416 01:03:54.612467   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:54.612478   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:54.612493   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:54.666599   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:54.666629   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:54.680880   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:54.680915   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:54.757365   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:54.757386   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:54.757398   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:54.834436   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:54.834468   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:54.219690   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:56.220753   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:54.332950   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:56.830167   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:54.621002   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:56.622452   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:57.405516   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:57.420694   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:57.420773   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:57.460338   62139 cri.go:89] found id: ""
	I0416 01:03:57.460367   62139 logs.go:276] 0 containers: []
	W0416 01:03:57.460374   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:57.460381   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:57.460442   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:57.498121   62139 cri.go:89] found id: ""
	I0416 01:03:57.498150   62139 logs.go:276] 0 containers: []
	W0416 01:03:57.498160   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:57.498167   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:57.498228   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:57.536959   62139 cri.go:89] found id: ""
	I0416 01:03:57.536989   62139 logs.go:276] 0 containers: []
	W0416 01:03:57.537005   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:57.537014   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:57.537077   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:57.575633   62139 cri.go:89] found id: ""
	I0416 01:03:57.575662   62139 logs.go:276] 0 containers: []
	W0416 01:03:57.575673   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:57.575680   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:57.575743   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:57.614459   62139 cri.go:89] found id: ""
	I0416 01:03:57.614491   62139 logs.go:276] 0 containers: []
	W0416 01:03:57.614501   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:57.614509   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:57.614568   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:57.657078   62139 cri.go:89] found id: ""
	I0416 01:03:57.657109   62139 logs.go:276] 0 containers: []
	W0416 01:03:57.657120   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:57.657127   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:57.657204   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:57.693882   62139 cri.go:89] found id: ""
	I0416 01:03:57.693904   62139 logs.go:276] 0 containers: []
	W0416 01:03:57.693911   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:57.693922   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:57.693969   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:57.731283   62139 cri.go:89] found id: ""
	I0416 01:03:57.731312   62139 logs.go:276] 0 containers: []
	W0416 01:03:57.731320   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:57.731327   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:57.731338   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:57.782618   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:57.782656   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:57.796763   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:57.796794   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:57.869629   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:57.869652   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:57.869665   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:57.948859   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:57.948892   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
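Each log-gathering cycle above issues the same host commands over SSH: crictl to list control-plane containers by name (all empty here), journalctl for the kubelet and CRI-O units, dmesg for kernel warnings, and the bundled kubectl for describe nodes, which fails because nothing is serving on localhost:8443 yet. The cycle can be reproduced on the node with the commands exactly as shown in the log:

sudo crictl ps -a --quiet --name=kube-apiserver   # empty: no apiserver container exists yet
sudo journalctl -u kubelet -n 400
sudo journalctl -u crio -n 400
sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
  --kubeconfig=/var/lib/minikube/kubeconfig        # refused until 8443 comes up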
	I0416 01:04:00.487682   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:04:00.501095   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:04:00.501182   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:04:00.537902   62139 cri.go:89] found id: ""
	I0416 01:04:00.537931   62139 logs.go:276] 0 containers: []
	W0416 01:04:00.537939   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:04:00.537945   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:04:00.537994   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:04:00.574164   62139 cri.go:89] found id: ""
	I0416 01:04:00.574203   62139 logs.go:276] 0 containers: []
	W0416 01:04:00.574214   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:04:00.574222   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:04:00.574287   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:04:00.629592   62139 cri.go:89] found id: ""
	I0416 01:04:00.629615   62139 logs.go:276] 0 containers: []
	W0416 01:04:00.629622   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:04:00.629627   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:04:00.629679   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:04:00.672102   62139 cri.go:89] found id: ""
	I0416 01:04:00.672127   62139 logs.go:276] 0 containers: []
	W0416 01:04:00.672134   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:04:00.672141   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:04:00.672201   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:04:00.715040   62139 cri.go:89] found id: ""
	I0416 01:04:00.715064   62139 logs.go:276] 0 containers: []
	W0416 01:04:00.715072   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:04:00.715078   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:04:00.715139   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:04:00.751113   62139 cri.go:89] found id: ""
	I0416 01:04:00.751137   62139 logs.go:276] 0 containers: []
	W0416 01:04:00.751146   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:04:00.751152   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:04:00.751204   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:04:00.787613   62139 cri.go:89] found id: ""
	I0416 01:04:00.787644   62139 logs.go:276] 0 containers: []
	W0416 01:04:00.787653   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:04:00.787660   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:04:00.787721   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:04:00.824244   62139 cri.go:89] found id: ""
	I0416 01:04:00.824271   62139 logs.go:276] 0 containers: []
	W0416 01:04:00.824280   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:04:00.824291   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:04:00.824304   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:04:00.899977   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:04:00.900014   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:04:00.900029   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:04:00.982317   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:04:00.982350   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:04:01.026354   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:04:01.026393   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:04:01.080393   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:04:01.080441   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:58.720894   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:00.720961   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:59.329460   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:01.330171   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:59.119259   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:01.619026   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:03.595966   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:04:03.609190   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:04:03.609253   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:04:03.647151   62139 cri.go:89] found id: ""
	I0416 01:04:03.647183   62139 logs.go:276] 0 containers: []
	W0416 01:04:03.647197   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:04:03.647203   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:04:03.647250   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:04:03.685211   62139 cri.go:89] found id: ""
	I0416 01:04:03.685239   62139 logs.go:276] 0 containers: []
	W0416 01:04:03.685248   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:04:03.685254   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:04:03.685303   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:04:03.720928   62139 cri.go:89] found id: ""
	I0416 01:04:03.720949   62139 logs.go:276] 0 containers: []
	W0416 01:04:03.720956   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:04:03.720961   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:04:03.721035   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:04:03.759179   62139 cri.go:89] found id: ""
	I0416 01:04:03.759210   62139 logs.go:276] 0 containers: []
	W0416 01:04:03.759220   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:04:03.759228   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:04:03.759290   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:04:03.795670   62139 cri.go:89] found id: ""
	I0416 01:04:03.795700   62139 logs.go:276] 0 containers: []
	W0416 01:04:03.795710   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:04:03.795717   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:04:03.795785   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:04:03.832944   62139 cri.go:89] found id: ""
	I0416 01:04:03.832971   62139 logs.go:276] 0 containers: []
	W0416 01:04:03.832980   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:04:03.832988   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:04:03.833053   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:04:03.869211   62139 cri.go:89] found id: ""
	I0416 01:04:03.869238   62139 logs.go:276] 0 containers: []
	W0416 01:04:03.869248   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:04:03.869256   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:04:03.869317   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:04:03.905859   62139 cri.go:89] found id: ""
	I0416 01:04:03.905888   62139 logs.go:276] 0 containers: []
	W0416 01:04:03.905896   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:04:03.905904   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:04:03.905915   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:04:03.957057   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:04:03.957088   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:04:03.972309   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:04:03.972344   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:04:04.049927   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:04:04.049950   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:04:04.049965   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:04:04.136395   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:04:04.136435   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:04:06.676667   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:04:06.690062   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:04:06.690125   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:04:06.733734   62139 cri.go:89] found id: ""
	I0416 01:04:06.733758   62139 logs.go:276] 0 containers: []
	W0416 01:04:06.733773   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:04:06.733782   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:04:06.733835   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:04:06.773112   62139 cri.go:89] found id: ""
	I0416 01:04:06.773140   62139 logs.go:276] 0 containers: []
	W0416 01:04:06.773147   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:04:06.773152   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:04:06.773231   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:04:06.812786   62139 cri.go:89] found id: ""
	I0416 01:04:06.812809   62139 logs.go:276] 0 containers: []
	W0416 01:04:06.812817   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:04:06.812822   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:04:06.812870   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:04:06.853995   62139 cri.go:89] found id: ""
	I0416 01:04:06.854022   62139 logs.go:276] 0 containers: []
	W0416 01:04:06.854029   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:04:06.854034   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:04:06.854088   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:04:06.893809   62139 cri.go:89] found id: ""
	I0416 01:04:06.893841   62139 logs.go:276] 0 containers: []
	W0416 01:04:06.893848   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:04:06.893853   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:04:06.893909   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:04:06.929389   62139 cri.go:89] found id: ""
	I0416 01:04:06.929419   62139 logs.go:276] 0 containers: []
	W0416 01:04:06.929430   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:04:06.929437   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:04:06.929518   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:04:06.968278   62139 cri.go:89] found id: ""
	I0416 01:04:06.968303   62139 logs.go:276] 0 containers: []
	W0416 01:04:06.968311   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:04:06.968316   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:04:06.968364   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:04:07.018932   62139 cri.go:89] found id: ""
	I0416 01:04:07.018965   62139 logs.go:276] 0 containers: []
	W0416 01:04:07.018976   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:04:07.018989   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:04:07.019003   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:04:07.083611   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:04:07.083645   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:04:03.220314   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:05.720941   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:03.830050   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:06.329416   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:03.619482   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:05.620393   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:07.110126   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:04:07.110152   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:04:07.186262   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:04:07.186290   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:04:07.186305   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:04:07.263139   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:04:07.263170   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:04:09.807489   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:04:09.822045   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:04:09.822110   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:04:09.867444   62139 cri.go:89] found id: ""
	I0416 01:04:09.867469   62139 logs.go:276] 0 containers: []
	W0416 01:04:09.867480   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:04:09.867487   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:04:09.867538   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:04:09.904280   62139 cri.go:89] found id: ""
	I0416 01:04:09.904312   62139 logs.go:276] 0 containers: []
	W0416 01:04:09.904323   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:04:09.904330   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:04:09.904389   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:04:09.941066   62139 cri.go:89] found id: ""
	I0416 01:04:09.941091   62139 logs.go:276] 0 containers: []
	W0416 01:04:09.941099   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:04:09.941107   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:04:09.941189   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:04:09.975739   62139 cri.go:89] found id: ""
	I0416 01:04:09.975767   62139 logs.go:276] 0 containers: []
	W0416 01:04:09.975777   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:04:09.975785   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:04:09.975844   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:04:10.011414   62139 cri.go:89] found id: ""
	I0416 01:04:10.011444   62139 logs.go:276] 0 containers: []
	W0416 01:04:10.011454   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:04:10.011461   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:04:10.011528   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:04:10.045670   62139 cri.go:89] found id: ""
	I0416 01:04:10.045695   62139 logs.go:276] 0 containers: []
	W0416 01:04:10.045704   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:04:10.045711   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:04:10.045777   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:04:10.082320   62139 cri.go:89] found id: ""
	I0416 01:04:10.082352   62139 logs.go:276] 0 containers: []
	W0416 01:04:10.082361   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:04:10.082368   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:04:10.082428   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:04:10.120453   62139 cri.go:89] found id: ""
	I0416 01:04:10.120482   62139 logs.go:276] 0 containers: []
	W0416 01:04:10.120492   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:04:10.120501   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:04:10.120515   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:04:10.200213   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:04:10.200251   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:04:10.251709   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:04:10.251742   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:04:10.307348   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:04:10.307382   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:04:10.321293   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:04:10.321319   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:04:10.401361   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:04:08.220488   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:10.221408   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:08.331985   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:10.829244   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:08.119800   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:10.121093   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:12.126420   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:12.901763   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:04:12.916308   62139 kubeadm.go:591] duration metric: took 4m4.703830076s to restartPrimaryControlPlane
	W0416 01:04:12.916384   62139 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0416 01:04:12.916416   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0416 01:04:12.720462   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:14.721516   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:17.220364   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:12.830409   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:15.330184   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:14.620714   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:16.622203   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:17.897436   62139 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.980993606s)
	I0416 01:04:17.897592   62139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 01:04:17.914655   62139 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 01:04:17.927482   62139 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 01:04:17.940210   62139 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 01:04:17.940233   62139 kubeadm.go:156] found existing configuration files:
	
	I0416 01:04:17.940274   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 01:04:17.951037   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 01:04:17.951106   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 01:04:17.962341   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 01:04:17.972436   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 01:04:17.972500   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 01:04:17.983198   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 01:04:17.992856   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 01:04:17.992912   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 01:04:18.003122   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 01:04:18.014064   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 01:04:18.014117   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
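The stale-config check above greps each kubeconfig under /etc/kubernetes for the expected https://control-plane.minikube.internal:8443 server entry and removes the file when the entry is absent; after the kubeadm reset none of the files exist, so every grep exits 2 and the rm is a no-op. Written as a loop, the same cleanup is roughly (file list and endpoint taken from the log):

for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f \
    || sudo rm -f /etc/kubernetes/$f
done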
	I0416 01:04:18.024854   62139 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 01:04:18.101381   62139 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0416 01:04:18.101436   62139 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 01:04:18.246529   62139 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 01:04:18.246687   62139 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 01:04:18.246802   62139 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 01:04:18.456847   62139 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 01:04:18.458980   62139 out.go:204]   - Generating certificates and keys ...
	I0416 01:04:18.459096   62139 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 01:04:18.459190   62139 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 01:04:18.459294   62139 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0416 01:04:18.459381   62139 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0416 01:04:18.459473   62139 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0416 01:04:18.459548   62139 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0416 01:04:18.459631   62139 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0416 01:04:18.459721   62139 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0416 01:04:18.459822   62139 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0416 01:04:18.460281   62139 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0416 01:04:18.460387   62139 kubeadm.go:309] [certs] Using the existing "sa" key
	I0416 01:04:18.460475   62139 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 01:04:18.564910   62139 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 01:04:18.806406   62139 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 01:04:18.890124   62139 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 01:04:19.046415   62139 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 01:04:19.063159   62139 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 01:04:19.063301   62139 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 01:04:19.063415   62139 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 01:04:19.229066   62139 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 01:04:19.231110   62139 out.go:204]   - Booting up control plane ...
	I0416 01:04:19.231246   62139 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 01:04:19.248833   62139 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 01:04:19.250340   62139 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 01:04:19.251664   62139 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 01:04:19.254678   62139 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 01:04:19.221976   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:21.720239   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:17.830011   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:18.323271   61500 pod_ready.go:81] duration metric: took 4m0.000449424s for pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace to be "Ready" ...
	E0416 01:04:18.323300   61500 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace to be "Ready" (will not retry!)
	I0416 01:04:18.323318   61500 pod_ready.go:38] duration metric: took 4m9.009725319s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:04:18.323357   61500 kubeadm.go:591] duration metric: took 4m19.656264138s to restartPrimaryControlPlane
	W0416 01:04:18.323420   61500 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0416 01:04:18.323449   61500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0416 01:04:19.122802   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:21.621389   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:24.227649   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:26.720896   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:24.119577   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:26.620166   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:29.219937   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:31.220697   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:28.622399   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:31.119279   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:33.221240   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:35.221536   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:33.124909   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:35.620718   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:37.720528   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:40.220531   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:38.120415   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:40.121126   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:42.620161   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:42.719946   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:44.720203   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:47.219782   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:44.620806   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:47.119479   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:47.613243   62747 pod_ready.go:81] duration metric: took 4m0.000098534s for pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace to be "Ready" ...
	E0416 01:04:47.613279   62747 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace to be "Ready" (will not retry!)
	I0416 01:04:47.613297   62747 pod_ready.go:38] duration metric: took 4m12.544704519s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:04:47.613327   62747 kubeadm.go:591] duration metric: took 4m20.76891948s to restartPrimaryControlPlane
	W0416 01:04:47.613387   62747 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0416 01:04:47.613410   62747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0416 01:04:50.224993   61500 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.901526458s)
	I0416 01:04:50.225057   61500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 01:04:50.241083   61500 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 01:04:50.252468   61500 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 01:04:50.263721   61500 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 01:04:50.263744   61500 kubeadm.go:156] found existing configuration files:
	
	I0416 01:04:50.263786   61500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 01:04:50.274550   61500 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 01:04:50.274620   61500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 01:04:50.285019   61500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 01:04:50.295079   61500 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 01:04:50.295151   61500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 01:04:50.306424   61500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 01:04:50.317221   61500 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 01:04:50.317286   61500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 01:04:50.327783   61500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 01:04:50.338144   61500 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 01:04:50.338213   61500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 01:04:50.349262   61500 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 01:04:50.410467   61500 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0-rc.2
	I0416 01:04:50.410597   61500 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 01:04:50.565288   61500 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 01:04:50.565442   61500 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 01:04:50.565580   61500 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 01:04:50.783173   61500 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 01:04:50.785219   61500 out.go:204]   - Generating certificates and keys ...
	I0416 01:04:50.785339   61500 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 01:04:50.785427   61500 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 01:04:50.785526   61500 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0416 01:04:50.785620   61500 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0416 01:04:50.785745   61500 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0416 01:04:50.785847   61500 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0416 01:04:50.785951   61500 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0416 01:04:50.786037   61500 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0416 01:04:50.786156   61500 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0416 01:04:50.786279   61500 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0416 01:04:50.786341   61500 kubeadm.go:309] [certs] Using the existing "sa" key
	I0416 01:04:50.786425   61500 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 01:04:50.868738   61500 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 01:04:51.024628   61500 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0416 01:04:51.304801   61500 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 01:04:51.485803   61500 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 01:04:51.614330   61500 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 01:04:51.615043   61500 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 01:04:51.617465   61500 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 01:04:49.720594   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:51.721464   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:51.619398   61500 out.go:204]   - Booting up control plane ...
	I0416 01:04:51.619519   61500 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 01:04:51.619637   61500 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 01:04:51.619717   61500 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 01:04:51.640756   61500 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 01:04:51.643264   61500 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 01:04:51.643617   61500 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 01:04:51.796506   61500 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0416 01:04:51.796640   61500 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0416 01:04:54.220965   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:56.222571   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:52.798698   61500 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002359416s
	I0416 01:04:52.798798   61500 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0416 01:04:57.802689   61500 kubeadm.go:309] [api-check] The API server is healthy after 5.003967397s
	I0416 01:04:57.816580   61500 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0416 01:04:57.840465   61500 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0416 01:04:57.879611   61500 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0416 01:04:57.879906   61500 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-572602 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0416 01:04:57.895211   61500 kubeadm.go:309] [bootstrap-token] Using token: w1qt2t.vu77oqcsegb1grvk
	I0416 01:04:57.896829   61500 out.go:204]   - Configuring RBAC rules ...
	I0416 01:04:57.896958   61500 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0416 01:04:57.905289   61500 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0416 01:04:57.916967   61500 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0416 01:04:57.922660   61500 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0416 01:04:57.926143   61500 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0416 01:04:57.935222   61500 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0416 01:04:58.215180   61500 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0416 01:04:58.656120   61500 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0416 01:04:59.209811   61500 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0416 01:04:59.211274   61500 kubeadm.go:309] 
	I0416 01:04:59.211354   61500 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0416 01:04:59.211390   61500 kubeadm.go:309] 
	I0416 01:04:59.211489   61500 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0416 01:04:59.211512   61500 kubeadm.go:309] 
	I0416 01:04:59.211556   61500 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0416 01:04:59.211626   61500 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0416 01:04:59.211695   61500 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0416 01:04:59.211707   61500 kubeadm.go:309] 
	I0416 01:04:59.211779   61500 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0416 01:04:59.211789   61500 kubeadm.go:309] 
	I0416 01:04:59.211853   61500 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0416 01:04:59.211921   61500 kubeadm.go:309] 
	I0416 01:04:59.212030   61500 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0416 01:04:59.212165   61500 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0416 01:04:59.212269   61500 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0416 01:04:59.212280   61500 kubeadm.go:309] 
	I0416 01:04:59.212407   61500 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0416 01:04:59.212516   61500 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0416 01:04:59.212525   61500 kubeadm.go:309] 
	I0416 01:04:59.212656   61500 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token w1qt2t.vu77oqcsegb1grvk \
	I0416 01:04:59.212835   61500 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b25013486716312e69a135916990864bd549ec06035b8322dd39250241b50fde \
	I0416 01:04:59.212880   61500 kubeadm.go:309] 	--control-plane 
	I0416 01:04:59.212894   61500 kubeadm.go:309] 
	I0416 01:04:59.212996   61500 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0416 01:04:59.213007   61500 kubeadm.go:309] 
	I0416 01:04:59.213111   61500 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token w1qt2t.vu77oqcsegb1grvk \
	I0416 01:04:59.213278   61500 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b25013486716312e69a135916990864bd549ec06035b8322dd39250241b50fde 
	I0416 01:04:59.213435   61500 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
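The init output for the no-preload cluster ends with the join commands plus a warning that the kubelet unit is not enabled. On the node, acting on that warning and spot-checking the new API server would look roughly like this (a sketch; the binary path, kubeconfig, and port 8443 come from the log, and /readyz is the standard aggregated health endpoint on this Kubernetes version):

sudo systemctl enable kubelet.service          # as the kubeadm warning suggests
sudo systemctl is-active --quiet kubelet && echo kubelet running
sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get --raw /readyz \
  --kubeconfig=/var/lib/minikube/kubeconfig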
	I0416 01:04:59.213460   61500 cni.go:84] Creating CNI manager for ""
	I0416 01:04:59.213477   61500 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 01:04:59.215397   61500 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0416 01:04:59.255478   62139 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0416 01:04:59.256524   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:04:59.256807   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
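Meanwhile the v1.20.0 init in the 62139 run is stuck at the kubelet check: the healthz probe on port 10248 is refused, meaning the kubelet never started serving after "[kubelet-start] Starting the kubelet". The probe kubeadm performs, plus the usual follow-ups on the node, are roughly (the curl is the exact call from the message above; the systemd/journal queries are standard):

curl -sSL http://localhost:10248/healthz     # what kubeadm's kubelet-check does
sudo systemctl is-active kubelet             # is the unit running at all?
sudo journalctl -u kubelet -n 400            # why it keeps failing to come up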
	I0416 01:04:58.720339   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:05:01.220968   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:59.216764   61500 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0416 01:04:59.230134   61500 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
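The bridge CNI step creates /etc/cni/net.d and copies a 496-byte 1-k8s.conflist onto the node. The exact file is not reproduced in the log; a generic bridge-plugin conflist of the shape this step writes, with an assumed 10.244.0.0/16 pod subnet, would look roughly like:

sudo mkdir -p /etc/cni/net.d
sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF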
	I0416 01:04:59.250739   61500 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0416 01:04:59.250773   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:04:59.250775   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-572602 minikube.k8s.io/updated_at=2024_04_16T01_04_59_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=e98a202b00bb48daab8bbee2f0c7bcf1556f8388 minikube.k8s.io/name=no-preload-572602 minikube.k8s.io/primary=true
	I0416 01:04:59.462907   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:04:59.462915   61500 ops.go:34] apiserver oom_adj: -16
	I0416 01:04:59.962977   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:00.463142   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:00.963871   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:01.463866   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:01.963356   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:02.463729   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:04.257472   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:05:04.257756   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:05:03.720762   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:05:05.721421   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:05:02.963816   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:03.463370   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:03.963655   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:04.463681   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:04.963387   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:05.462926   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:05.963659   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:06.463091   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:06.963504   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:07.463783   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:07.963037   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:08.463212   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:08.963443   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:09.463179   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:09.963188   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:10.463264   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:10.963863   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:11.463051   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:11.591367   61500 kubeadm.go:1107] duration metric: took 12.340665724s to wait for elevateKubeSystemPrivileges
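The burst of identical `kubectl get sa default` calls above is minikube polling for the default service account while it waits to elevate kube-system privileges; the loop ends once the call succeeds (about 12.3s here). A minimal shell sketch of the same poll, reusing the binary and kubeconfig paths from the log:

    # Sketch of the polling loop shown above (minikube drives this from Go, not a script).
    until sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5   # the log shows roughly 500ms between attempts
    done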
	W0416 01:05:11.591410   61500 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0416 01:05:11.591425   61500 kubeadm.go:393] duration metric: took 5m12.980123227s to StartCluster
	I0416 01:05:11.591451   61500 settings.go:142] acquiring lock: {Name:mk6e42a297b4f7bfb79727f203ae36d752cbb6a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:05:11.591559   61500 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0416 01:05:11.593498   61500 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/kubeconfig: {Name:mkbb3b028de7d57df8335e83f6dfa1b0eacb2fb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:05:11.593838   61500 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.121 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0416 01:05:11.595572   61500 out.go:177] * Verifying Kubernetes components...
	I0416 01:05:11.593961   61500 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0416 01:05:11.594060   61500 config.go:182] Loaded profile config "no-preload-572602": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0416 01:05:11.597038   61500 addons.go:69] Setting default-storageclass=true in profile "no-preload-572602"
	I0416 01:05:11.597047   61500 addons.go:69] Setting metrics-server=true in profile "no-preload-572602"
	I0416 01:05:11.597077   61500 addons.go:234] Setting addon metrics-server=true in "no-preload-572602"
	I0416 01:05:11.597081   61500 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-572602"
	W0416 01:05:11.597084   61500 addons.go:243] addon metrics-server should already be in state true
	I0416 01:05:11.597168   61500 host.go:66] Checking if "no-preload-572602" exists ...
	I0416 01:05:11.597042   61500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 01:05:11.597038   61500 addons.go:69] Setting storage-provisioner=true in profile "no-preload-572602"
	I0416 01:05:11.597274   61500 addons.go:234] Setting addon storage-provisioner=true in "no-preload-572602"
	W0416 01:05:11.597281   61500 addons.go:243] addon storage-provisioner should already be in state true
	I0416 01:05:11.597300   61500 host.go:66] Checking if "no-preload-572602" exists ...
	I0416 01:05:11.597516   61500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:11.597563   61500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:11.597590   61500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:11.597621   61500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:11.597621   61500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:11.597684   61500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:11.617344   61500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46065
	I0416 01:05:11.617833   61500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46345
	I0416 01:05:11.617853   61500 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:11.618040   61500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32847
	I0416 01:05:11.618170   61500 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:11.618385   61500 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:11.618539   61500 main.go:141] libmachine: Using API Version  1
	I0416 01:05:11.618564   61500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:11.618682   61500 main.go:141] libmachine: Using API Version  1
	I0416 01:05:11.618708   61500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:11.618786   61500 main.go:141] libmachine: Using API Version  1
	I0416 01:05:11.618806   61500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:11.619020   61500 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:11.619035   61500 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:11.619145   61500 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:11.619371   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetState
	I0416 01:05:11.619629   61500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:11.619663   61500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:11.619683   61500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:11.619715   61500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:11.622758   61500 addons.go:234] Setting addon default-storageclass=true in "no-preload-572602"
	W0416 01:05:11.622784   61500 addons.go:243] addon default-storageclass should already be in state true
	I0416 01:05:11.622814   61500 host.go:66] Checking if "no-preload-572602" exists ...
	I0416 01:05:11.623148   61500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:11.623182   61500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:11.640851   61500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44015
	I0416 01:05:11.641427   61500 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:11.642008   61500 main.go:141] libmachine: Using API Version  1
	I0416 01:05:11.642028   61500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:11.642429   61500 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:11.642635   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetState
	I0416 01:05:11.643204   61500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41753
	I0416 01:05:11.643239   61500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38953
	I0416 01:05:11.643578   61500 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:11.643673   61500 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:11.644133   61500 main.go:141] libmachine: Using API Version  1
	I0416 01:05:11.644150   61500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:11.644398   61500 main.go:141] libmachine: Using API Version  1
	I0416 01:05:11.644409   61500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:11.644508   61500 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:11.644786   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetState
	I0416 01:05:11.644823   61500 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:11.645630   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 01:05:11.645797   61500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:11.645824   61500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:11.648522   61500 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0416 01:05:11.646649   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 01:05:11.650173   61500 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0416 01:05:11.650185   61500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0416 01:05:11.650206   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 01:05:11.652524   61500 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 01:05:07.721798   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:05:08.214615   61267 pod_ready.go:81] duration metric: took 4m0.001005317s for pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace to be "Ready" ...
	E0416 01:05:08.214650   61267 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace to be "Ready" (will not retry!)
	I0416 01:05:08.214688   61267 pod_ready.go:38] duration metric: took 4m14.521894608s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:05:08.214750   61267 kubeadm.go:591] duration metric: took 4m22.563492336s to restartPrimaryControlPlane
	W0416 01:05:08.214821   61267 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0416 01:05:08.214857   61267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0416 01:05:11.654173   61500 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 01:05:11.654189   61500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0416 01:05:11.654207   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 01:05:11.654021   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 01:05:11.654488   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 01:05:11.654524   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 01:05:11.654823   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 01:05:11.655016   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 01:05:11.655159   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 01:05:11.655331   61500 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/no-preload-572602/id_rsa Username:docker}
	I0416 01:05:11.657706   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 01:05:11.658193   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 01:05:11.658214   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 01:05:11.658388   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 01:05:11.658585   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 01:05:11.658761   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 01:05:11.658937   61500 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/no-preload-572602/id_rsa Username:docker}
	I0416 01:05:11.669485   61500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34717
	I0416 01:05:11.669878   61500 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:11.670340   61500 main.go:141] libmachine: Using API Version  1
	I0416 01:05:11.670352   61500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:11.670714   61500 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:11.670887   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetState
	I0416 01:05:11.672571   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 01:05:11.672888   61500 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0416 01:05:11.672900   61500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0416 01:05:11.672912   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 01:05:11.675816   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 01:05:11.676163   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 01:05:11.676182   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 01:05:11.676335   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 01:05:11.676513   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 01:05:11.676657   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 01:05:11.676799   61500 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/no-preload-572602/id_rsa Username:docker}
	I0416 01:05:11.822229   61500 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 01:05:11.850495   61500 node_ready.go:35] waiting up to 6m0s for node "no-preload-572602" to be "Ready" ...
	I0416 01:05:11.868828   61500 node_ready.go:49] node "no-preload-572602" has status "Ready":"True"
	I0416 01:05:11.868852   61500 node_ready.go:38] duration metric: took 18.327813ms for node "no-preload-572602" to be "Ready" ...
	I0416 01:05:11.868860   61500 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:05:11.877018   61500 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:11.884190   61500 pod_ready.go:92] pod "etcd-no-preload-572602" in "kube-system" namespace has status "Ready":"True"
	I0416 01:05:11.884221   61500 pod_ready.go:81] duration metric: took 7.173699ms for pod "etcd-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:11.884234   61500 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:11.901639   61500 pod_ready.go:92] pod "kube-apiserver-no-preload-572602" in "kube-system" namespace has status "Ready":"True"
	I0416 01:05:11.901672   61500 pod_ready.go:81] duration metric: took 17.430111ms for pod "kube-apiserver-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:11.901684   61500 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:11.911839   61500 pod_ready.go:92] pod "kube-controller-manager-no-preload-572602" in "kube-system" namespace has status "Ready":"True"
	I0416 01:05:11.911871   61500 pod_ready.go:81] duration metric: took 10.178219ms for pod "kube-controller-manager-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:11.911885   61500 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:11.936265   61500 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0416 01:05:11.936293   61500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0416 01:05:11.939406   61500 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0416 01:05:11.942233   61500 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 01:05:11.963094   61500 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0416 01:05:11.963123   61500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0416 01:05:12.027316   61500 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0416 01:05:12.027341   61500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0416 01:05:12.150413   61500 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0416 01:05:12.387284   61500 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:12.387310   61500 main.go:141] libmachine: (no-preload-572602) Calling .Close
	I0416 01:05:12.387640   61500 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:12.387665   61500 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:12.387674   61500 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:12.387682   61500 main.go:141] libmachine: (no-preload-572602) Calling .Close
	I0416 01:05:12.387973   61500 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:12.387991   61500 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:12.395148   61500 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:12.395179   61500 main.go:141] libmachine: (no-preload-572602) Calling .Close
	I0416 01:05:12.395459   61500 main.go:141] libmachine: (no-preload-572602) DBG | Closing plugin on server side
	I0416 01:05:12.395488   61500 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:12.395508   61500 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:12.930331   61500 pod_ready.go:92] pod "kube-scheduler-no-preload-572602" in "kube-system" namespace has status "Ready":"True"
	I0416 01:05:12.930362   61500 pod_ready.go:81] duration metric: took 1.01846846s for pod "kube-scheduler-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:12.930373   61500 pod_ready.go:38] duration metric: took 1.061502471s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:05:12.930390   61500 api_server.go:52] waiting for apiserver process to appear ...
	I0416 01:05:12.930454   61500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:05:12.990840   61500 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.048571147s)
	I0416 01:05:12.990905   61500 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:12.990919   61500 main.go:141] libmachine: (no-preload-572602) Calling .Close
	I0416 01:05:12.991246   61500 main.go:141] libmachine: (no-preload-572602) DBG | Closing plugin on server side
	I0416 01:05:12.991309   61500 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:12.991323   61500 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:12.991380   61500 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:12.991391   61500 main.go:141] libmachine: (no-preload-572602) Calling .Close
	I0416 01:05:12.991617   61500 main.go:141] libmachine: (no-preload-572602) DBG | Closing plugin on server side
	I0416 01:05:12.991669   61500 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:12.991690   61500 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:13.719959   61500 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.569495387s)
	I0416 01:05:13.720018   61500 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:13.720023   61500 api_server.go:72] duration metric: took 2.12614679s to wait for apiserver process to appear ...
	I0416 01:05:13.720046   61500 api_server.go:88] waiting for apiserver healthz status ...
	I0416 01:05:13.720066   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 01:05:13.720034   61500 main.go:141] libmachine: (no-preload-572602) Calling .Close
	I0416 01:05:13.720435   61500 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:13.720458   61500 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:13.720469   61500 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:13.720472   61500 main.go:141] libmachine: (no-preload-572602) DBG | Closing plugin on server side
	I0416 01:05:13.720477   61500 main.go:141] libmachine: (no-preload-572602) Calling .Close
	I0416 01:05:13.720670   61500 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:13.720681   61500 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:13.720691   61500 addons.go:470] Verifying addon metrics-server=true in "no-preload-572602"
	I0416 01:05:13.722348   61500 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0416 01:05:13.723686   61500 addons.go:505] duration metric: took 2.129734353s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
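With default-storageclass, storage-provisioner and metrics-server enabled, a typical follow-up check (not part of the captured run; it assumes the upstream deployment name `metrics-server` and label `k8s-app=metrics-server`) would be:

    kubectl --context no-preload-572602 -n kube-system rollout status deployment/metrics-server --timeout=120s
    kubectl --context no-preload-572602 -n kube-system get pods -l k8s-app=metrics-server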
	I0416 01:05:13.764481   61500 api_server.go:279] https://192.168.39.121:8443/healthz returned 200:
	ok
	I0416 01:05:13.771661   61500 api_server.go:141] control plane version: v1.30.0-rc.2
	I0416 01:05:13.771690   61500 api_server.go:131] duration metric: took 51.637739ms to wait for apiserver health ...
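The healthz probe above is a plain GET against the apiserver. A hand-run equivalent, assuming the default minikube certificate layout under ~/.minikube (the test harness keeps these under its own integration directory), looks like:

    curl --cacert ~/.minikube/ca.crt \
         --cert   ~/.minikube/profiles/no-preload-572602/client.crt \
         --key    ~/.minikube/profiles/no-preload-572602/client.key \
         https://192.168.39.121:8443/healthz
    # expected response body: ok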
	I0416 01:05:13.771698   61500 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 01:05:13.812701   61500 system_pods.go:59] 9 kube-system pods found
	I0416 01:05:13.812744   61500 system_pods.go:61] "coredns-7db6d8ff4d-2b5ht" [b8d48a4c-6efd-409a-98be-3ec5bf639470] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 01:05:13.812753   61500 system_pods.go:61] "coredns-7db6d8ff4d-p62sn" [36768eb2-2a22-48e1-b271-f262aa64e014] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 01:05:13.812761   61500 system_pods.go:61] "etcd-no-preload-572602" [c9ed4f86-07f3-48d6-948c-8c4243920512] Running
	I0416 01:05:13.812765   61500 system_pods.go:61] "kube-apiserver-no-preload-572602" [a92513a3-4129-41a2-a603-4a69f4e72041] Running
	I0416 01:05:13.812768   61500 system_pods.go:61] "kube-controller-manager-no-preload-572602" [ce013e5b-5d3c-42de-8a00-c7041288740b] Running
	I0416 01:05:13.812774   61500 system_pods.go:61] "kube-proxy-6cjlc" [2c4d9303-8c08-4385-a6b9-63dda0d9a274] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0416 01:05:13.812777   61500 system_pods.go:61] "kube-scheduler-no-preload-572602" [a9f71ca2-f211-4e6d-9940-4e0af5d4287e] Running
	I0416 01:05:13.812783   61500 system_pods.go:61] "metrics-server-569cc877fc-5j5rc" [3d8f1a41-8e7d-4d1b-9a07-25c8fac3b782] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 01:05:13.812792   61500 system_pods.go:61] "storage-provisioner" [b9ac9c93-0e50-4598-a9c4-a12e4ff14063] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0416 01:05:13.812802   61500 system_pods.go:74] duration metric: took 41.098881ms to wait for pod list to return data ...
	I0416 01:05:13.812811   61500 default_sa.go:34] waiting for default service account to be created ...
	I0416 01:05:13.847288   61500 default_sa.go:45] found service account: "default"
	I0416 01:05:13.847323   61500 default_sa.go:55] duration metric: took 34.500938ms for default service account to be created ...
	I0416 01:05:13.847335   61500 system_pods.go:116] waiting for k8s-apps to be running ...
	I0416 01:05:13.877107   61500 system_pods.go:86] 9 kube-system pods found
	I0416 01:05:13.877150   61500 system_pods.go:89] "coredns-7db6d8ff4d-2b5ht" [b8d48a4c-6efd-409a-98be-3ec5bf639470] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 01:05:13.877175   61500 system_pods.go:89] "coredns-7db6d8ff4d-p62sn" [36768eb2-2a22-48e1-b271-f262aa64e014] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 01:05:13.877185   61500 system_pods.go:89] "etcd-no-preload-572602" [c9ed4f86-07f3-48d6-948c-8c4243920512] Running
	I0416 01:05:13.877194   61500 system_pods.go:89] "kube-apiserver-no-preload-572602" [a92513a3-4129-41a2-a603-4a69f4e72041] Running
	I0416 01:05:13.877200   61500 system_pods.go:89] "kube-controller-manager-no-preload-572602" [ce013e5b-5d3c-42de-8a00-c7041288740b] Running
	I0416 01:05:13.877209   61500 system_pods.go:89] "kube-proxy-6cjlc" [2c4d9303-8c08-4385-a6b9-63dda0d9a274] Running
	I0416 01:05:13.877215   61500 system_pods.go:89] "kube-scheduler-no-preload-572602" [a9f71ca2-f211-4e6d-9940-4e0af5d4287e] Running
	I0416 01:05:13.877224   61500 system_pods.go:89] "metrics-server-569cc877fc-5j5rc" [3d8f1a41-8e7d-4d1b-9a07-25c8fac3b782] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 01:05:13.877237   61500 system_pods.go:89] "storage-provisioner" [b9ac9c93-0e50-4598-a9c4-a12e4ff14063] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0416 01:05:13.877257   61500 retry.go:31] will retry after 239.706522ms: missing components: kube-dns
	I0416 01:05:14.128770   61500 system_pods.go:86] 9 kube-system pods found
	I0416 01:05:14.128814   61500 system_pods.go:89] "coredns-7db6d8ff4d-2b5ht" [b8d48a4c-6efd-409a-98be-3ec5bf639470] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 01:05:14.128827   61500 system_pods.go:89] "coredns-7db6d8ff4d-p62sn" [36768eb2-2a22-48e1-b271-f262aa64e014] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 01:05:14.128836   61500 system_pods.go:89] "etcd-no-preload-572602" [c9ed4f86-07f3-48d6-948c-8c4243920512] Running
	I0416 01:05:14.128850   61500 system_pods.go:89] "kube-apiserver-no-preload-572602" [a92513a3-4129-41a2-a603-4a69f4e72041] Running
	I0416 01:05:14.128857   61500 system_pods.go:89] "kube-controller-manager-no-preload-572602" [ce013e5b-5d3c-42de-8a00-c7041288740b] Running
	I0416 01:05:14.128864   61500 system_pods.go:89] "kube-proxy-6cjlc" [2c4d9303-8c08-4385-a6b9-63dda0d9a274] Running
	I0416 01:05:14.128871   61500 system_pods.go:89] "kube-scheduler-no-preload-572602" [a9f71ca2-f211-4e6d-9940-4e0af5d4287e] Running
	I0416 01:05:14.128885   61500 system_pods.go:89] "metrics-server-569cc877fc-5j5rc" [3d8f1a41-8e7d-4d1b-9a07-25c8fac3b782] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 01:05:14.128893   61500 system_pods.go:89] "storage-provisioner" [b9ac9c93-0e50-4598-a9c4-a12e4ff14063] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0416 01:05:14.128903   61500 system_pods.go:126] duration metric: took 281.561287ms to wait for k8s-apps to be running ...
	I0416 01:05:14.128912   61500 system_svc.go:44] waiting for kubelet service to be running ....
	I0416 01:05:14.128978   61500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 01:05:14.145557   61500 system_svc.go:56] duration metric: took 16.639555ms WaitForService to wait for kubelet
	I0416 01:05:14.145582   61500 kubeadm.go:576] duration metric: took 2.551711031s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 01:05:14.145605   61500 node_conditions.go:102] verifying NodePressure condition ...
	I0416 01:05:14.149984   61500 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 01:05:14.150009   61500 node_conditions.go:123] node cpu capacity is 2
	I0416 01:05:14.150021   61500 node_conditions.go:105] duration metric: took 4.410684ms to run NodePressure ...
	I0416 01:05:14.150034   61500 start.go:240] waiting for startup goroutines ...
	I0416 01:05:14.150044   61500 start.go:245] waiting for cluster config update ...
	I0416 01:05:14.150064   61500 start.go:254] writing updated cluster config ...
	I0416 01:05:14.150354   61500 ssh_runner.go:195] Run: rm -f paused
	I0416 01:05:14.198605   61500 start.go:600] kubectl: 1.29.3, cluster: 1.30.0-rc.2 (minor skew: 1)
	I0416 01:05:14.200584   61500 out.go:177] * Done! kubectl is now configured to use "no-preload-572602" cluster and "default" namespace by default
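At this point the kubeconfig context has been switched to the new cluster. An illustrative post-start sanity check (not executed by the test) is:

    kubectl config current-context        # expect: no-preload-572602
    kubectl get nodes -o wide
    kubectl -n kube-system get pods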
	I0416 01:05:14.258629   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:05:14.258807   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
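The recurring [kubelet-check] failures above mean kubeadm cannot reach the kubelet's local health endpoint on port 10248. When debugging a run like this by hand, the usual on-node checks (illustrative, not from the captured logs) are:

    systemctl status kubelet --no-pager
    journalctl -u kubelet -n 50 --no-pager
    curl -sSL http://localhost:10248/healthz; echo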
	I0416 01:05:19.748784   62747 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.135339447s)
	I0416 01:05:19.748866   62747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 01:05:19.766280   62747 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 01:05:19.777541   62747 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 01:05:19.788086   62747 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 01:05:19.788112   62747 kubeadm.go:156] found existing configuration files:
	
	I0416 01:05:19.788154   62747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 01:05:19.798135   62747 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 01:05:19.798211   62747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 01:05:19.809231   62747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 01:05:19.819447   62747 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 01:05:19.819519   62747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 01:05:19.830223   62747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 01:05:19.840460   62747 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 01:05:19.840528   62747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 01:05:19.851506   62747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 01:05:19.861422   62747 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 01:05:19.861481   62747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
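The block above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed if the endpoint is missing (here every grep exits with status 2 because the files do not exist, so the removals are no-ops). A minimal shell sketch of that check-and-remove pattern:

    # Sketch only; minikube performs these steps from Go over SSH.
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done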
	I0416 01:05:19.871239   62747 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 01:05:20.089849   62747 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 01:05:29.079351   62747 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0416 01:05:29.079435   62747 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 01:05:29.079534   62747 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 01:05:29.079679   62747 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 01:05:29.079817   62747 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 01:05:29.079934   62747 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 01:05:29.081701   62747 out.go:204]   - Generating certificates and keys ...
	I0416 01:05:29.081801   62747 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 01:05:29.081922   62747 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 01:05:29.082035   62747 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0416 01:05:29.082125   62747 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0416 01:05:29.082300   62747 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0416 01:05:29.082404   62747 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0416 01:05:29.082504   62747 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0416 01:05:29.082556   62747 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0416 01:05:29.082621   62747 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0416 01:05:29.082737   62747 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0416 01:05:29.082798   62747 kubeadm.go:309] [certs] Using the existing "sa" key
	I0416 01:05:29.082867   62747 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 01:05:29.082955   62747 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 01:05:29.083042   62747 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0416 01:05:29.083129   62747 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 01:05:29.083209   62747 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 01:05:29.083278   62747 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 01:05:29.083385   62747 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 01:05:29.083467   62747 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 01:05:29.085050   62747 out.go:204]   - Booting up control plane ...
	I0416 01:05:29.085178   62747 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 01:05:29.085289   62747 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 01:05:29.085374   62747 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 01:05:29.085499   62747 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 01:05:29.085610   62747 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 01:05:29.085671   62747 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 01:05:29.085942   62747 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 01:05:29.086066   62747 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003717 seconds
	I0416 01:05:29.086227   62747 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0416 01:05:29.086384   62747 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0416 01:05:29.086474   62747 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0416 01:05:29.086755   62747 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-617092 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0416 01:05:29.086843   62747 kubeadm.go:309] [bootstrap-token] Using token: 33ihar.pt6l329bwmm6yhnr
	I0416 01:05:29.088273   62747 out.go:204]   - Configuring RBAC rules ...
	I0416 01:05:29.088408   62747 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0416 01:05:29.088516   62747 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0416 01:05:29.088712   62747 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0416 01:05:29.088898   62747 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0416 01:05:29.089046   62747 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0416 01:05:29.089196   62747 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0416 01:05:29.089346   62747 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0416 01:05:29.089413   62747 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0416 01:05:29.089486   62747 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0416 01:05:29.089496   62747 kubeadm.go:309] 
	I0416 01:05:29.089581   62747 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0416 01:05:29.089591   62747 kubeadm.go:309] 
	I0416 01:05:29.089707   62747 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0416 01:05:29.089719   62747 kubeadm.go:309] 
	I0416 01:05:29.089768   62747 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0416 01:05:29.089855   62747 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0416 01:05:29.089932   62747 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0416 01:05:29.089942   62747 kubeadm.go:309] 
	I0416 01:05:29.090020   62747 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0416 01:05:29.090041   62747 kubeadm.go:309] 
	I0416 01:05:29.090111   62747 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0416 01:05:29.090120   62747 kubeadm.go:309] 
	I0416 01:05:29.090193   62747 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0416 01:05:29.090350   62747 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0416 01:05:29.090434   62747 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0416 01:05:29.090445   62747 kubeadm.go:309] 
	I0416 01:05:29.090560   62747 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0416 01:05:29.090661   62747 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0416 01:05:29.090667   62747 kubeadm.go:309] 
	I0416 01:05:29.090773   62747 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 33ihar.pt6l329bwmm6yhnr \
	I0416 01:05:29.090921   62747 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b25013486716312e69a135916990864bd549ec06035b8322dd39250241b50fde \
	I0416 01:05:29.090942   62747 kubeadm.go:309] 	--control-plane 
	I0416 01:05:29.090948   62747 kubeadm.go:309] 
	I0416 01:05:29.091017   62747 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0416 01:05:29.091034   62747 kubeadm.go:309] 
	I0416 01:05:29.091153   62747 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 33ihar.pt6l329bwmm6yhnr \
	I0416 01:05:29.091299   62747 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b25013486716312e69a135916990864bd549ec06035b8322dd39250241b50fde 
	I0416 01:05:29.091313   62747 cni.go:84] Creating CNI manager for ""
	I0416 01:05:29.091323   62747 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 01:05:29.094154   62747 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0416 01:05:29.095747   62747 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0416 01:05:29.153706   62747 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0416 01:05:29.195477   62747 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0416 01:05:29.195540   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:29.195540   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-617092 minikube.k8s.io/updated_at=2024_04_16T01_05_29_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=e98a202b00bb48daab8bbee2f0c7bcf1556f8388 minikube.k8s.io/name=embed-certs-617092 minikube.k8s.io/primary=true
	I0416 01:05:29.551888   62747 ops.go:34] apiserver oom_adj: -16
	I0416 01:05:29.552023   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:30.053117   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:30.552298   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:31.052317   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:31.553057   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:32.052852   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:32.552921   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:34.259492   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:05:34.259704   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:05:33.052747   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:33.552301   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:34.052922   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:34.552338   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:35.052106   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:35.552911   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:36.052814   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:36.552077   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:37.052666   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:37.552057   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:38.053198   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:38.552163   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:39.052589   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:39.552701   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:40.053069   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:40.552436   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:41.053071   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:41.158552   62747 kubeadm.go:1107] duration metric: took 11.963074905s to wait for elevateKubeSystemPrivileges
	W0416 01:05:41.158601   62747 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0416 01:05:41.158611   62747 kubeadm.go:393] duration metric: took 5m14.369080866s to StartCluster
	I0416 01:05:41.158638   62747 settings.go:142] acquiring lock: {Name:mk6e42a297b4f7bfb79727f203ae36d752cbb6a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:05:41.158736   62747 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0416 01:05:41.160903   62747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/kubeconfig: {Name:mkbb3b028de7d57df8335e83f6dfa1b0eacb2fb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:05:41.161229   62747 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.225 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0416 01:05:41.163312   62747 out.go:177] * Verifying Kubernetes components...
	I0416 01:05:40.562916   61267 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.348033752s)
	I0416 01:05:40.562991   61267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 01:05:40.580700   61267 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 01:05:40.592069   61267 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 01:05:40.606450   61267 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 01:05:40.606477   61267 kubeadm.go:156] found existing configuration files:
	
	I0416 01:05:40.606531   61267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0416 01:05:40.617547   61267 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 01:05:40.617622   61267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 01:05:40.631465   61267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0416 01:05:40.644464   61267 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 01:05:40.644553   61267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 01:05:40.655929   61267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0416 01:05:40.664995   61267 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 01:05:40.665059   61267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 01:05:40.674477   61267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0416 01:05:40.683500   61267 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 01:05:40.683570   61267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 01:05:40.693774   61267 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 01:05:40.753612   61267 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0416 01:05:40.753717   61267 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 01:05:40.911483   61267 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 01:05:40.911609   61267 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 01:05:40.911748   61267 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 01:05:41.170137   61267 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 01:05:41.161331   62747 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0416 01:05:41.161434   62747 config.go:182] Loaded profile config "embed-certs-617092": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 01:05:41.165023   62747 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-617092"
	I0416 01:05:41.165044   62747 addons.go:69] Setting metrics-server=true in profile "embed-certs-617092"
	I0416 01:05:41.165081   62747 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-617092"
	I0416 01:05:41.165084   62747 addons.go:234] Setting addon metrics-server=true in "embed-certs-617092"
	W0416 01:05:41.165090   62747 addons.go:243] addon storage-provisioner should already be in state true
	W0416 01:05:41.165091   62747 addons.go:243] addon metrics-server should already be in state true
	I0416 01:05:41.165117   62747 host.go:66] Checking if "embed-certs-617092" exists ...
	I0416 01:05:41.165052   62747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 01:05:41.165025   62747 addons.go:69] Setting default-storageclass=true in profile "embed-certs-617092"
	I0416 01:05:41.165117   62747 host.go:66] Checking if "embed-certs-617092" exists ...
	I0416 01:05:41.165174   62747 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-617092"
	I0416 01:05:41.165464   62747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:41.165480   62747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:41.165549   62747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:41.165569   62747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:41.165549   62747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:41.165651   62747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:41.183063   62747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46083
	I0416 01:05:41.183551   62747 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:41.184135   62747 main.go:141] libmachine: Using API Version  1
	I0416 01:05:41.184158   62747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:41.184578   62747 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:41.185298   62747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:41.185337   62747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:41.185763   62747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43633
	I0416 01:05:41.185823   62747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46197
	I0416 01:05:41.186233   62747 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:41.186400   62747 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:41.186701   62747 main.go:141] libmachine: Using API Version  1
	I0416 01:05:41.186726   62747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:41.186861   62747 main.go:141] libmachine: Using API Version  1
	I0416 01:05:41.186881   62747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:41.187211   62747 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:41.187233   62747 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:41.187415   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetState
	I0416 01:05:41.187763   62747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:41.187781   62747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:41.191018   62747 addons.go:234] Setting addon default-storageclass=true in "embed-certs-617092"
	W0416 01:05:41.191038   62747 addons.go:243] addon default-storageclass should already be in state true
	I0416 01:05:41.191068   62747 host.go:66] Checking if "embed-certs-617092" exists ...
	I0416 01:05:41.191346   62747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:41.191384   62747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:41.202643   62747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45285
	I0416 01:05:41.203122   62747 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:41.203607   62747 main.go:141] libmachine: Using API Version  1
	I0416 01:05:41.203627   62747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:41.203952   62747 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:41.204124   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetState
	I0416 01:05:41.204325   62747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45643
	I0416 01:05:41.204721   62747 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:41.205188   62747 main.go:141] libmachine: Using API Version  1
	I0416 01:05:41.205207   62747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:41.205860   62747 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:41.206056   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetState
	I0416 01:05:41.206084   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 01:05:41.208051   62747 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0416 01:05:41.209179   62747 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0416 01:05:41.209197   62747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0416 01:05:41.207724   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 01:05:41.209214   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:05:41.210728   62747 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 01:05:41.171860   61267 out.go:204]   - Generating certificates and keys ...
	I0416 01:05:41.171969   61267 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 01:05:41.172043   61267 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 01:05:41.172139   61267 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0416 01:05:41.172803   61267 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0416 01:05:41.173065   61267 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0416 01:05:41.173653   61267 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0416 01:05:41.174077   61267 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0416 01:05:41.174586   61267 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0416 01:05:41.175034   61267 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0416 01:05:41.175570   61267 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0416 01:05:41.175888   61267 kubeadm.go:309] [certs] Using the existing "sa" key
	I0416 01:05:41.175968   61267 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 01:05:41.439471   61267 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 01:05:41.524693   61267 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0416 01:05:42.001762   61267 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 01:05:42.139805   61267 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 01:05:42.198091   61267 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 01:05:42.198762   61267 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 01:05:42.202915   61267 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 01:05:42.204549   61267 out.go:204]   - Booting up control plane ...
	I0416 01:05:42.204673   61267 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 01:05:42.204816   61267 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 01:05:42.205761   61267 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 01:05:42.225187   61267 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 01:05:42.225917   61267 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 01:05:42.225972   61267 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 01:05:42.367087   61267 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 01:05:41.210575   62747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34385
	I0416 01:05:41.211905   62747 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 01:05:41.211923   62747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0416 01:05:41.211942   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:05:41.212835   62747 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:41.212972   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:05:41.213577   62747 main.go:141] libmachine: Using API Version  1
	I0416 01:05:41.213597   62747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:41.213610   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:05:41.213628   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:05:41.214039   62747 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:41.214657   62747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:41.214693   62747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:41.215005   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:05:41.215635   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:05:41.215905   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:05:41.215933   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:05:41.216058   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:05:41.216109   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:05:41.216242   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:05:41.216303   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:05:41.216447   62747 sshutil.go:53] new ssh client: &{IP:192.168.61.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/embed-certs-617092/id_rsa Username:docker}
	I0416 01:05:41.216466   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:05:41.216544   62747 sshutil.go:53] new ssh client: &{IP:192.168.61.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/embed-certs-617092/id_rsa Username:docker}
	I0416 01:05:41.236284   62747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40007
	I0416 01:05:41.237670   62747 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:41.238270   62747 main.go:141] libmachine: Using API Version  1
	I0416 01:05:41.238288   62747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:41.241258   62747 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:41.241453   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetState
	I0416 01:05:41.243397   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 01:05:41.243724   62747 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0416 01:05:41.243740   62747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0416 01:05:41.243758   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:05:41.247426   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:05:41.248034   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:05:41.248144   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:05:41.248423   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:05:41.249376   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:05:41.249600   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:05:41.249799   62747 sshutil.go:53] new ssh client: &{IP:192.168.61.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/embed-certs-617092/id_rsa Username:docker}
	I0416 01:05:41.414823   62747 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 01:05:41.436007   62747 node_ready.go:35] waiting up to 6m0s for node "embed-certs-617092" to be "Ready" ...
	I0416 01:05:41.452344   62747 node_ready.go:49] node "embed-certs-617092" has status "Ready":"True"
	I0416 01:05:41.452370   62747 node_ready.go:38] duration metric: took 16.328329ms for node "embed-certs-617092" to be "Ready" ...
	I0416 01:05:41.452382   62747 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:05:41.467673   62747 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:41.477985   62747 pod_ready.go:92] pod "etcd-embed-certs-617092" in "kube-system" namespace has status "Ready":"True"
	I0416 01:05:41.478019   62747 pod_ready.go:81] duration metric: took 10.312538ms for pod "etcd-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:41.478032   62747 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:41.485978   62747 pod_ready.go:92] pod "kube-apiserver-embed-certs-617092" in "kube-system" namespace has status "Ready":"True"
	I0416 01:05:41.486003   62747 pod_ready.go:81] duration metric: took 7.961029ms for pod "kube-apiserver-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:41.486015   62747 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:41.491586   62747 pod_ready.go:92] pod "kube-controller-manager-embed-certs-617092" in "kube-system" namespace has status "Ready":"True"
	I0416 01:05:41.491608   62747 pod_ready.go:81] duration metric: took 5.584682ms for pod "kube-controller-manager-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:41.491619   62747 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-p4rh9" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:41.591874   62747 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0416 01:05:41.630528   62747 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0416 01:05:41.630554   62747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0416 01:05:41.653822   62747 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 01:05:41.718742   62747 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0416 01:05:41.718775   62747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0416 01:05:41.750701   62747 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0416 01:05:41.750725   62747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0416 01:05:41.798873   62747 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0416 01:05:41.961373   62747 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:41.961415   62747 main.go:141] libmachine: (embed-certs-617092) Calling .Close
	I0416 01:05:41.961857   62747 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:41.961879   62747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:41.961890   62747 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:41.961909   62747 main.go:141] libmachine: (embed-certs-617092) Calling .Close
	I0416 01:05:41.962200   62747 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:41.962205   62747 main.go:141] libmachine: (embed-certs-617092) DBG | Closing plugin on server side
	I0416 01:05:41.962216   62747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:41.974163   62747 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:41.974189   62747 main.go:141] libmachine: (embed-certs-617092) Calling .Close
	I0416 01:05:41.974517   62747 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:41.974537   62747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:42.721070   62747 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.067206266s)
	I0416 01:05:42.721119   62747 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:42.721130   62747 main.go:141] libmachine: (embed-certs-617092) Calling .Close
	I0416 01:05:42.721551   62747 main.go:141] libmachine: (embed-certs-617092) DBG | Closing plugin on server side
	I0416 01:05:42.721594   62747 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:42.721613   62747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:42.721636   62747 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:42.721648   62747 main.go:141] libmachine: (embed-certs-617092) Calling .Close
	I0416 01:05:42.721972   62747 main.go:141] libmachine: (embed-certs-617092) DBG | Closing plugin on server side
	I0416 01:05:42.721987   62747 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:42.722006   62747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:43.123544   62747 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.324616723s)
	I0416 01:05:43.123593   62747 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:43.123608   62747 main.go:141] libmachine: (embed-certs-617092) Calling .Close
	I0416 01:05:43.123867   62747 main.go:141] libmachine: (embed-certs-617092) DBG | Closing plugin on server side
	I0416 01:05:43.123906   62747 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:43.123913   62747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:43.123922   62747 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:43.123928   62747 main.go:141] libmachine: (embed-certs-617092) Calling .Close
	I0416 01:05:43.124218   62747 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:43.124234   62747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:43.124234   62747 main.go:141] libmachine: (embed-certs-617092) DBG | Closing plugin on server side
	I0416 01:05:43.124255   62747 addons.go:470] Verifying addon metrics-server=true in "embed-certs-617092"
	I0416 01:05:43.125829   62747 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0416 01:05:43.127138   62747 addons.go:505] duration metric: took 1.965815007s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0416 01:05:43.536374   62747 pod_ready.go:102] pod "kube-proxy-p4rh9" in "kube-system" namespace has status "Ready":"False"
	I0416 01:05:44.000571   62747 pod_ready.go:92] pod "kube-proxy-p4rh9" in "kube-system" namespace has status "Ready":"True"
	I0416 01:05:44.000594   62747 pod_ready.go:81] duration metric: took 2.508967748s for pod "kube-proxy-p4rh9" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:44.000603   62747 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:44.006516   62747 pod_ready.go:92] pod "kube-scheduler-embed-certs-617092" in "kube-system" namespace has status "Ready":"True"
	I0416 01:05:44.006540   62747 pod_ready.go:81] duration metric: took 5.930755ms for pod "kube-scheduler-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:44.006546   62747 pod_ready.go:38] duration metric: took 2.554153393s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:05:44.006560   62747 api_server.go:52] waiting for apiserver process to appear ...
	I0416 01:05:44.006612   62747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:05:44.030705   62747 api_server.go:72] duration metric: took 2.869432993s to wait for apiserver process to appear ...
	I0416 01:05:44.030737   62747 api_server.go:88] waiting for apiserver healthz status ...
	I0416 01:05:44.030759   62747 api_server.go:253] Checking apiserver healthz at https://192.168.61.225:8443/healthz ...
	I0416 01:05:44.035576   62747 api_server.go:279] https://192.168.61.225:8443/healthz returned 200:
	ok
	I0416 01:05:44.037948   62747 api_server.go:141] control plane version: v1.29.3
	I0416 01:05:44.037973   62747 api_server.go:131] duration metric: took 7.228106ms to wait for apiserver health ...
	I0416 01:05:44.037983   62747 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 01:05:44.044543   62747 system_pods.go:59] 9 kube-system pods found
	I0416 01:05:44.044574   62747 system_pods.go:61] "coredns-76f75df574-2q58l" [e9b9d000-738b-4110-8757-17f76197285c] Running
	I0416 01:05:44.044581   62747 system_pods.go:61] "coredns-76f75df574-h8k4k" [1b114848-1137-4215-a966-03db39e4de23] Running
	I0416 01:05:44.044586   62747 system_pods.go:61] "etcd-embed-certs-617092" [f65e9307-4e12-4ac4-baca-7e1cfd7415d5] Running
	I0416 01:05:44.044591   62747 system_pods.go:61] "kube-apiserver-embed-certs-617092" [f55e02ce-45cf-4f6e-b8d7-7f305f22ea52] Running
	I0416 01:05:44.044596   62747 system_pods.go:61] "kube-controller-manager-embed-certs-617092" [d16739c1-36f4-4748-8533-fcc6cea0adee] Running
	I0416 01:05:44.044601   62747 system_pods.go:61] "kube-proxy-p4rh9" [42041028-d085-4ec4-8213-da3af0d5290e] Running
	I0416 01:05:44.044606   62747 system_pods.go:61] "kube-scheduler-embed-certs-617092" [d61e24fe-a5e3-41bf-b212-75764a036a26] Running
	I0416 01:05:44.044614   62747 system_pods.go:61] "metrics-server-57f55c9bc5-j5clp" [99808b2d-344f-43b7-a29c-01f0a2026aa8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 01:05:44.044623   62747 system_pods.go:61] "storage-provisioner" [5a62c0f7-0b15-48f3-9c17-d5966d39fbd5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0416 01:05:44.044635   62747 system_pods.go:74] duration metric: took 6.6454ms to wait for pod list to return data ...
	I0416 01:05:44.044652   62747 default_sa.go:34] waiting for default service account to be created ...
	I0416 01:05:44.241344   62747 default_sa.go:45] found service account: "default"
	I0416 01:05:44.241370   62747 default_sa.go:55] duration metric: took 196.710973ms for default service account to be created ...
	I0416 01:05:44.241379   62747 system_pods.go:116] waiting for k8s-apps to be running ...
	I0416 01:05:44.450798   62747 system_pods.go:86] 9 kube-system pods found
	I0416 01:05:44.450825   62747 system_pods.go:89] "coredns-76f75df574-2q58l" [e9b9d000-738b-4110-8757-17f76197285c] Running
	I0416 01:05:44.450831   62747 system_pods.go:89] "coredns-76f75df574-h8k4k" [1b114848-1137-4215-a966-03db39e4de23] Running
	I0416 01:05:44.450835   62747 system_pods.go:89] "etcd-embed-certs-617092" [f65e9307-4e12-4ac4-baca-7e1cfd7415d5] Running
	I0416 01:05:44.450839   62747 system_pods.go:89] "kube-apiserver-embed-certs-617092" [f55e02ce-45cf-4f6e-b8d7-7f305f22ea52] Running
	I0416 01:05:44.450844   62747 system_pods.go:89] "kube-controller-manager-embed-certs-617092" [d16739c1-36f4-4748-8533-fcc6cea0adee] Running
	I0416 01:05:44.450848   62747 system_pods.go:89] "kube-proxy-p4rh9" [42041028-d085-4ec4-8213-da3af0d5290e] Running
	I0416 01:05:44.450851   62747 system_pods.go:89] "kube-scheduler-embed-certs-617092" [d61e24fe-a5e3-41bf-b212-75764a036a26] Running
	I0416 01:05:44.450858   62747 system_pods.go:89] "metrics-server-57f55c9bc5-j5clp" [99808b2d-344f-43b7-a29c-01f0a2026aa8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 01:05:44.450864   62747 system_pods.go:89] "storage-provisioner" [5a62c0f7-0b15-48f3-9c17-d5966d39fbd5] Running
	I0416 01:05:44.450871   62747 system_pods.go:126] duration metric: took 209.487599ms to wait for k8s-apps to be running ...
	I0416 01:05:44.450889   62747 system_svc.go:44] waiting for kubelet service to be running ....
	I0416 01:05:44.450943   62747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 01:05:44.470820   62747 system_svc.go:56] duration metric: took 19.925743ms WaitForService to wait for kubelet
	I0416 01:05:44.470853   62747 kubeadm.go:576] duration metric: took 3.309585995s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 01:05:44.470876   62747 node_conditions.go:102] verifying NodePressure condition ...
	I0416 01:05:44.642093   62747 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 01:05:44.642123   62747 node_conditions.go:123] node cpu capacity is 2
	I0416 01:05:44.642135   62747 node_conditions.go:105] duration metric: took 171.253415ms to run NodePressure ...
	I0416 01:05:44.642149   62747 start.go:240] waiting for startup goroutines ...
	I0416 01:05:44.642158   62747 start.go:245] waiting for cluster config update ...
	I0416 01:05:44.642171   62747 start.go:254] writing updated cluster config ...
	I0416 01:05:44.642519   62747 ssh_runner.go:195] Run: rm -f paused
	I0416 01:05:44.707141   62747 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0416 01:05:44.709274   62747 out.go:177] * Done! kubectl is now configured to use "embed-certs-617092" cluster and "default" namespace by default
	I0416 01:05:48.372574   61267 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.002543 seconds
	I0416 01:05:48.385076   61267 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0416 01:05:48.406058   61267 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0416 01:05:48.938329   61267 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0416 01:05:48.938556   61267 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-653942 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0416 01:05:49.458321   61267 kubeadm.go:309] [bootstrap-token] Using token: 5ddaoe.tvzldvzlkbeta1a9
	I0416 01:05:49.459891   61267 out.go:204]   - Configuring RBAC rules ...
	I0416 01:05:49.460064   61267 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0416 01:05:49.465799   61267 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0416 01:05:49.477346   61267 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0416 01:05:49.482154   61267 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0416 01:05:49.485769   61267 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0416 01:05:49.489199   61267 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0416 01:05:49.504774   61267 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0416 01:05:49.770133   61267 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0416 01:05:49.872777   61267 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0416 01:05:49.874282   61267 kubeadm.go:309] 
	I0416 01:05:49.874384   61267 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0416 01:05:49.874400   61267 kubeadm.go:309] 
	I0416 01:05:49.874560   61267 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0416 01:05:49.874580   61267 kubeadm.go:309] 
	I0416 01:05:49.874602   61267 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0416 01:05:49.874673   61267 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0416 01:05:49.874754   61267 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0416 01:05:49.874766   61267 kubeadm.go:309] 
	I0416 01:05:49.874853   61267 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0416 01:05:49.874878   61267 kubeadm.go:309] 
	I0416 01:05:49.874944   61267 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0416 01:05:49.874956   61267 kubeadm.go:309] 
	I0416 01:05:49.875019   61267 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0416 01:05:49.875141   61267 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0416 01:05:49.875246   61267 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0416 01:05:49.875257   61267 kubeadm.go:309] 
	I0416 01:05:49.875432   61267 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0416 01:05:49.875552   61267 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0416 01:05:49.875562   61267 kubeadm.go:309] 
	I0416 01:05:49.875657   61267 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token 5ddaoe.tvzldvzlkbeta1a9 \
	I0416 01:05:49.875754   61267 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b25013486716312e69a135916990864bd549ec06035b8322dd39250241b50fde \
	I0416 01:05:49.875774   61267 kubeadm.go:309] 	--control-plane 
	I0416 01:05:49.875780   61267 kubeadm.go:309] 
	I0416 01:05:49.875859   61267 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0416 01:05:49.875869   61267 kubeadm.go:309] 
	I0416 01:05:49.875949   61267 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token 5ddaoe.tvzldvzlkbeta1a9 \
	I0416 01:05:49.876085   61267 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b25013486716312e69a135916990864bd549ec06035b8322dd39250241b50fde 
	I0416 01:05:49.876640   61267 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 01:05:49.876666   61267 cni.go:84] Creating CNI manager for ""
	I0416 01:05:49.876676   61267 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 01:05:49.878703   61267 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0416 01:05:49.880070   61267 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0416 01:05:49.897752   61267 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0416 01:05:49.969146   61267 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0416 01:05:49.969228   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:49.969228   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-653942 minikube.k8s.io/updated_at=2024_04_16T01_05_49_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=e98a202b00bb48daab8bbee2f0c7bcf1556f8388 minikube.k8s.io/name=default-k8s-diff-port-653942 minikube.k8s.io/primary=true
	I0416 01:05:50.233119   61267 ops.go:34] apiserver oom_adj: -16
	I0416 01:05:50.233262   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:50.733748   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:51.234361   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:51.733704   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:52.233367   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:52.733789   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:53.234012   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:53.733458   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:54.233341   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:54.734148   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:55.233710   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:55.734135   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:56.233315   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:56.734162   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:57.233899   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:57.733337   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:58.234101   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:58.734357   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:59.233831   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:59.733286   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:06:00.233847   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:06:00.733872   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:06:01.233935   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:06:01.733629   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:06:02.233967   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:06:02.734163   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:06:03.233294   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:06:03.412834   61267 kubeadm.go:1107] duration metric: took 13.44368469s to wait for elevateKubeSystemPrivileges
	W0416 01:06:03.412896   61267 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0416 01:06:03.412907   61267 kubeadm.go:393] duration metric: took 5m17.8108087s to StartCluster
	I0416 01:06:03.412926   61267 settings.go:142] acquiring lock: {Name:mk6e42a297b4f7bfb79727f203ae36d752cbb6a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:06:03.413003   61267 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0416 01:06:03.414974   61267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/kubeconfig: {Name:mkbb3b028de7d57df8335e83f6dfa1b0eacb2fb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:06:03.415299   61267 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.216 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0416 01:06:03.417148   61267 out.go:177] * Verifying Kubernetes components...
	I0416 01:06:03.415390   61267 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0416 01:06:03.415510   61267 config.go:182] Loaded profile config "default-k8s-diff-port-653942": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 01:06:03.417238   61267 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-653942"
	I0416 01:06:03.419134   61267 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-653942"
	W0416 01:06:03.419147   61267 addons.go:243] addon storage-provisioner should already be in state true
	I0416 01:06:03.417247   61267 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-653942"
	I0416 01:06:03.419188   61267 host.go:66] Checking if "default-k8s-diff-port-653942" exists ...
	I0416 01:06:03.419214   61267 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-653942"
	I0416 01:06:03.417245   61267 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-653942"
	I0416 01:06:03.419095   61267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	W0416 01:06:03.419262   61267 addons.go:243] addon metrics-server should already be in state true
	I0416 01:06:03.419307   61267 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-653942"
	I0416 01:06:03.419327   61267 host.go:66] Checking if "default-k8s-diff-port-653942" exists ...
	I0416 01:06:03.419606   61267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:06:03.419644   61267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:06:03.419662   61267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:06:03.419698   61267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:06:03.419722   61267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:06:03.419756   61267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:06:03.435784   61267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44663
	I0416 01:06:03.435800   61267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37505
	I0416 01:06:03.436294   61267 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:06:03.436296   61267 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:06:03.436811   61267 main.go:141] libmachine: Using API Version  1
	I0416 01:06:03.436838   61267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:06:03.437097   61267 main.go:141] libmachine: Using API Version  1
	I0416 01:06:03.437115   61267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:06:03.437203   61267 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:06:03.437683   61267 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:06:03.437757   61267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:06:03.437790   61267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:06:03.438213   61267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33329
	I0416 01:06:03.438248   61267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:06:03.438273   61267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:06:03.438786   61267 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:06:03.439301   61267 main.go:141] libmachine: Using API Version  1
	I0416 01:06:03.439332   61267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:06:03.439810   61267 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:06:03.440162   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetState
	I0416 01:06:03.443879   61267 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-653942"
	W0416 01:06:03.443906   61267 addons.go:243] addon default-storageclass should already be in state true
	I0416 01:06:03.443941   61267 host.go:66] Checking if "default-k8s-diff-port-653942" exists ...
	I0416 01:06:03.444301   61267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:06:03.444340   61267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:06:03.454673   61267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43261
	I0416 01:06:03.455111   61267 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:06:03.455715   61267 main.go:141] libmachine: Using API Version  1
	I0416 01:06:03.455742   61267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:06:03.456116   61267 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:06:03.456318   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetState
	I0416 01:06:03.457870   61267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39341
	I0416 01:06:03.458086   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 01:06:03.458278   61267 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:06:03.462516   61267 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0416 01:06:03.458862   61267 main.go:141] libmachine: Using API Version  1
	I0416 01:06:03.460354   61267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43753
	I0416 01:06:03.464491   61267 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0416 01:06:03.464509   61267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0416 01:06:03.464529   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:06:03.464551   61267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:06:03.464960   61267 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:06:03.465281   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetState
	I0416 01:06:03.465552   61267 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:06:03.466181   61267 main.go:141] libmachine: Using API Version  1
	I0416 01:06:03.466205   61267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:06:03.466760   61267 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:06:03.467410   61267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:06:03.467435   61267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:06:03.467638   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 01:06:03.469647   61267 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 01:06:03.471009   61267 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 01:06:03.471024   61267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0416 01:06:03.469242   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:06:03.471040   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:06:03.469767   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:06:03.471070   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:06:03.471133   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:06:03.471297   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:06:03.471478   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:06:03.471661   61267 sshutil.go:53] new ssh client: &{IP:192.168.50.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/default-k8s-diff-port-653942/id_rsa Username:docker}
	I0416 01:06:03.473778   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:06:03.474203   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:06:03.474226   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:06:03.474421   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:06:03.474605   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:06:03.474784   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:06:03.474958   61267 sshutil.go:53] new ssh client: &{IP:192.168.50.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/default-k8s-diff-port-653942/id_rsa Username:docker}
	I0416 01:06:03.485829   61267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46571
	I0416 01:06:03.486293   61267 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:06:03.486876   61267 main.go:141] libmachine: Using API Version  1
	I0416 01:06:03.486900   61267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:06:03.487362   61267 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:06:03.487535   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetState
	I0416 01:06:03.489207   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 01:06:03.489529   61267 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0416 01:06:03.489549   61267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0416 01:06:03.489568   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:06:03.492570   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:06:03.492932   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:06:03.492958   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:06:03.493224   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:06:03.493379   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:06:03.493557   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:06:03.493673   61267 sshutil.go:53] new ssh client: &{IP:192.168.50.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/default-k8s-diff-port-653942/id_rsa Username:docker}
	I0416 01:06:03.680085   61267 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 01:06:03.724011   61267 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-653942" to be "Ready" ...
	I0416 01:06:03.739131   61267 node_ready.go:49] node "default-k8s-diff-port-653942" has status "Ready":"True"
	I0416 01:06:03.739152   61267 node_ready.go:38] duration metric: took 15.111832ms for node "default-k8s-diff-port-653942" to be "Ready" ...
	I0416 01:06:03.739161   61267 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:06:03.748081   61267 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-5nnpv" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:03.810063   61267 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0416 01:06:03.810090   61267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0416 01:06:03.812595   61267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0416 01:06:03.848165   61267 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0416 01:06:03.848187   61267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0416 01:06:03.991110   61267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 01:06:03.997100   61267 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0416 01:06:03.997133   61267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0416 01:06:04.093267   61267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0416 01:06:04.349978   61267 main.go:141] libmachine: Making call to close driver server
	I0416 01:06:04.350011   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .Close
	I0416 01:06:04.350336   61267 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:06:04.350396   61267 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:06:04.350415   61267 main.go:141] libmachine: Making call to close driver server
	I0416 01:06:04.350420   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | Closing plugin on server side
	I0416 01:06:04.350425   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .Close
	I0416 01:06:04.350683   61267 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:06:04.350699   61267 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:06:04.416648   61267 main.go:141] libmachine: Making call to close driver server
	I0416 01:06:04.416674   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .Close
	I0416 01:06:04.416982   61267 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:06:04.417001   61267 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:06:05.206973   61267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.113663167s)
	I0416 01:06:05.207025   61267 main.go:141] libmachine: Making call to close driver server
	I0416 01:06:05.207040   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .Close
	I0416 01:06:05.207039   61267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.215892308s)
	I0416 01:06:05.207078   61267 main.go:141] libmachine: Making call to close driver server
	I0416 01:06:05.207090   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .Close
	I0416 01:06:05.207371   61267 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:06:05.207388   61267 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:06:05.207397   61267 main.go:141] libmachine: Making call to close driver server
	I0416 01:06:05.207405   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .Close
	I0416 01:06:05.207445   61267 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:06:05.207462   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | Closing plugin on server side
	I0416 01:06:05.207466   61267 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:06:05.207490   61267 main.go:141] libmachine: Making call to close driver server
	I0416 01:06:05.207508   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .Close
	I0416 01:06:05.207610   61267 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:06:05.207644   61267 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:06:05.207654   61267 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-653942"
	I0416 01:06:05.207654   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | Closing plugin on server side
	I0416 01:06:05.209411   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | Closing plugin on server side
	I0416 01:06:05.209402   61267 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:06:05.209469   61267 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:06:05.212071   61267 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0416 01:06:05.213412   61267 addons.go:505] duration metric: took 1.798038731s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I0416 01:06:05.256497   61267 pod_ready.go:92] pod "coredns-76f75df574-5nnpv" in "kube-system" namespace has status "Ready":"True"
	I0416 01:06:05.256526   61267 pod_ready.go:81] duration metric: took 1.508419977s for pod "coredns-76f75df574-5nnpv" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.256538   61267 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-zpnhs" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.262092   61267 pod_ready.go:92] pod "coredns-76f75df574-zpnhs" in "kube-system" namespace has status "Ready":"True"
	I0416 01:06:05.262112   61267 pod_ready.go:81] duration metric: took 5.566499ms for pod "coredns-76f75df574-zpnhs" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.262121   61267 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.267256   61267 pod_ready.go:92] pod "etcd-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"True"
	I0416 01:06:05.267278   61267 pod_ready.go:81] duration metric: took 5.149782ms for pod "etcd-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.267286   61267 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.272119   61267 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"True"
	I0416 01:06:05.272144   61267 pod_ready.go:81] duration metric: took 4.851008ms for pod "kube-apiserver-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.272155   61267 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.328440   61267 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"True"
	I0416 01:06:05.328470   61267 pod_ready.go:81] duration metric: took 56.30531ms for pod "kube-controller-manager-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.328482   61267 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mg5km" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.729518   61267 pod_ready.go:92] pod "kube-proxy-mg5km" in "kube-system" namespace has status "Ready":"True"
	I0416 01:06:05.729544   61267 pod_ready.go:81] duration metric: took 401.055058ms for pod "kube-proxy-mg5km" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.729553   61267 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:06.127535   61267 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"True"
	I0416 01:06:06.127558   61267 pod_ready.go:81] duration metric: took 397.998988ms for pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:06.127565   61267 pod_ready.go:38] duration metric: took 2.388395448s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:06:06.127577   61267 api_server.go:52] waiting for apiserver process to appear ...
	I0416 01:06:06.127620   61267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:06:06.150179   61267 api_server.go:72] duration metric: took 2.734842767s to wait for apiserver process to appear ...
	I0416 01:06:06.150208   61267 api_server.go:88] waiting for apiserver healthz status ...
	I0416 01:06:06.150226   61267 api_server.go:253] Checking apiserver healthz at https://192.168.50.216:8444/healthz ...
	I0416 01:06:06.154310   61267 api_server.go:279] https://192.168.50.216:8444/healthz returned 200:
	ok
	I0416 01:06:06.155393   61267 api_server.go:141] control plane version: v1.29.3
	I0416 01:06:06.155409   61267 api_server.go:131] duration metric: took 5.194458ms to wait for apiserver health ...
	I0416 01:06:06.155421   61267 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 01:06:06.333873   61267 system_pods.go:59] 9 kube-system pods found
	I0416 01:06:06.333909   61267 system_pods.go:61] "coredns-76f75df574-5nnpv" [3350aca5-639e-44a1-bd84-d1e4b6486143] Running
	I0416 01:06:06.333914   61267 system_pods.go:61] "coredns-76f75df574-zpnhs" [990672b6-bb3a-4f91-8de7-7c2ec224c94a] Running
	I0416 01:06:06.333917   61267 system_pods.go:61] "etcd-default-k8s-diff-port-653942" [e72e89e9-c274-4d4d-b1f9-43bea95cd015] Running
	I0416 01:06:06.333920   61267 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-653942" [c1652126-b4c2-41cf-a574-9784f7800374] Running
	I0416 01:06:06.333923   61267 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-653942" [1f43936c-ba39-44f9-b9b7-2a149f26a880] Running
	I0416 01:06:06.333926   61267 system_pods.go:61] "kube-proxy-mg5km" [74764194-1f31-40b1-90b5-497e248ab7da] Running
	I0416 01:06:06.333929   61267 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-653942" [48058ade-c30d-4dc9-b6c0-32b2ed5fc88a] Running
	I0416 01:06:06.333935   61267 system_pods.go:61] "metrics-server-57f55c9bc5-6jn29" [1eec2ffb-ce59-45cb-b6b4-cd010549510e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 01:06:06.333938   61267 system_pods.go:61] "storage-provisioner" [d131c1fc-9124-4b46-a16f-a8fb5029a57b] Running
	I0416 01:06:06.333947   61267 system_pods.go:74] duration metric: took 178.520515ms to wait for pod list to return data ...
	I0416 01:06:06.333953   61267 default_sa.go:34] waiting for default service account to be created ...
	I0416 01:06:06.528119   61267 default_sa.go:45] found service account: "default"
	I0416 01:06:06.528148   61267 default_sa.go:55] duration metric: took 194.18786ms for default service account to be created ...
	I0416 01:06:06.528158   61267 system_pods.go:116] waiting for k8s-apps to be running ...
	I0416 01:06:06.731573   61267 system_pods.go:86] 9 kube-system pods found
	I0416 01:06:06.731600   61267 system_pods.go:89] "coredns-76f75df574-5nnpv" [3350aca5-639e-44a1-bd84-d1e4b6486143] Running
	I0416 01:06:06.731606   61267 system_pods.go:89] "coredns-76f75df574-zpnhs" [990672b6-bb3a-4f91-8de7-7c2ec224c94a] Running
	I0416 01:06:06.731610   61267 system_pods.go:89] "etcd-default-k8s-diff-port-653942" [e72e89e9-c274-4d4d-b1f9-43bea95cd015] Running
	I0416 01:06:06.731614   61267 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-653942" [c1652126-b4c2-41cf-a574-9784f7800374] Running
	I0416 01:06:06.731619   61267 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-653942" [1f43936c-ba39-44f9-b9b7-2a149f26a880] Running
	I0416 01:06:06.731622   61267 system_pods.go:89] "kube-proxy-mg5km" [74764194-1f31-40b1-90b5-497e248ab7da] Running
	I0416 01:06:06.731626   61267 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-653942" [48058ade-c30d-4dc9-b6c0-32b2ed5fc88a] Running
	I0416 01:06:06.731633   61267 system_pods.go:89] "metrics-server-57f55c9bc5-6jn29" [1eec2ffb-ce59-45cb-b6b4-cd010549510e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 01:06:06.731638   61267 system_pods.go:89] "storage-provisioner" [d131c1fc-9124-4b46-a16f-a8fb5029a57b] Running
	I0416 01:06:06.731649   61267 system_pods.go:126] duration metric: took 203.485273ms to wait for k8s-apps to be running ...
	I0416 01:06:06.731659   61267 system_svc.go:44] waiting for kubelet service to be running ....
	I0416 01:06:06.731700   61267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 01:06:06.749013   61267 system_svc.go:56] duration metric: took 17.343008ms WaitForService to wait for kubelet
	I0416 01:06:06.749048   61267 kubeadm.go:576] duration metric: took 3.333716529s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 01:06:06.749072   61267 node_conditions.go:102] verifying NodePressure condition ...
	I0416 01:06:06.927701   61267 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 01:06:06.927725   61267 node_conditions.go:123] node cpu capacity is 2
	I0416 01:06:06.927735   61267 node_conditions.go:105] duration metric: took 178.65899ms to run NodePressure ...
	I0416 01:06:06.927746   61267 start.go:240] waiting for startup goroutines ...
	I0416 01:06:06.927754   61267 start.go:245] waiting for cluster config update ...
	I0416 01:06:06.927763   61267 start.go:254] writing updated cluster config ...
	I0416 01:06:06.928000   61267 ssh_runner.go:195] Run: rm -f paused
	I0416 01:06:06.978823   61267 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0416 01:06:06.981011   61267 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-653942" cluster and "default" namespace by default
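At this point the "default-k8s-diff-port-653942" profile is up: the node is Ready, the system-critical pods passed their readiness checks, the apiserver answered on the non-standard port 8444, and the default-storageclass, metrics-server and storage-provisioner addons were enabled. As a rough illustrative sketch (not part of the test run), the same state could be re-checked by hand; the profile name, node IP 192.168.50.216 and port 8444 are taken from the log above, and the unauthenticated /healthz probe assumes the cluster's default anonymous public-info access is still in place:

	# apiserver health on the profile's non-default port (taken from the log above)
	curl -k https://192.168.50.216:8444/healthz

	# system-critical pods that the log reported as Ready/Running
	kubectl --context default-k8s-diff-port-653942 get pods -n kube-system

	# addons minikube reported as enabled for this profile
	out/minikube-linux-amd64 -p default-k8s-diff-port-653942 addons list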
	I0416 01:06:14.261576   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:06:14.261834   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:06:14.261849   62139 kubeadm.go:309] 
	I0416 01:06:14.261890   62139 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0416 01:06:14.261973   62139 kubeadm.go:309] 		timed out waiting for the condition
	I0416 01:06:14.262006   62139 kubeadm.go:309] 
	I0416 01:06:14.262051   62139 kubeadm.go:309] 	This error is likely caused by:
	I0416 01:06:14.262082   62139 kubeadm.go:309] 		- The kubelet is not running
	I0416 01:06:14.262174   62139 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0416 01:06:14.262199   62139 kubeadm.go:309] 
	I0416 01:06:14.262357   62139 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0416 01:06:14.262414   62139 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0416 01:06:14.262471   62139 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0416 01:06:14.262481   62139 kubeadm.go:309] 
	I0416 01:06:14.262610   62139 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0416 01:06:14.262707   62139 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0416 01:06:14.262717   62139 kubeadm.go:309] 
	I0416 01:06:14.262867   62139 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0416 01:06:14.263010   62139 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0416 01:06:14.263142   62139 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0416 01:06:14.263211   62139 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0416 01:06:14.263234   62139 kubeadm.go:309] 
	I0416 01:06:14.264084   62139 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 01:06:14.264204   62139 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0416 01:06:14.264312   62139 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0416 01:06:14.264460   62139 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0416 01:06:14.264526   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0416 01:06:15.653692   62139 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.389136497s)
	I0416 01:06:15.653831   62139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 01:06:15.669141   62139 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 01:06:15.679485   62139 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 01:06:15.679511   62139 kubeadm.go:156] found existing configuration files:
	
	I0416 01:06:15.679556   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 01:06:15.689898   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 01:06:15.689974   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 01:06:15.700563   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 01:06:15.710363   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 01:06:15.710445   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 01:06:15.719877   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 01:06:15.728947   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 01:06:15.729002   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 01:06:15.739360   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 01:06:15.749479   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 01:06:15.749557   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 01:06:15.760930   62139 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 01:06:16.000974   62139 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 01:08:12.327133   62139 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0416 01:08:12.327246   62139 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0416 01:08:12.328995   62139 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0416 01:08:12.329092   62139 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 01:08:12.329220   62139 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 01:08:12.329302   62139 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 01:08:12.329440   62139 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 01:08:12.329537   62139 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 01:08:12.331381   62139 out.go:204]   - Generating certificates and keys ...
	I0416 01:08:12.331474   62139 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 01:08:12.331558   62139 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 01:08:12.331658   62139 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0416 01:08:12.331742   62139 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0416 01:08:12.331830   62139 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0416 01:08:12.331910   62139 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0416 01:08:12.331968   62139 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0416 01:08:12.332020   62139 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0416 01:08:12.332085   62139 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0416 01:08:12.332159   62139 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0416 01:08:12.332210   62139 kubeadm.go:309] [certs] Using the existing "sa" key
	I0416 01:08:12.332297   62139 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 01:08:12.332376   62139 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 01:08:12.332466   62139 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 01:08:12.332547   62139 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 01:08:12.332642   62139 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 01:08:12.332790   62139 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 01:08:12.332895   62139 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 01:08:12.332938   62139 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 01:08:12.333002   62139 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 01:08:12.334632   62139 out.go:204]   - Booting up control plane ...
	I0416 01:08:12.334737   62139 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 01:08:12.334837   62139 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 01:08:12.334928   62139 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 01:08:12.335009   62139 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 01:08:12.335162   62139 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 01:08:12.335241   62139 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0416 01:08:12.335333   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:08:12.335541   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:08:12.335613   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:08:12.335771   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:08:12.335848   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:08:12.336035   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:08:12.336109   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:08:12.336365   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:08:12.336438   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:08:12.336704   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:08:12.336716   62139 kubeadm.go:309] 
	I0416 01:08:12.336779   62139 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0416 01:08:12.336827   62139 kubeadm.go:309] 		timed out waiting for the condition
	I0416 01:08:12.336834   62139 kubeadm.go:309] 
	I0416 01:08:12.336883   62139 kubeadm.go:309] 	This error is likely caused by:
	I0416 01:08:12.336922   62139 kubeadm.go:309] 		- The kubelet is not running
	I0416 01:08:12.337025   62139 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0416 01:08:12.337036   62139 kubeadm.go:309] 
	I0416 01:08:12.337145   62139 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0416 01:08:12.337211   62139 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0416 01:08:12.337245   62139 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0416 01:08:12.337253   62139 kubeadm.go:309] 
	I0416 01:08:12.337340   62139 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0416 01:08:12.337428   62139 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0416 01:08:12.337436   62139 kubeadm.go:309] 
	I0416 01:08:12.337529   62139 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0416 01:08:12.337602   62139 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0416 01:08:12.337701   62139 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0416 01:08:12.337870   62139 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0416 01:08:12.337957   62139 kubeadm.go:393] duration metric: took 8m4.174818047s to StartCluster
	I0416 01:08:12.337969   62139 kubeadm.go:309] 
	I0416 01:08:12.338009   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:08:12.338067   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:08:12.391937   62139 cri.go:89] found id: ""
	I0416 01:08:12.391963   62139 logs.go:276] 0 containers: []
	W0416 01:08:12.391986   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:08:12.391994   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:08:12.392072   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:08:12.430575   62139 cri.go:89] found id: ""
	I0416 01:08:12.430602   62139 logs.go:276] 0 containers: []
	W0416 01:08:12.430616   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:08:12.430623   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:08:12.430685   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:08:12.469115   62139 cri.go:89] found id: ""
	I0416 01:08:12.469143   62139 logs.go:276] 0 containers: []
	W0416 01:08:12.469152   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:08:12.469173   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:08:12.469228   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:08:12.508599   62139 cri.go:89] found id: ""
	I0416 01:08:12.508630   62139 logs.go:276] 0 containers: []
	W0416 01:08:12.508640   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:08:12.508648   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:08:12.508698   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:08:12.547785   62139 cri.go:89] found id: ""
	I0416 01:08:12.547817   62139 logs.go:276] 0 containers: []
	W0416 01:08:12.547829   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:08:12.547836   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:08:12.547910   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:08:12.599526   62139 cri.go:89] found id: ""
	I0416 01:08:12.599549   62139 logs.go:276] 0 containers: []
	W0416 01:08:12.599557   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:08:12.599563   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:08:12.599612   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:08:12.639914   62139 cri.go:89] found id: ""
	I0416 01:08:12.639944   62139 logs.go:276] 0 containers: []
	W0416 01:08:12.639954   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:08:12.639962   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:08:12.640041   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:08:12.676025   62139 cri.go:89] found id: ""
	I0416 01:08:12.676057   62139 logs.go:276] 0 containers: []
	W0416 01:08:12.676066   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:08:12.676079   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:08:12.676100   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:08:12.774744   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:08:12.774769   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:08:12.774785   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:08:12.902751   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:08:12.902787   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:08:12.947370   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:08:12.947406   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:08:13.002186   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:08:13.002223   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0416 01:08:13.017193   62139 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0416 01:08:13.017234   62139 out.go:239] * 
	W0416 01:08:13.017283   62139 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0416 01:08:13.017304   62139 out.go:239] * 
	W0416 01:08:13.018151   62139 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0416 01:08:13.021371   62139 out.go:177] 
	W0416 01:08:13.022572   62139 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0416 01:08:13.022640   62139 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0416 01:08:13.022670   62139 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0416 01:08:13.024248   62139 out.go:177] 
	
	
	==> CRI-O <==
	Apr 16 01:17:18 old-k8s-version-800769 crio[651]: time="2024-04-16 01:17:18.056608961Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713230238056582161,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e8449301-e1cf-4dc0-ac64-304e4178d384 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:17:18 old-k8s-version-800769 crio[651]: time="2024-04-16 01:17:18.057304780Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=62ed2d61-ee3b-419b-bc07-b85a02145b0d name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:17:18 old-k8s-version-800769 crio[651]: time="2024-04-16 01:17:18.057365606Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=62ed2d61-ee3b-419b-bc07-b85a02145b0d name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:17:18 old-k8s-version-800769 crio[651]: time="2024-04-16 01:17:18.057458553Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=62ed2d61-ee3b-419b-bc07-b85a02145b0d name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:17:18 old-k8s-version-800769 crio[651]: time="2024-04-16 01:17:18.094420220Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3c852421-b9e0-4904-9a9d-a81581b7e991 name=/runtime.v1.RuntimeService/Version
	Apr 16 01:17:18 old-k8s-version-800769 crio[651]: time="2024-04-16 01:17:18.094501698Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3c852421-b9e0-4904-9a9d-a81581b7e991 name=/runtime.v1.RuntimeService/Version
	Apr 16 01:17:18 old-k8s-version-800769 crio[651]: time="2024-04-16 01:17:18.095625016Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cfe0a3f2-b968-49f4-b3ac-5997c8225d3e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:17:18 old-k8s-version-800769 crio[651]: time="2024-04-16 01:17:18.096192161Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713230238096161952,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cfe0a3f2-b968-49f4-b3ac-5997c8225d3e name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:17:18 old-k8s-version-800769 crio[651]: time="2024-04-16 01:17:18.096859922Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=de8cb4d8-1ef7-4a0c-83ea-9b3e2f3cb2b4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:17:18 old-k8s-version-800769 crio[651]: time="2024-04-16 01:17:18.096916678Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=de8cb4d8-1ef7-4a0c-83ea-9b3e2f3cb2b4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:17:18 old-k8s-version-800769 crio[651]: time="2024-04-16 01:17:18.096955659Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=de8cb4d8-1ef7-4a0c-83ea-9b3e2f3cb2b4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:17:18 old-k8s-version-800769 crio[651]: time="2024-04-16 01:17:18.131301718Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=585d866a-7fbc-40ef-8db5-9414ed00d3ce name=/runtime.v1.RuntimeService/Version
	Apr 16 01:17:18 old-k8s-version-800769 crio[651]: time="2024-04-16 01:17:18.131433907Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=585d866a-7fbc-40ef-8db5-9414ed00d3ce name=/runtime.v1.RuntimeService/Version
	Apr 16 01:17:18 old-k8s-version-800769 crio[651]: time="2024-04-16 01:17:18.132772221Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5fe588fd-2754-4ab8-bc1f-aa1b2d9c7c15 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:17:18 old-k8s-version-800769 crio[651]: time="2024-04-16 01:17:18.133233795Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713230238133205438,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5fe588fd-2754-4ab8-bc1f-aa1b2d9c7c15 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:17:18 old-k8s-version-800769 crio[651]: time="2024-04-16 01:17:18.133880435Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=93dc8e08-6532-4470-ab09-71d22e4df8b6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:17:18 old-k8s-version-800769 crio[651]: time="2024-04-16 01:17:18.133959706Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=93dc8e08-6532-4470-ab09-71d22e4df8b6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:17:18 old-k8s-version-800769 crio[651]: time="2024-04-16 01:17:18.134000060Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=93dc8e08-6532-4470-ab09-71d22e4df8b6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:17:18 old-k8s-version-800769 crio[651]: time="2024-04-16 01:17:18.170973190Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cf3be10c-edf4-4200-9fa6-4b287358b8e6 name=/runtime.v1.RuntimeService/Version
	Apr 16 01:17:18 old-k8s-version-800769 crio[651]: time="2024-04-16 01:17:18.171090932Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cf3be10c-edf4-4200-9fa6-4b287358b8e6 name=/runtime.v1.RuntimeService/Version
	Apr 16 01:17:18 old-k8s-version-800769 crio[651]: time="2024-04-16 01:17:18.172297151Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=69acf8c2-b61f-472e-adaf-0ede8bf63673 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:17:18 old-k8s-version-800769 crio[651]: time="2024-04-16 01:17:18.172856455Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713230238172748682,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=69acf8c2-b61f-472e-adaf-0ede8bf63673 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:17:18 old-k8s-version-800769 crio[651]: time="2024-04-16 01:17:18.173402865Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=62ee7d1c-07f1-4896-bc31-408e23936c8d name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:17:18 old-k8s-version-800769 crio[651]: time="2024-04-16 01:17:18.173458681Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=62ee7d1c-07f1-4896-bc31-408e23936c8d name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:17:18 old-k8s-version-800769 crio[651]: time="2024-04-16 01:17:18.173494656Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=62ee7d1c-07f1-4896-bc31-408e23936c8d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr16 00:59] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052487] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041260] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.659381] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.701128] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.498139] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.532362] systemd-fstab-generator[572]: Ignoring "noauto" option for root device
	[  +0.139625] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.184218] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.154369] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[Apr16 01:00] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +6.904893] systemd-fstab-generator[836]: Ignoring "noauto" option for root device
	[  +0.058661] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.131661] systemd-fstab-generator[962]: Ignoring "noauto" option for root device
	[ +13.736441] kauditd_printk_skb: 46 callbacks suppressed
	[Apr16 01:04] systemd-fstab-generator[5023]: Ignoring "noauto" option for root device
	[Apr16 01:06] systemd-fstab-generator[5299]: Ignoring "noauto" option for root device
	[  +0.072728] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 01:17:18 up 17 min,  0 users,  load average: 0.02, 0.03, 0.00
	Linux old-k8s-version-800769 5.10.207 #1 SMP Mon Apr 15 15:01:07 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 16 01:17:13 old-k8s-version-800769 kubelet[6467]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0001020c0, 0xc000986ea0)
	Apr 16 01:17:13 old-k8s-version-800769 kubelet[6467]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Apr 16 01:17:13 old-k8s-version-800769 kubelet[6467]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Apr 16 01:17:13 old-k8s-version-800769 kubelet[6467]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Apr 16 01:17:13 old-k8s-version-800769 kubelet[6467]: goroutine 153 [select]:
	Apr 16 01:17:13 old-k8s-version-800769 kubelet[6467]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000bddef0, 0x4f0ac20, 0xc000aa4190, 0x1, 0xc0001020c0)
	Apr 16 01:17:13 old-k8s-version-800769 kubelet[6467]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Apr 16 01:17:13 old-k8s-version-800769 kubelet[6467]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000924c40, 0xc0001020c0)
	Apr 16 01:17:13 old-k8s-version-800769 kubelet[6467]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Apr 16 01:17:13 old-k8s-version-800769 kubelet[6467]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Apr 16 01:17:13 old-k8s-version-800769 kubelet[6467]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Apr 16 01:17:13 old-k8s-version-800769 kubelet[6467]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc00098acf0, 0xc000979ea0)
	Apr 16 01:17:13 old-k8s-version-800769 kubelet[6467]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Apr 16 01:17:13 old-k8s-version-800769 kubelet[6467]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Apr 16 01:17:13 old-k8s-version-800769 kubelet[6467]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Apr 16 01:17:13 old-k8s-version-800769 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Apr 16 01:17:13 old-k8s-version-800769 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Apr 16 01:17:13 old-k8s-version-800769 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Apr 16 01:17:13 old-k8s-version-800769 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Apr 16 01:17:13 old-k8s-version-800769 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Apr 16 01:17:13 old-k8s-version-800769 kubelet[6477]: I0416 01:17:13.833762    6477 server.go:416] Version: v1.20.0
	Apr 16 01:17:13 old-k8s-version-800769 kubelet[6477]: I0416 01:17:13.834236    6477 server.go:837] Client rotation is on, will bootstrap in background
	Apr 16 01:17:13 old-k8s-version-800769 kubelet[6477]: I0416 01:17:13.836754    6477 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Apr 16 01:17:13 old-k8s-version-800769 kubelet[6477]: W0416 01:17:13.837920    6477 manager.go:159] Cannot detect current cgroup on cgroup v2
	Apr 16 01:17:13 old-k8s-version-800769 kubelet[6477]: I0416 01:17:13.837986    6477 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-800769 -n old-k8s-version-800769
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-800769 -n old-k8s-version-800769: exit status 2 (260.784895ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-800769" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.40s)
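The failure above ends with minikube's K8S_KUBELET_NOT_RUNNING error, its suggestion to check 'journalctl -xeu kubelet', and the hint to pass --extra-config=kubelet.cgroup-driver=systemd. A minimal manual triage sketch assembled only from commands the log itself prints; the profile name old-k8s-version-800769 comes from the log, and the cgroup-driver flag is the log's own suggestion rather than a confirmed fix:

	# Inspect the crash-looping kubelet on the node (the restart counter had reached 114)
	minikube -p old-k8s-version-800769 ssh "sudo systemctl status kubelet"
	minikube -p old-k8s-version-800769 ssh "sudo journalctl -xeu kubelet --no-pager | tail -n 50"

	# List any control-plane containers CRI-O managed to start, as the kubeadm output advises
	minikube -p old-k8s-version-800769 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"

	# Retry the start with the suggested kubelet cgroup driver override
	minikube start -p old-k8s-version-800769 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd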

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (375.4s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-572602 -n no-preload-572602
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-04-16 01:20:32.111538351 +0000 UTC m=+6174.628299670
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-572602 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-572602 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.495µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-572602 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
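The describe call above fails only because the test's 9m0s context deadline has already expired (it is cancelled after 1.495µs), so the deployment's image list is never captured. A hedged sketch of re-checking it by hand outside that deadline; the kube context and namespace come from the log, while the jsonpath query is illustrative:

	# Print the images used by the dashboard-metrics-scraper deployment
	kubectl --context no-preload-572602 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'

	# Or grep the describe output for the expected registry.k8s.io/echoserver:1.4 image
	kubectl --context no-preload-572602 -n kubernetes-dashboard describe deploy/dashboard-metrics-scraper | grep -i image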
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-572602 -n no-preload-572602
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-572602 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-572602 logs -n 25: (1.215214191s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| addons  | enable metrics-server -p newest-cni-012509             | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:53 UTC | 16 Apr 24 00:53 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p newest-cni-012509                                   | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:53 UTC | 16 Apr 24 00:53 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p newest-cni-012509                  | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:53 UTC | 16 Apr 24 00:53 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p newest-cni-012509 --memory=2200 --alsologtostderr   | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:53 UTC | 16 Apr 24 00:54 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |                |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |                |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2                      |                              |         |                |                     |                     |
	| image   | newest-cni-012509 image list                           | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 00:54 UTC |
	|         | --format=json                                          |                              |         |                |                     |                     |
	| pause   | -p newest-cni-012509                                   | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 00:54 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |                |                     |                     |
	| unpause | -p newest-cni-012509                                   | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 00:54 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |                |                     |                     |
	| delete  | -p newest-cni-012509                                   | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 00:54 UTC |
	| delete  | -p newest-cni-012509                                   | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 00:54 UTC |
	| delete  | -p                                                     | disable-driver-mounts-988802 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 00:54 UTC |
	|         | disable-driver-mounts-988802                           |                              |         |                |                     |                     |
	| start   | -p embed-certs-617092                                  | embed-certs-617092           | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 00:56 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-653942       | default-k8s-diff-port-653942 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-572602                  | no-preload-572602            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-653942 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 01:06 UTC |
	|         | default-k8s-diff-port-653942                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-800769        | old-k8s-version-800769       | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| start   | -p no-preload-572602                                   | no-preload-572602            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 01:05 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |                |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2                      |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-617092            | embed-certs-617092           | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:56 UTC | 16 Apr 24 00:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-617092                                  | embed-certs-617092           | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:56 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-800769                              | old-k8s-version-800769       | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:56 UTC | 16 Apr 24 00:56 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-800769             | old-k8s-version-800769       | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:56 UTC | 16 Apr 24 00:56 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-800769                              | old-k8s-version-800769       | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:56 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-617092                 | embed-certs-617092           | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:58 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p embed-certs-617092                                  | embed-certs-617092           | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:58 UTC | 16 Apr 24 01:05 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| delete  | -p old-k8s-version-800769                              | old-k8s-version-800769       | jenkins | v1.33.0-beta.0 | 16 Apr 24 01:20 UTC | 16 Apr 24 01:20 UTC |
	| start   | -p auto-381983 --memory=3072                           | auto-381983                  | jenkins | v1.33.0-beta.0 | 16 Apr 24 01:20 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 01:20:19
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 01:20:19.200212   69035 out.go:291] Setting OutFile to fd 1 ...
	I0416 01:20:19.200329   69035 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 01:20:19.200338   69035 out.go:304] Setting ErrFile to fd 2...
	I0416 01:20:19.200343   69035 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 01:20:19.200542   69035 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-7542/.minikube/bin
	I0416 01:20:19.201131   69035 out.go:298] Setting JSON to false
	I0416 01:20:19.202166   69035 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7363,"bootTime":1713223056,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0416 01:20:19.202226   69035 start.go:139] virtualization: kvm guest
	I0416 01:20:19.204646   69035 out.go:177] * [auto-381983] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0416 01:20:19.206471   69035 out.go:177]   - MINIKUBE_LOCATION=18647
	I0416 01:20:19.206468   69035 notify.go:220] Checking for updates...
	I0416 01:20:19.208073   69035 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 01:20:19.209514   69035 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0416 01:20:19.210968   69035 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18647-7542/.minikube
	I0416 01:20:19.212412   69035 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0416 01:20:19.214014   69035 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 01:20:19.215982   69035 config.go:182] Loaded profile config "default-k8s-diff-port-653942": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 01:20:19.216069   69035 config.go:182] Loaded profile config "embed-certs-617092": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 01:20:19.216152   69035 config.go:182] Loaded profile config "no-preload-572602": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0416 01:20:19.216231   69035 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 01:20:19.253410   69035 out.go:177] * Using the kvm2 driver based on user configuration
	I0416 01:20:19.254710   69035 start.go:297] selected driver: kvm2
	I0416 01:20:19.254721   69035 start.go:901] validating driver "kvm2" against <nil>
	I0416 01:20:19.254733   69035 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 01:20:19.255452   69035 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 01:20:19.255535   69035 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18647-7542/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0416 01:20:19.269670   69035 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0416 01:20:19.269709   69035 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0416 01:20:19.269908   69035 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 01:20:19.269966   69035 cni.go:84] Creating CNI manager for ""
	I0416 01:20:19.269978   69035 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 01:20:19.269989   69035 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0416 01:20:19.270033   69035 start.go:340] cluster config:
	{Name:auto-381983 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:auto-381983 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 01:20:19.270117   69035 iso.go:125] acquiring lock: {Name:mk848ef90fbc2a1876645fc8fc16af382c3bcaa9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 01:20:19.272010   69035 out.go:177] * Starting "auto-381983" primary control-plane node in "auto-381983" cluster
	I0416 01:20:19.273440   69035 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 01:20:19.273479   69035 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0416 01:20:19.273502   69035 cache.go:56] Caching tarball of preloaded images
	I0416 01:20:19.273597   69035 preload.go:173] Found /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0416 01:20:19.273611   69035 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0416 01:20:19.273711   69035 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/auto-381983/config.json ...
	I0416 01:20:19.273735   69035 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/auto-381983/config.json: {Name:mk118fc2f4dd48ca8c61c5a684a419538b1f3711 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:20:19.273870   69035 start.go:360] acquireMachinesLock for auto-381983: {Name:mk92bff49461487f8cebf2747ccf61ccb9c772a2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 01:20:19.273900   69035 start.go:364] duration metric: took 16.027µs to acquireMachinesLock for "auto-381983"
	I0416 01:20:19.273912   69035 start.go:93] Provisioning new machine with config: &{Name:auto-381983 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:auto-381983 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0416 01:20:19.273970   69035 start.go:125] createHost starting for "" (driver="kvm2")
	I0416 01:20:19.276215   69035 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0416 01:20:19.276364   69035 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:20:19.276401   69035 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:20:19.290233   69035 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39857
	I0416 01:20:19.290719   69035 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:20:19.291381   69035 main.go:141] libmachine: Using API Version  1
	I0416 01:20:19.291446   69035 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:20:19.291892   69035 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:20:19.292082   69035 main.go:141] libmachine: (auto-381983) Calling .GetMachineName
	I0416 01:20:19.292240   69035 main.go:141] libmachine: (auto-381983) Calling .DriverName
	I0416 01:20:19.292390   69035 start.go:159] libmachine.API.Create for "auto-381983" (driver="kvm2")
	I0416 01:20:19.292416   69035 client.go:168] LocalClient.Create starting
	I0416 01:20:19.292450   69035 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem
	I0416 01:20:19.292481   69035 main.go:141] libmachine: Decoding PEM data...
	I0416 01:20:19.292498   69035 main.go:141] libmachine: Parsing certificate...
	I0416 01:20:19.292553   69035 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem
	I0416 01:20:19.292571   69035 main.go:141] libmachine: Decoding PEM data...
	I0416 01:20:19.292584   69035 main.go:141] libmachine: Parsing certificate...
	I0416 01:20:19.292599   69035 main.go:141] libmachine: Running pre-create checks...
	I0416 01:20:19.292612   69035 main.go:141] libmachine: (auto-381983) Calling .PreCreateCheck
	I0416 01:20:19.292938   69035 main.go:141] libmachine: (auto-381983) Calling .GetConfigRaw
	I0416 01:20:19.293299   69035 main.go:141] libmachine: Creating machine...
	I0416 01:20:19.293312   69035 main.go:141] libmachine: (auto-381983) Calling .Create
	I0416 01:20:19.293448   69035 main.go:141] libmachine: (auto-381983) Creating KVM machine...
	I0416 01:20:19.294754   69035 main.go:141] libmachine: (auto-381983) DBG | found existing default KVM network
	I0416 01:20:19.295989   69035 main.go:141] libmachine: (auto-381983) DBG | I0416 01:20:19.295833   69058 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:95:13:44} reservation:<nil>}
	I0416 01:20:19.296934   69035 main.go:141] libmachine: (auto-381983) DBG | I0416 01:20:19.296849   69058 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:f2:5d:73} reservation:<nil>}
	I0416 01:20:19.297936   69035 main.go:141] libmachine: (auto-381983) DBG | I0416 01:20:19.297861   69058 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:aa:4f:99} reservation:<nil>}
	I0416 01:20:19.298884   69035 main.go:141] libmachine: (auto-381983) DBG | I0416 01:20:19.298820   69058 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000326fc0}
	I0416 01:20:19.298934   69035 main.go:141] libmachine: (auto-381983) DBG | created network xml: 
	I0416 01:20:19.298951   69035 main.go:141] libmachine: (auto-381983) DBG | <network>
	I0416 01:20:19.298963   69035 main.go:141] libmachine: (auto-381983) DBG |   <name>mk-auto-381983</name>
	I0416 01:20:19.298975   69035 main.go:141] libmachine: (auto-381983) DBG |   <dns enable='no'/>
	I0416 01:20:19.298986   69035 main.go:141] libmachine: (auto-381983) DBG |   
	I0416 01:20:19.299000   69035 main.go:141] libmachine: (auto-381983) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0416 01:20:19.299027   69035 main.go:141] libmachine: (auto-381983) DBG |     <dhcp>
	I0416 01:20:19.299047   69035 main.go:141] libmachine: (auto-381983) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0416 01:20:19.299059   69035 main.go:141] libmachine: (auto-381983) DBG |     </dhcp>
	I0416 01:20:19.299073   69035 main.go:141] libmachine: (auto-381983) DBG |   </ip>
	I0416 01:20:19.299085   69035 main.go:141] libmachine: (auto-381983) DBG |   
	I0416 01:20:19.299095   69035 main.go:141] libmachine: (auto-381983) DBG | </network>
	I0416 01:20:19.299105   69035 main.go:141] libmachine: (auto-381983) DBG | 
	I0416 01:20:19.304334   69035 main.go:141] libmachine: (auto-381983) DBG | trying to create private KVM network mk-auto-381983 192.168.72.0/24...
	I0416 01:20:19.374993   69035 main.go:141] libmachine: (auto-381983) DBG | private KVM network mk-auto-381983 192.168.72.0/24 created
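
	For reference, the same network could be reproduced by hand from the XML the driver dumps above. A minimal Go sketch that shells out to virsh, assuming virsh is on PATH and the XML has been saved to mk-auto-381983.xml (a hypothetical file name); minikube itself creates the network through the libvirt API rather than the CLI:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(name string, args ...string) {
		out, err := exec.Command(name, args...).CombinedOutput()
		fmt.Printf("$ %s %v\n%s", name, args, out)
		if err != nil {
			panic(err)
		}
	}

	func main() {
		// Define the network from the XML logged above, then start it.
		run("virsh", "net-define", "mk-auto-381983.xml")
		run("virsh", "net-start", "mk-auto-381983")
		// Confirm it appears alongside the other virbr bridges.
		run("virsh", "net-list", "--all")
	}
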
	I0416 01:20:19.375029   69035 main.go:141] libmachine: (auto-381983) Setting up store path in /home/jenkins/minikube-integration/18647-7542/.minikube/machines/auto-381983 ...
	I0416 01:20:19.375042   69035 main.go:141] libmachine: (auto-381983) DBG | I0416 01:20:19.374961   69058 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18647-7542/.minikube
	I0416 01:20:19.375055   69035 main.go:141] libmachine: (auto-381983) Building disk image from file:///home/jenkins/minikube-integration/18647-7542/.minikube/cache/iso/amd64/minikube-v1.33.0-1713175573-18634-amd64.iso
	I0416 01:20:19.375078   69035 main.go:141] libmachine: (auto-381983) Downloading /home/jenkins/minikube-integration/18647-7542/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18647-7542/.minikube/cache/iso/amd64/minikube-v1.33.0-1713175573-18634-amd64.iso...
	I0416 01:20:19.599084   69035 main.go:141] libmachine: (auto-381983) DBG | I0416 01:20:19.598955   69058 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/auto-381983/id_rsa...
	I0416 01:20:19.757514   69035 main.go:141] libmachine: (auto-381983) DBG | I0416 01:20:19.757373   69058 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/auto-381983/auto-381983.rawdisk...
	I0416 01:20:19.757555   69035 main.go:141] libmachine: (auto-381983) DBG | Writing magic tar header
	I0416 01:20:19.757573   69035 main.go:141] libmachine: (auto-381983) DBG | Writing SSH key tar header
	I0416 01:20:19.757585   69035 main.go:141] libmachine: (auto-381983) DBG | I0416 01:20:19.757528   69058 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18647-7542/.minikube/machines/auto-381983 ...
	I0416 01:20:19.757808   69035 main.go:141] libmachine: (auto-381983) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/auto-381983
	I0416 01:20:19.757844   69035 main.go:141] libmachine: (auto-381983) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18647-7542/.minikube/machines
	I0416 01:20:19.757859   69035 main.go:141] libmachine: (auto-381983) Setting executable bit set on /home/jenkins/minikube-integration/18647-7542/.minikube/machines/auto-381983 (perms=drwx------)
	I0416 01:20:19.757873   69035 main.go:141] libmachine: (auto-381983) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18647-7542/.minikube
	I0416 01:20:19.757887   69035 main.go:141] libmachine: (auto-381983) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18647-7542
	I0416 01:20:19.757904   69035 main.go:141] libmachine: (auto-381983) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0416 01:20:19.757922   69035 main.go:141] libmachine: (auto-381983) Setting executable bit set on /home/jenkins/minikube-integration/18647-7542/.minikube/machines (perms=drwxr-xr-x)
	I0416 01:20:19.757934   69035 main.go:141] libmachine: (auto-381983) DBG | Checking permissions on dir: /home/jenkins
	I0416 01:20:19.757960   69035 main.go:141] libmachine: (auto-381983) Setting executable bit set on /home/jenkins/minikube-integration/18647-7542/.minikube (perms=drwxr-xr-x)
	I0416 01:20:19.757972   69035 main.go:141] libmachine: (auto-381983) DBG | Checking permissions on dir: /home
	I0416 01:20:19.757998   69035 main.go:141] libmachine: (auto-381983) DBG | Skipping /home - not owner
	I0416 01:20:19.758014   69035 main.go:141] libmachine: (auto-381983) Setting executable bit set on /home/jenkins/minikube-integration/18647-7542 (perms=drwxrwxr-x)
	I0416 01:20:19.758023   69035 main.go:141] libmachine: (auto-381983) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0416 01:20:19.758092   69035 main.go:141] libmachine: (auto-381983) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0416 01:20:19.758125   69035 main.go:141] libmachine: (auto-381983) Creating domain...
	I0416 01:20:19.759063   69035 main.go:141] libmachine: (auto-381983) define libvirt domain using xml: 
	I0416 01:20:19.759085   69035 main.go:141] libmachine: (auto-381983) <domain type='kvm'>
	I0416 01:20:19.759109   69035 main.go:141] libmachine: (auto-381983)   <name>auto-381983</name>
	I0416 01:20:19.759132   69035 main.go:141] libmachine: (auto-381983)   <memory unit='MiB'>3072</memory>
	I0416 01:20:19.759145   69035 main.go:141] libmachine: (auto-381983)   <vcpu>2</vcpu>
	I0416 01:20:19.759156   69035 main.go:141] libmachine: (auto-381983)   <features>
	I0416 01:20:19.759168   69035 main.go:141] libmachine: (auto-381983)     <acpi/>
	I0416 01:20:19.759174   69035 main.go:141] libmachine: (auto-381983)     <apic/>
	I0416 01:20:19.759180   69035 main.go:141] libmachine: (auto-381983)     <pae/>
	I0416 01:20:19.759188   69035 main.go:141] libmachine: (auto-381983)     
	I0416 01:20:19.759200   69035 main.go:141] libmachine: (auto-381983)   </features>
	I0416 01:20:19.759211   69035 main.go:141] libmachine: (auto-381983)   <cpu mode='host-passthrough'>
	I0416 01:20:19.759219   69035 main.go:141] libmachine: (auto-381983)   
	I0416 01:20:19.759227   69035 main.go:141] libmachine: (auto-381983)   </cpu>
	I0416 01:20:19.759244   69035 main.go:141] libmachine: (auto-381983)   <os>
	I0416 01:20:19.759263   69035 main.go:141] libmachine: (auto-381983)     <type>hvm</type>
	I0416 01:20:19.759276   69035 main.go:141] libmachine: (auto-381983)     <boot dev='cdrom'/>
	I0416 01:20:19.759287   69035 main.go:141] libmachine: (auto-381983)     <boot dev='hd'/>
	I0416 01:20:19.759298   69035 main.go:141] libmachine: (auto-381983)     <bootmenu enable='no'/>
	I0416 01:20:19.759308   69035 main.go:141] libmachine: (auto-381983)   </os>
	I0416 01:20:19.759317   69035 main.go:141] libmachine: (auto-381983)   <devices>
	I0416 01:20:19.759334   69035 main.go:141] libmachine: (auto-381983)     <disk type='file' device='cdrom'>
	I0416 01:20:19.759352   69035 main.go:141] libmachine: (auto-381983)       <source file='/home/jenkins/minikube-integration/18647-7542/.minikube/machines/auto-381983/boot2docker.iso'/>
	I0416 01:20:19.759365   69035 main.go:141] libmachine: (auto-381983)       <target dev='hdc' bus='scsi'/>
	I0416 01:20:19.759374   69035 main.go:141] libmachine: (auto-381983)       <readonly/>
	I0416 01:20:19.759381   69035 main.go:141] libmachine: (auto-381983)     </disk>
	I0416 01:20:19.759395   69035 main.go:141] libmachine: (auto-381983)     <disk type='file' device='disk'>
	I0416 01:20:19.759420   69035 main.go:141] libmachine: (auto-381983)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0416 01:20:19.759452   69035 main.go:141] libmachine: (auto-381983)       <source file='/home/jenkins/minikube-integration/18647-7542/.minikube/machines/auto-381983/auto-381983.rawdisk'/>
	I0416 01:20:19.759465   69035 main.go:141] libmachine: (auto-381983)       <target dev='hda' bus='virtio'/>
	I0416 01:20:19.759477   69035 main.go:141] libmachine: (auto-381983)     </disk>
	I0416 01:20:19.759486   69035 main.go:141] libmachine: (auto-381983)     <interface type='network'>
	I0416 01:20:19.759500   69035 main.go:141] libmachine: (auto-381983)       <source network='mk-auto-381983'/>
	I0416 01:20:19.759514   69035 main.go:141] libmachine: (auto-381983)       <model type='virtio'/>
	I0416 01:20:19.759525   69035 main.go:141] libmachine: (auto-381983)     </interface>
	I0416 01:20:19.759538   69035 main.go:141] libmachine: (auto-381983)     <interface type='network'>
	I0416 01:20:19.759549   69035 main.go:141] libmachine: (auto-381983)       <source network='default'/>
	I0416 01:20:19.759567   69035 main.go:141] libmachine: (auto-381983)       <model type='virtio'/>
	I0416 01:20:19.759584   69035 main.go:141] libmachine: (auto-381983)     </interface>
	I0416 01:20:19.759600   69035 main.go:141] libmachine: (auto-381983)     <serial type='pty'>
	I0416 01:20:19.759611   69035 main.go:141] libmachine: (auto-381983)       <target port='0'/>
	I0416 01:20:19.759627   69035 main.go:141] libmachine: (auto-381983)     </serial>
	I0416 01:20:19.759638   69035 main.go:141] libmachine: (auto-381983)     <console type='pty'>
	I0416 01:20:19.759649   69035 main.go:141] libmachine: (auto-381983)       <target type='serial' port='0'/>
	I0416 01:20:19.759659   69035 main.go:141] libmachine: (auto-381983)     </console>
	I0416 01:20:19.759671   69035 main.go:141] libmachine: (auto-381983)     <rng model='virtio'>
	I0416 01:20:19.759684   69035 main.go:141] libmachine: (auto-381983)       <backend model='random'>/dev/random</backend>
	I0416 01:20:19.759693   69035 main.go:141] libmachine: (auto-381983)     </rng>
	I0416 01:20:19.759701   69035 main.go:141] libmachine: (auto-381983)     
	I0416 01:20:19.759705   69035 main.go:141] libmachine: (auto-381983)     
	I0416 01:20:19.759710   69035 main.go:141] libmachine: (auto-381983)   </devices>
	I0416 01:20:19.759715   69035 main.go:141] libmachine: (auto-381983) </domain>
	I0416 01:20:19.759719   69035 main.go:141] libmachine: (auto-381983) 
	I0416 01:20:19.764166   69035 main.go:141] libmachine: (auto-381983) DBG | domain auto-381983 has defined MAC address 52:54:00:13:78:fd in network default
	I0416 01:20:19.764800   69035 main.go:141] libmachine: (auto-381983) Ensuring networks are active...
	I0416 01:20:19.764833   69035 main.go:141] libmachine: (auto-381983) DBG | domain auto-381983 has defined MAC address 52:54:00:1a:b7:9b in network mk-auto-381983
	I0416 01:20:19.765626   69035 main.go:141] libmachine: (auto-381983) Ensuring network default is active
	I0416 01:20:19.765991   69035 main.go:141] libmachine: (auto-381983) Ensuring network mk-auto-381983 is active
	I0416 01:20:19.766585   69035 main.go:141] libmachine: (auto-381983) Getting domain xml...
	I0416 01:20:19.767306   69035 main.go:141] libmachine: (auto-381983) Creating domain...
	I0416 01:20:21.020612   69035 main.go:141] libmachine: (auto-381983) Waiting to get IP...
	I0416 01:20:21.021593   69035 main.go:141] libmachine: (auto-381983) DBG | domain auto-381983 has defined MAC address 52:54:00:1a:b7:9b in network mk-auto-381983
	I0416 01:20:21.022087   69035 main.go:141] libmachine: (auto-381983) DBG | unable to find current IP address of domain auto-381983 in network mk-auto-381983
	I0416 01:20:21.022116   69035 main.go:141] libmachine: (auto-381983) DBG | I0416 01:20:21.022046   69058 retry.go:31] will retry after 254.949572ms: waiting for machine to come up
	I0416 01:20:21.278432   69035 main.go:141] libmachine: (auto-381983) DBG | domain auto-381983 has defined MAC address 52:54:00:1a:b7:9b in network mk-auto-381983
	I0416 01:20:21.278960   69035 main.go:141] libmachine: (auto-381983) DBG | unable to find current IP address of domain auto-381983 in network mk-auto-381983
	I0416 01:20:21.279015   69035 main.go:141] libmachine: (auto-381983) DBG | I0416 01:20:21.278904   69058 retry.go:31] will retry after 346.77195ms: waiting for machine to come up
	I0416 01:20:21.627538   69035 main.go:141] libmachine: (auto-381983) DBG | domain auto-381983 has defined MAC address 52:54:00:1a:b7:9b in network mk-auto-381983
	I0416 01:20:21.628081   69035 main.go:141] libmachine: (auto-381983) DBG | unable to find current IP address of domain auto-381983 in network mk-auto-381983
	I0416 01:20:21.628113   69035 main.go:141] libmachine: (auto-381983) DBG | I0416 01:20:21.628009   69058 retry.go:31] will retry after 296.842699ms: waiting for machine to come up
	I0416 01:20:21.926432   69035 main.go:141] libmachine: (auto-381983) DBG | domain auto-381983 has defined MAC address 52:54:00:1a:b7:9b in network mk-auto-381983
	I0416 01:20:21.926844   69035 main.go:141] libmachine: (auto-381983) DBG | unable to find current IP address of domain auto-381983 in network mk-auto-381983
	I0416 01:20:21.926874   69035 main.go:141] libmachine: (auto-381983) DBG | I0416 01:20:21.926791   69058 retry.go:31] will retry after 434.941059ms: waiting for machine to come up
	I0416 01:20:22.363408   69035 main.go:141] libmachine: (auto-381983) DBG | domain auto-381983 has defined MAC address 52:54:00:1a:b7:9b in network mk-auto-381983
	I0416 01:20:22.363981   69035 main.go:141] libmachine: (auto-381983) DBG | unable to find current IP address of domain auto-381983 in network mk-auto-381983
	I0416 01:20:22.364012   69035 main.go:141] libmachine: (auto-381983) DBG | I0416 01:20:22.363944   69058 retry.go:31] will retry after 665.771991ms: waiting for machine to come up
	I0416 01:20:23.031349   69035 main.go:141] libmachine: (auto-381983) DBG | domain auto-381983 has defined MAC address 52:54:00:1a:b7:9b in network mk-auto-381983
	I0416 01:20:23.031773   69035 main.go:141] libmachine: (auto-381983) DBG | unable to find current IP address of domain auto-381983 in network mk-auto-381983
	I0416 01:20:23.031807   69035 main.go:141] libmachine: (auto-381983) DBG | I0416 01:20:23.031729   69058 retry.go:31] will retry after 920.910687ms: waiting for machine to come up
	I0416 01:20:23.953858   69035 main.go:141] libmachine: (auto-381983) DBG | domain auto-381983 has defined MAC address 52:54:00:1a:b7:9b in network mk-auto-381983
	I0416 01:20:23.954358   69035 main.go:141] libmachine: (auto-381983) DBG | unable to find current IP address of domain auto-381983 in network mk-auto-381983
	I0416 01:20:23.954396   69035 main.go:141] libmachine: (auto-381983) DBG | I0416 01:20:23.954305   69058 retry.go:31] will retry after 1.021877705s: waiting for machine to come up
	I0416 01:20:24.977600   69035 main.go:141] libmachine: (auto-381983) DBG | domain auto-381983 has defined MAC address 52:54:00:1a:b7:9b in network mk-auto-381983
	I0416 01:20:24.977929   69035 main.go:141] libmachine: (auto-381983) DBG | unable to find current IP address of domain auto-381983 in network mk-auto-381983
	I0416 01:20:24.977953   69035 main.go:141] libmachine: (auto-381983) DBG | I0416 01:20:24.977895   69058 retry.go:31] will retry after 1.102964142s: waiting for machine to come up
	I0416 01:20:26.083092   69035 main.go:141] libmachine: (auto-381983) DBG | domain auto-381983 has defined MAC address 52:54:00:1a:b7:9b in network mk-auto-381983
	I0416 01:20:26.083585   69035 main.go:141] libmachine: (auto-381983) DBG | unable to find current IP address of domain auto-381983 in network mk-auto-381983
	I0416 01:20:26.083612   69035 main.go:141] libmachine: (auto-381983) DBG | I0416 01:20:26.083528   69058 retry.go:31] will retry after 1.612290783s: waiting for machine to come up
	I0416 01:20:27.698208   69035 main.go:141] libmachine: (auto-381983) DBG | domain auto-381983 has defined MAC address 52:54:00:1a:b7:9b in network mk-auto-381983
	I0416 01:20:27.698643   69035 main.go:141] libmachine: (auto-381983) DBG | unable to find current IP address of domain auto-381983 in network mk-auto-381983
	I0416 01:20:27.698669   69035 main.go:141] libmachine: (auto-381983) DBG | I0416 01:20:27.698622   69058 retry.go:31] will retry after 2.094553864s: waiting for machine to come up
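
	The retry loop above keeps asking libvirt for the new domain's address until the guest has obtained a DHCP lease on mk-auto-381983. The same check can be done by polling the network's lease table for the MAC the driver logged (52:54:00:1a:b7:9b). A minimal Go sketch using the virsh CLI; the driver queries the lease through the libvirt API and uses growing backoff delays rather than a fixed sleep:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		const network = "mk-auto-381983"
		const mac = "52:54:00:1a:b7:9b" // MAC reported for the domain in this network

		for i := 0; i < 30; i++ {
			out, err := exec.Command("virsh", "net-dhcp-leases", network).Output()
			if err == nil {
				for _, line := range strings.Split(string(out), "\n") {
					if strings.Contains(line, mac) {
						fmt.Println("lease found:", strings.TrimSpace(line))
						return
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("no lease yet for", mac)
	}
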
	
	
	==> CRI-O <==
	Apr 16 01:20:32 no-preload-572602 crio[722]: time="2024-04-16 01:20:32.791349737Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713230432791313598,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99978,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1258d804-6772-44bb-9d06-fc5a7f0094cc name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:20:32 no-preload-572602 crio[722]: time="2024-04-16 01:20:32.792153342Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=19df111d-8f1b-4254-9970-c85e009eaf63 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:20:32 no-preload-572602 crio[722]: time="2024-04-16 01:20:32.792205708Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=19df111d-8f1b-4254-9970-c85e009eaf63 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:20:32 no-preload-572602 crio[722]: time="2024-04-16 01:20:32.792390153Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4084ca3da80ddb16e306dcabb7c20593f8e97f33727b62127f188994ad25adde,PodSandboxId:7c533eb612bcfeb3e328c5ebae02e2433479a2a2952017e65215e6900b611a08,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713229513798457587,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9ac9c93-0e50-4598-a9c4-a12e4ff14063,},Annotations:map[string]string{io.kubernetes.container.hash: 34478f80,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04bf17c2ef31ccb5b6baa0c6ca8f18c10429b2b41a05562d35cb3e7624d425b0,PodSandboxId:a45b6f26d15ebd8db3ab9a3dd6dbc66fb2fd1a68b1fbc3d725f502fec0621958,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713229513484300883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p62sn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36768eb2-2a22-48e1-b271-f262aa64e014,},Annotations:map[string]string{io.kubernetes.container.hash: 83dc39b1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92f47059ece58130b68e11d0faa581f7b336a91743d8365eb9dac84da7aff6d0,PodSandboxId:8b22609e17e2d4ddf269e30e8ed22ed2d44a4e963d9dc02bd3f00230bf122ea8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713229513380146870,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2b5ht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8
d48a4c-6efd-409a-98be-3ec5bf639470,},Annotations:map[string]string{io.kubernetes.container.hash: 9eeac67b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b028f56375a963d3290fb3ca9532c765c119e48afb0243eca99823653744037,PodSandboxId:dd56032ee4f5665a6f2e99c9745b37a3e5121edf119f16172b34294b98f3f297,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_RUNNING,CreatedAt:
1713229512641915406,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6cjlc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c4d9303-8c08-4385-a6b9-63dda0d9a274,},Annotations:map[string]string{io.kubernetes.container.hash: efc90ea9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c03a58ac3d73ae22e09299e52c1b1cea91a8f532c2e83efe6db26b0cf0fcd9b6,PodSandboxId:fd7e62b052cdc105a5b72766b74dbb31fe56ea113ab1de728e762a29619ee05c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713229493072332354,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-572602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18a9fc8e3ab1c697889b081fefcfa178,},Annotations:map[string]string{io.kubernetes.container.hash: 258756dc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b895d3cc11f002a93248562e7f584ce2a601717f3f84b5c05201ece6ad116e4e,PodSandboxId:98fbeb3707a7aa7fcd2de3d49281c8543a900ae8b7715291526911fb3b9d1feb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_RUNNING,CreatedAt:1713229493057979886,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-572602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7184d6eaf0b500beb6b7cea960d1905,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:460245770a3129b38002697725b12fa2bbe8dba33dd691938054fb6a1cb63f63,PodSandboxId:4ef695f642f6c7a59106829ebc80d9c5ba1aa4a23e501216d835de606105ccd1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_RUNNING,CreatedAt:1713229493027192998,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-572602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d890064dc3b1b546cf082926e2564845,},Annotations:map[string]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cece99507aaa37e50274debcdc6deb652e50017ded358129d85704ff474638e,PodSandboxId:eb4d3cf79f6902ec327d2bcc3aa599c22fc15ab12f64bbecf47654ceb96365e7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_RUNNING,CreatedAt:1713229492979142873,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-572602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cbdec3ea486f052b898933d08258cc4,},Annotations:map[string]string{io.kubernetes.container.hash: ebb91f0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=19df111d-8f1b-4254-9970-c85e009eaf63 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:20:32 no-preload-572602 crio[722]: time="2024-04-16 01:20:32.836354231Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b593d99a-d64f-42a3-92ed-9b80ab03423a name=/runtime.v1.RuntimeService/Version
	Apr 16 01:20:32 no-preload-572602 crio[722]: time="2024-04-16 01:20:32.836452797Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b593d99a-d64f-42a3-92ed-9b80ab03423a name=/runtime.v1.RuntimeService/Version
	Apr 16 01:20:32 no-preload-572602 crio[722]: time="2024-04-16 01:20:32.837889340Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=819f14f1-f723-4c43-9cfc-16999101f03f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:20:32 no-preload-572602 crio[722]: time="2024-04-16 01:20:32.838515835Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713230432838483696,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99978,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=819f14f1-f723-4c43-9cfc-16999101f03f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:20:32 no-preload-572602 crio[722]: time="2024-04-16 01:20:32.839465552Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4a037ad0-a17b-438a-b4c5-3369625818c6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:20:32 no-preload-572602 crio[722]: time="2024-04-16 01:20:32.839518812Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4a037ad0-a17b-438a-b4c5-3369625818c6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:20:32 no-preload-572602 crio[722]: time="2024-04-16 01:20:32.839815087Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4084ca3da80ddb16e306dcabb7c20593f8e97f33727b62127f188994ad25adde,PodSandboxId:7c533eb612bcfeb3e328c5ebae02e2433479a2a2952017e65215e6900b611a08,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713229513798457587,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9ac9c93-0e50-4598-a9c4-a12e4ff14063,},Annotations:map[string]string{io.kubernetes.container.hash: 34478f80,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04bf17c2ef31ccb5b6baa0c6ca8f18c10429b2b41a05562d35cb3e7624d425b0,PodSandboxId:a45b6f26d15ebd8db3ab9a3dd6dbc66fb2fd1a68b1fbc3d725f502fec0621958,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713229513484300883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p62sn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36768eb2-2a22-48e1-b271-f262aa64e014,},Annotations:map[string]string{io.kubernetes.container.hash: 83dc39b1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92f47059ece58130b68e11d0faa581f7b336a91743d8365eb9dac84da7aff6d0,PodSandboxId:8b22609e17e2d4ddf269e30e8ed22ed2d44a4e963d9dc02bd3f00230bf122ea8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713229513380146870,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2b5ht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8
d48a4c-6efd-409a-98be-3ec5bf639470,},Annotations:map[string]string{io.kubernetes.container.hash: 9eeac67b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b028f56375a963d3290fb3ca9532c765c119e48afb0243eca99823653744037,PodSandboxId:dd56032ee4f5665a6f2e99c9745b37a3e5121edf119f16172b34294b98f3f297,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_RUNNING,CreatedAt:
1713229512641915406,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6cjlc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c4d9303-8c08-4385-a6b9-63dda0d9a274,},Annotations:map[string]string{io.kubernetes.container.hash: efc90ea9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c03a58ac3d73ae22e09299e52c1b1cea91a8f532c2e83efe6db26b0cf0fcd9b6,PodSandboxId:fd7e62b052cdc105a5b72766b74dbb31fe56ea113ab1de728e762a29619ee05c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713229493072332354,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-572602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18a9fc8e3ab1c697889b081fefcfa178,},Annotations:map[string]string{io.kubernetes.container.hash: 258756dc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b895d3cc11f002a93248562e7f584ce2a601717f3f84b5c05201ece6ad116e4e,PodSandboxId:98fbeb3707a7aa7fcd2de3d49281c8543a900ae8b7715291526911fb3b9d1feb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_RUNNING,CreatedAt:1713229493057979886,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-572602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7184d6eaf0b500beb6b7cea960d1905,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:460245770a3129b38002697725b12fa2bbe8dba33dd691938054fb6a1cb63f63,PodSandboxId:4ef695f642f6c7a59106829ebc80d9c5ba1aa4a23e501216d835de606105ccd1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_RUNNING,CreatedAt:1713229493027192998,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-572602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d890064dc3b1b546cf082926e2564845,},Annotations:map[string]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cece99507aaa37e50274debcdc6deb652e50017ded358129d85704ff474638e,PodSandboxId:eb4d3cf79f6902ec327d2bcc3aa599c22fc15ab12f64bbecf47654ceb96365e7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_RUNNING,CreatedAt:1713229492979142873,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-572602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cbdec3ea486f052b898933d08258cc4,},Annotations:map[string]string{io.kubernetes.container.hash: ebb91f0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4a037ad0-a17b-438a-b4c5-3369625818c6 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:20:32 no-preload-572602 crio[722]: time="2024-04-16 01:20:32.882279578Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=060a5de2-2137-4eb4-af98-2b77d19fb841 name=/runtime.v1.RuntimeService/Version
	Apr 16 01:20:32 no-preload-572602 crio[722]: time="2024-04-16 01:20:32.882802139Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=060a5de2-2137-4eb4-af98-2b77d19fb841 name=/runtime.v1.RuntimeService/Version
	Apr 16 01:20:32 no-preload-572602 crio[722]: time="2024-04-16 01:20:32.883970208Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=df021637-1186-45c8-b97e-4afaeb922d4f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:20:32 no-preload-572602 crio[722]: time="2024-04-16 01:20:32.884366209Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713230432884343841,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99978,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=df021637-1186-45c8-b97e-4afaeb922d4f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:20:32 no-preload-572602 crio[722]: time="2024-04-16 01:20:32.885044496Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=199ef638-2c07-43ae-a157-ec1f9a633c47 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:20:32 no-preload-572602 crio[722]: time="2024-04-16 01:20:32.885120186Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=199ef638-2c07-43ae-a157-ec1f9a633c47 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:20:32 no-preload-572602 crio[722]: time="2024-04-16 01:20:32.885309090Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4084ca3da80ddb16e306dcabb7c20593f8e97f33727b62127f188994ad25adde,PodSandboxId:7c533eb612bcfeb3e328c5ebae02e2433479a2a2952017e65215e6900b611a08,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713229513798457587,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9ac9c93-0e50-4598-a9c4-a12e4ff14063,},Annotations:map[string]string{io.kubernetes.container.hash: 34478f80,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04bf17c2ef31ccb5b6baa0c6ca8f18c10429b2b41a05562d35cb3e7624d425b0,PodSandboxId:a45b6f26d15ebd8db3ab9a3dd6dbc66fb2fd1a68b1fbc3d725f502fec0621958,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713229513484300883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p62sn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36768eb2-2a22-48e1-b271-f262aa64e014,},Annotations:map[string]string{io.kubernetes.container.hash: 83dc39b1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92f47059ece58130b68e11d0faa581f7b336a91743d8365eb9dac84da7aff6d0,PodSandboxId:8b22609e17e2d4ddf269e30e8ed22ed2d44a4e963d9dc02bd3f00230bf122ea8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713229513380146870,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2b5ht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8
d48a4c-6efd-409a-98be-3ec5bf639470,},Annotations:map[string]string{io.kubernetes.container.hash: 9eeac67b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b028f56375a963d3290fb3ca9532c765c119e48afb0243eca99823653744037,PodSandboxId:dd56032ee4f5665a6f2e99c9745b37a3e5121edf119f16172b34294b98f3f297,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_RUNNING,CreatedAt:
1713229512641915406,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6cjlc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c4d9303-8c08-4385-a6b9-63dda0d9a274,},Annotations:map[string]string{io.kubernetes.container.hash: efc90ea9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c03a58ac3d73ae22e09299e52c1b1cea91a8f532c2e83efe6db26b0cf0fcd9b6,PodSandboxId:fd7e62b052cdc105a5b72766b74dbb31fe56ea113ab1de728e762a29619ee05c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713229493072332354,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-572602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18a9fc8e3ab1c697889b081fefcfa178,},Annotations:map[string]string{io.kubernetes.container.hash: 258756dc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b895d3cc11f002a93248562e7f584ce2a601717f3f84b5c05201ece6ad116e4e,PodSandboxId:98fbeb3707a7aa7fcd2de3d49281c8543a900ae8b7715291526911fb3b9d1feb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_RUNNING,CreatedAt:1713229493057979886,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-572602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7184d6eaf0b500beb6b7cea960d1905,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:460245770a3129b38002697725b12fa2bbe8dba33dd691938054fb6a1cb63f63,PodSandboxId:4ef695f642f6c7a59106829ebc80d9c5ba1aa4a23e501216d835de606105ccd1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_RUNNING,CreatedAt:1713229493027192998,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-572602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d890064dc3b1b546cf082926e2564845,},Annotations:map[string]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cece99507aaa37e50274debcdc6deb652e50017ded358129d85704ff474638e,PodSandboxId:eb4d3cf79f6902ec327d2bcc3aa599c22fc15ab12f64bbecf47654ceb96365e7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_RUNNING,CreatedAt:1713229492979142873,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-572602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cbdec3ea486f052b898933d08258cc4,},Annotations:map[string]string{io.kubernetes.container.hash: ebb91f0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=199ef638-2c07-43ae-a157-ec1f9a633c47 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:20:32 no-preload-572602 crio[722]: time="2024-04-16 01:20:32.924912018Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=50bec5a8-12c2-4afa-ba92-e5f5de8ded43 name=/runtime.v1.RuntimeService/Version
	Apr 16 01:20:32 no-preload-572602 crio[722]: time="2024-04-16 01:20:32.925005246Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=50bec5a8-12c2-4afa-ba92-e5f5de8ded43 name=/runtime.v1.RuntimeService/Version
	Apr 16 01:20:32 no-preload-572602 crio[722]: time="2024-04-16 01:20:32.926490211Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b8f1986b-fc5d-4bc1-9f51-03bbcaa35317 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:20:32 no-preload-572602 crio[722]: time="2024-04-16 01:20:32.926897940Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713230432926878151,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:99978,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b8f1986b-fc5d-4bc1-9f51-03bbcaa35317 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:20:32 no-preload-572602 crio[722]: time="2024-04-16 01:20:32.927761138Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f4aa5fcb-e2a8-4c1a-a8a8-d7d9ff7eeeb4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:20:32 no-preload-572602 crio[722]: time="2024-04-16 01:20:32.927894702Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f4aa5fcb-e2a8-4c1a-a8a8-d7d9ff7eeeb4 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:20:32 no-preload-572602 crio[722]: time="2024-04-16 01:20:32.928511777Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4084ca3da80ddb16e306dcabb7c20593f8e97f33727b62127f188994ad25adde,PodSandboxId:7c533eb612bcfeb3e328c5ebae02e2433479a2a2952017e65215e6900b611a08,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713229513798457587,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b9ac9c93-0e50-4598-a9c4-a12e4ff14063,},Annotations:map[string]string{io.kubernetes.container.hash: 34478f80,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04bf17c2ef31ccb5b6baa0c6ca8f18c10429b2b41a05562d35cb3e7624d425b0,PodSandboxId:a45b6f26d15ebd8db3ab9a3dd6dbc66fb2fd1a68b1fbc3d725f502fec0621958,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713229513484300883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-p62sn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36768eb2-2a22-48e1-b271-f262aa64e014,},Annotations:map[string]string{io.kubernetes.container.hash: 83dc39b1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92f47059ece58130b68e11d0faa581f7b336a91743d8365eb9dac84da7aff6d0,PodSandboxId:8b22609e17e2d4ddf269e30e8ed22ed2d44a4e963d9dc02bd3f00230bf122ea8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713229513380146870,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-2b5ht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8
d48a4c-6efd-409a-98be-3ec5bf639470,},Annotations:map[string]string{io.kubernetes.container.hash: 9eeac67b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b028f56375a963d3290fb3ca9532c765c119e48afb0243eca99823653744037,PodSandboxId:dd56032ee4f5665a6f2e99c9745b37a3e5121edf119f16172b34294b98f3f297,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e,State:CONTAINER_RUNNING,CreatedAt:
1713229512641915406,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6cjlc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c4d9303-8c08-4385-a6b9-63dda0d9a274,},Annotations:map[string]string{io.kubernetes.container.hash: efc90ea9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c03a58ac3d73ae22e09299e52c1b1cea91a8f532c2e83efe6db26b0cf0fcd9b6,PodSandboxId:fd7e62b052cdc105a5b72766b74dbb31fe56ea113ab1de728e762a29619ee05c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713229493072332354,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-572602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18a9fc8e3ab1c697889b081fefcfa178,},Annotations:map[string]string{io.kubernetes.container.hash: 258756dc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b895d3cc11f002a93248562e7f584ce2a601717f3f84b5c05201ece6ad116e4e,PodSandboxId:98fbeb3707a7aa7fcd2de3d49281c8543a900ae8b7715291526911fb3b9d1feb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6,State:CONTAINER_RUNNING,CreatedAt:1713229493057979886,Labels:map[string]string{io.kubernetes.container.nam
e: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-572602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7184d6eaf0b500beb6b7cea960d1905,},Annotations:map[string]string{io.kubernetes.container.hash: 1e34585d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:460245770a3129b38002697725b12fa2bbe8dba33dd691938054fb6a1cb63f63,PodSandboxId:4ef695f642f6c7a59106829ebc80d9c5ba1aa4a23e501216d835de606105ccd1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b,State:CONTAINER_RUNNING,CreatedAt:1713229493027192998,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-572602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d890064dc3b1b546cf082926e2564845,},Annotations:map[string]string{io.kubernetes.container.hash: c80ec39b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cece99507aaa37e50274debcdc6deb652e50017ded358129d85704ff474638e,PodSandboxId:eb4d3cf79f6902ec327d2bcc3aa599c22fc15ab12f64bbecf47654ceb96365e7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1,State:CONTAINER_RUNNING,CreatedAt:1713229492979142873,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-572602,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cbdec3ea486f052b898933d08258cc4,},Annotations:map[string]string{io.kubernetes.container.hash: ebb91f0b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f4aa5fcb-e2a8-4c1a-a8a8-d7d9ff7eeeb4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4084ca3da80dd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   7c533eb612bcf       storage-provisioner
	04bf17c2ef31c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   a45b6f26d15eb       coredns-7db6d8ff4d-p62sn
	92f47059ece58       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   15 minutes ago      Running             coredns                   0                   8b22609e17e2d       coredns-7db6d8ff4d-2b5ht
	4b028f56375a9       35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e   15 minutes ago      Running             kube-proxy                0                   dd56032ee4f56       kube-proxy-6cjlc
	c03a58ac3d73a       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   15 minutes ago      Running             etcd                      2                   fd7e62b052cdc       etcd-no-preload-572602
	b895d3cc11f00       461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6   15 minutes ago      Running             kube-scheduler            2                   98fbeb3707a7a       kube-scheduler-no-preload-572602
	460245770a312       ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b   15 minutes ago      Running             kube-controller-manager   2                   4ef695f642f6c       kube-controller-manager-no-preload-572602
	8cece99507aaa       65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1   15 minutes ago      Running             kube-apiserver            2                   eb4d3cf79f690       kube-apiserver-no-preload-572602
	
	
	==> coredns [04bf17c2ef31ccb5b6baa0c6ca8f18c10429b2b41a05562d35cb3e7624d425b0] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [92f47059ece58130b68e11d0faa581f7b336a91743d8365eb9dac84da7aff6d0] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-572602
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-572602
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e98a202b00bb48daab8bbee2f0c7bcf1556f8388
	                    minikube.k8s.io/name=no-preload-572602
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_16T01_04_59_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 01:04:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-572602
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 01:20:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 01:15:30 +0000   Tue, 16 Apr 2024 01:04:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 01:15:30 +0000   Tue, 16 Apr 2024 01:04:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 01:15:30 +0000   Tue, 16 Apr 2024 01:04:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 01:15:30 +0000   Tue, 16 Apr 2024 01:04:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.121
	  Hostname:    no-preload-572602
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 50d747e62b974d8286588f595ee1d471
	  System UUID:                50d747e6-2b97-4d82-8658-8f595ee1d471
	  Boot ID:                    10322727-cc02-48ec-b8d2-a3f54c053fd9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.0-rc.2
	  Kube-Proxy Version:         v1.30.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-2b5ht                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7db6d8ff4d-p62sn                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-no-preload-572602                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-no-preload-572602             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-no-preload-572602    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-6cjlc                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-no-preload-572602             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-569cc877fc-5j5rc              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node no-preload-572602 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node no-preload-572602 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node no-preload-572602 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m                kubelet          Node no-preload-572602 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                kubelet          Node no-preload-572602 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                kubelet          Node no-preload-572602 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                node-controller  Node no-preload-572602 event: Registered Node no-preload-572602 in Controller
	
	
	==> dmesg <==
	[  +0.040586] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.527107] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.695817] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.626978] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.483712] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.061192] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057831] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.192698] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +0.135403] systemd-fstab-generator[678]: Ignoring "noauto" option for root device
	[  +0.299347] systemd-fstab-generator[708]: Ignoring "noauto" option for root device
	[ +16.182072] systemd-fstab-generator[1236]: Ignoring "noauto" option for root device
	[  +0.068039] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.474627] systemd-fstab-generator[1361]: Ignoring "noauto" option for root device
	[Apr16 01:00] kauditd_printk_skb: 100 callbacks suppressed
	[  +7.322466] kauditd_printk_skb: 50 callbacks suppressed
	[  +6.834560] kauditd_printk_skb: 24 callbacks suppressed
	[Apr16 01:04] kauditd_printk_skb: 9 callbacks suppressed
	[  +1.489037] systemd-fstab-generator[4022]: Ignoring "noauto" option for root device
	[  +4.724659] kauditd_printk_skb: 53 callbacks suppressed
	[  +1.839101] systemd-fstab-generator[4352]: Ignoring "noauto" option for root device
	[Apr16 01:05] systemd-fstab-generator[4544]: Ignoring "noauto" option for root device
	[  +0.120603] kauditd_printk_skb: 14 callbacks suppressed
	[Apr16 01:06] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [c03a58ac3d73ae22e09299e52c1b1cea91a8f532c2e83efe6db26b0cf0fcd9b6] <==
	{"level":"info","ts":"2024-04-16T01:04:53.517296Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-16T01:04:54.338381Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cbdf275f553df7c2 is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-16T01:04:54.338437Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cbdf275f553df7c2 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-16T01:04:54.338507Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cbdf275f553df7c2 received MsgPreVoteResp from cbdf275f553df7c2 at term 1"}
	{"level":"info","ts":"2024-04-16T01:04:54.338529Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cbdf275f553df7c2 became candidate at term 2"}
	{"level":"info","ts":"2024-04-16T01:04:54.338536Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cbdf275f553df7c2 received MsgVoteResp from cbdf275f553df7c2 at term 2"}
	{"level":"info","ts":"2024-04-16T01:04:54.338612Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cbdf275f553df7c2 became leader at term 2"}
	{"level":"info","ts":"2024-04-16T01:04:54.338624Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: cbdf275f553df7c2 elected leader cbdf275f553df7c2 at term 2"}
	{"level":"info","ts":"2024-04-16T01:04:54.340095Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T01:04:54.341642Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"cbdf275f553df7c2","local-member-attributes":"{Name:no-preload-572602 ClientURLs:[https://192.168.39.121:2379]}","request-path":"/0/members/cbdf275f553df7c2/attributes","cluster-id":"6f38b6947d3f1f22","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-16T01:04:54.341751Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-16T01:04:54.341826Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-16T01:04:54.34592Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.121:2379"}
	{"level":"info","ts":"2024-04-16T01:04:54.346251Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f38b6947d3f1f22","local-member-id":"cbdf275f553df7c2","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T01:04:54.34636Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T01:04:54.346409Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T01:04:54.34787Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-16T01:04:54.354627Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-16T01:04:54.354666Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-16T01:14:54.389384Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":714}
	{"level":"info","ts":"2024-04-16T01:14:54.398834Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":714,"took":"9.039393ms","hash":3672372912,"current-db-size-bytes":2289664,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":2289664,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-04-16T01:14:54.398908Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3672372912,"revision":714,"compact-revision":-1}
	{"level":"info","ts":"2024-04-16T01:19:54.397833Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":957}
	{"level":"info","ts":"2024-04-16T01:19:54.402718Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":957,"took":"4.371267ms","hash":419651351,"current-db-size-bytes":2289664,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":1613824,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-04-16T01:19:54.40278Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":419651351,"revision":957,"compact-revision":714}
	
	
	==> kernel <==
	 01:20:33 up 21 min,  0 users,  load average: 0.09, 0.13, 0.13
	Linux no-preload-572602 5.10.207 #1 SMP Mon Apr 15 15:01:07 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [8cece99507aaa37e50274debcdc6deb652e50017ded358129d85704ff474638e] <==
	I0416 01:14:56.771312       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 01:15:56.770778       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 01:15:56.771066       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0416 01:15:56.771095       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 01:15:56.771931       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 01:15:56.772001       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0416 01:15:56.773096       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 01:17:56.771447       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 01:17:56.771899       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0416 01:17:56.771936       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 01:17:56.773700       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 01:17:56.773763       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0416 01:17:56.773789       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 01:19:55.778262       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 01:19:55.778744       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0416 01:19:56.779506       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 01:19:56.779614       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0416 01:19:56.779624       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 01:19:56.779673       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 01:19:56.779742       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0416 01:19:56.780912       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [460245770a3129b38002697725b12fa2bbe8dba33dd691938054fb6a1cb63f63] <==
	I0416 01:14:41.814874       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 01:15:11.346434       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:15:11.822801       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 01:15:41.354860       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:15:41.832942       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0416 01:16:00.602922       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="158.49µs"
	E0416 01:16:11.360803       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:16:11.598469       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-569cc877fc" duration="67.115µs"
	I0416 01:16:11.841030       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 01:16:41.365642       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:16:41.849699       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 01:17:11.371877       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:17:11.858415       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 01:17:41.378239       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:17:41.868632       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 01:18:11.385221       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:18:11.877200       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 01:18:41.390494       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:18:41.886034       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 01:19:11.397610       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:19:11.894721       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 01:19:41.404681       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:19:41.904856       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 01:20:11.410355       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:20:11.913427       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [4b028f56375a963d3290fb3ca9532c765c119e48afb0243eca99823653744037] <==
	I0416 01:05:13.064125       1 server_linux.go:69] "Using iptables proxy"
	I0416 01:05:13.082146       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.121"]
	I0416 01:05:13.166669       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0416 01:05:13.166707       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0416 01:05:13.166723       1 server_linux.go:165] "Using iptables Proxier"
	I0416 01:05:13.183080       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0416 01:05:13.183350       1 server.go:872] "Version info" version="v1.30.0-rc.2"
	I0416 01:05:13.183685       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 01:05:13.185487       1 config.go:192] "Starting service config controller"
	I0416 01:05:13.185632       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0416 01:05:13.185722       1 config.go:101] "Starting endpoint slice config controller"
	I0416 01:05:13.185770       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0416 01:05:13.187754       1 config.go:319] "Starting node config controller"
	I0416 01:05:13.187821       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0416 01:05:13.287696       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0416 01:05:13.287742       1 shared_informer.go:320] Caches are synced for service config
	I0416 01:05:13.294315       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [b895d3cc11f002a93248562e7f584ce2a601717f3f84b5c05201ece6ad116e4e] <==
	W0416 01:04:55.798086       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0416 01:04:55.798114       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0416 01:04:55.798227       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0416 01:04:55.798321       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0416 01:04:56.615530       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0416 01:04:56.615681       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0416 01:04:56.727317       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0416 01:04:56.727413       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0416 01:04:56.747821       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0416 01:04:56.748181       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0416 01:04:56.866479       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0416 01:04:56.866602       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0416 01:04:57.009274       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0416 01:04:57.009330       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0416 01:04:57.020989       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0416 01:04:57.021044       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0416 01:04:57.070678       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0416 01:04:57.070737       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0416 01:04:57.100865       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0416 01:04:57.100925       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0416 01:04:57.156135       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0416 01:04:57.156204       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0416 01:04:57.314064       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0416 01:04:57.314811       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0416 01:04:59.279250       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 16 01:17:58 no-preload-572602 kubelet[4359]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 01:17:58 no-preload-572602 kubelet[4359]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 01:17:58 no-preload-572602 kubelet[4359]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 01:17:58 no-preload-572602 kubelet[4359]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 01:18:09 no-preload-572602 kubelet[4359]: E0416 01:18:09.583015    4359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5j5rc" podUID="3d8f1a41-8e7d-4d1b-9a07-25c8fac3b782"
	Apr 16 01:18:24 no-preload-572602 kubelet[4359]: E0416 01:18:24.583466    4359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5j5rc" podUID="3d8f1a41-8e7d-4d1b-9a07-25c8fac3b782"
	Apr 16 01:18:38 no-preload-572602 kubelet[4359]: E0416 01:18:38.582479    4359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5j5rc" podUID="3d8f1a41-8e7d-4d1b-9a07-25c8fac3b782"
	Apr 16 01:18:51 no-preload-572602 kubelet[4359]: E0416 01:18:51.581939    4359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5j5rc" podUID="3d8f1a41-8e7d-4d1b-9a07-25c8fac3b782"
	Apr 16 01:18:58 no-preload-572602 kubelet[4359]: E0416 01:18:58.596358    4359 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 16 01:18:58 no-preload-572602 kubelet[4359]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 01:18:58 no-preload-572602 kubelet[4359]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 01:18:58 no-preload-572602 kubelet[4359]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 01:18:58 no-preload-572602 kubelet[4359]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 01:19:06 no-preload-572602 kubelet[4359]: E0416 01:19:06.582153    4359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5j5rc" podUID="3d8f1a41-8e7d-4d1b-9a07-25c8fac3b782"
	Apr 16 01:19:21 no-preload-572602 kubelet[4359]: E0416 01:19:21.583065    4359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5j5rc" podUID="3d8f1a41-8e7d-4d1b-9a07-25c8fac3b782"
	Apr 16 01:19:34 no-preload-572602 kubelet[4359]: E0416 01:19:34.583729    4359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5j5rc" podUID="3d8f1a41-8e7d-4d1b-9a07-25c8fac3b782"
	Apr 16 01:19:49 no-preload-572602 kubelet[4359]: E0416 01:19:49.583272    4359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5j5rc" podUID="3d8f1a41-8e7d-4d1b-9a07-25c8fac3b782"
	Apr 16 01:19:58 no-preload-572602 kubelet[4359]: E0416 01:19:58.600391    4359 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 16 01:19:58 no-preload-572602 kubelet[4359]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 01:19:58 no-preload-572602 kubelet[4359]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 01:19:58 no-preload-572602 kubelet[4359]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 01:19:58 no-preload-572602 kubelet[4359]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 01:20:01 no-preload-572602 kubelet[4359]: E0416 01:20:01.582329    4359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5j5rc" podUID="3d8f1a41-8e7d-4d1b-9a07-25c8fac3b782"
	Apr 16 01:20:14 no-preload-572602 kubelet[4359]: E0416 01:20:14.582853    4359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5j5rc" podUID="3d8f1a41-8e7d-4d1b-9a07-25c8fac3b782"
	Apr 16 01:20:28 no-preload-572602 kubelet[4359]: E0416 01:20:28.583617    4359 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-569cc877fc-5j5rc" podUID="3d8f1a41-8e7d-4d1b-9a07-25c8fac3b782"
	
	
	==> storage-provisioner [4084ca3da80ddb16e306dcabb7c20593f8e97f33727b62127f188994ad25adde] <==
	I0416 01:05:14.033839       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0416 01:05:14.064061       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0416 01:05:14.064133       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0416 01:05:14.080400       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0416 01:05:14.080682       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-572602_dd85fd78-4f0b-4302-87f3-53cba46d8b5c!
	I0416 01:05:14.082311       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"30217f1a-1bd4-4989-8fbd-f38230eb9a98", APIVersion:"v1", ResourceVersion:"438", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-572602_dd85fd78-4f0b-4302-87f3-53cba46d8b5c became leader
	I0416 01:05:14.181647       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-572602_dd85fd78-4f0b-4302-87f3-53cba46d8b5c!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-572602 -n no-preload-572602
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-572602 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-569cc877fc-5j5rc
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-572602 describe pod metrics-server-569cc877fc-5j5rc
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-572602 describe pod metrics-server-569cc877fc-5j5rc: exit status 1 (64.252457ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-569cc877fc-5j5rc" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-572602 describe pod metrics-server-569cc877fc-5j5rc: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (375.40s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (543.58s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-617092 -n embed-certs-617092
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-04-16 01:23:49.519201586 +0000 UTC m=+6372.035962903
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-617092 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-617092 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (84.732417ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): namespaces "kubernetes-dashboard" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-617092 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-617092 -n embed-certs-617092
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-617092 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-617092 logs -n 25: (2.143282302s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   |    Version     |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	| ssh     | -p calico-381983 sudo                                | calico-381983             | jenkins | v1.33.0-beta.0 | 16 Apr 24 01:23 UTC | 16 Apr 24 01:23 UTC |
	|         | journalctl -xeu kubelet --all                        |                           |         |                |                     |                     |
	|         | --full --no-pager                                    |                           |         |                |                     |                     |
	| ssh     | -p calico-381983 sudo cat                            | calico-381983             | jenkins | v1.33.0-beta.0 | 16 Apr 24 01:23 UTC | 16 Apr 24 01:23 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |                |                     |                     |
	| ssh     | -p calico-381983 sudo cat                            | calico-381983             | jenkins | v1.33.0-beta.0 | 16 Apr 24 01:23 UTC | 16 Apr 24 01:23 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |                |                     |                     |
	| ssh     | -p calico-381983 sudo                                | calico-381983             | jenkins | v1.33.0-beta.0 | 16 Apr 24 01:23 UTC |                     |
	|         | systemctl status docker --all                        |                           |         |                |                     |                     |
	|         | --full --no-pager                                    |                           |         |                |                     |                     |
	| ssh     | -p calico-381983 sudo                                | calico-381983             | jenkins | v1.33.0-beta.0 | 16 Apr 24 01:23 UTC | 16 Apr 24 01:23 UTC |
	|         | systemctl cat docker                                 |                           |         |                |                     |                     |
	|         | --no-pager                                           |                           |         |                |                     |                     |
	| ssh     | -p calico-381983 sudo cat                            | calico-381983             | jenkins | v1.33.0-beta.0 | 16 Apr 24 01:23 UTC | 16 Apr 24 01:23 UTC |
	|         | /etc/docker/daemon.json                              |                           |         |                |                     |                     |
	| ssh     | -p calico-381983 sudo docker                         | calico-381983             | jenkins | v1.33.0-beta.0 | 16 Apr 24 01:23 UTC |                     |
	|         | system info                                          |                           |         |                |                     |                     |
	| ssh     | -p calico-381983 sudo                                | calico-381983             | jenkins | v1.33.0-beta.0 | 16 Apr 24 01:23 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |                |                     |                     |
	|         | --all --full --no-pager                              |                           |         |                |                     |                     |
	| ssh     | -p calico-381983 sudo                                | calico-381983             | jenkins | v1.33.0-beta.0 | 16 Apr 24 01:23 UTC | 16 Apr 24 01:23 UTC |
	|         | systemctl cat cri-docker                             |                           |         |                |                     |                     |
	|         | --no-pager                                           |                           |         |                |                     |                     |
	| ssh     | -p calico-381983 sudo cat                            | calico-381983             | jenkins | v1.33.0-beta.0 | 16 Apr 24 01:23 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |                |                     |                     |
	| ssh     | -p calico-381983 sudo cat                            | calico-381983             | jenkins | v1.33.0-beta.0 | 16 Apr 24 01:23 UTC | 16 Apr 24 01:23 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |                |                     |                     |
	| ssh     | -p calico-381983 sudo                                | calico-381983             | jenkins | v1.33.0-beta.0 | 16 Apr 24 01:23 UTC | 16 Apr 24 01:23 UTC |
	|         | cri-dockerd --version                                |                           |         |                |                     |                     |
	| ssh     | -p calico-381983 sudo                                | calico-381983             | jenkins | v1.33.0-beta.0 | 16 Apr 24 01:23 UTC |                     |
	|         | systemctl status containerd                          |                           |         |                |                     |                     |
	|         | --all --full --no-pager                              |                           |         |                |                     |                     |
	| ssh     | -p calico-381983 sudo                                | calico-381983             | jenkins | v1.33.0-beta.0 | 16 Apr 24 01:23 UTC | 16 Apr 24 01:23 UTC |
	|         | systemctl cat containerd                             |                           |         |                |                     |                     |
	|         | --no-pager                                           |                           |         |                |                     |                     |
	| ssh     | -p calico-381983 sudo cat                            | calico-381983             | jenkins | v1.33.0-beta.0 | 16 Apr 24 01:23 UTC | 16 Apr 24 01:23 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |                |                     |                     |
	| ssh     | -p calico-381983 sudo cat                            | calico-381983             | jenkins | v1.33.0-beta.0 | 16 Apr 24 01:23 UTC | 16 Apr 24 01:23 UTC |
	|         | /etc/containerd/config.toml                          |                           |         |                |                     |                     |
	| ssh     | -p calico-381983 sudo                                | calico-381983             | jenkins | v1.33.0-beta.0 | 16 Apr 24 01:23 UTC | 16 Apr 24 01:23 UTC |
	|         | containerd config dump                               |                           |         |                |                     |                     |
	| ssh     | -p calico-381983 sudo                                | calico-381983             | jenkins | v1.33.0-beta.0 | 16 Apr 24 01:23 UTC | 16 Apr 24 01:23 UTC |
	|         | systemctl status crio --all                          |                           |         |                |                     |                     |
	|         | --full --no-pager                                    |                           |         |                |                     |                     |
	| ssh     | -p calico-381983 sudo                                | calico-381983             | jenkins | v1.33.0-beta.0 | 16 Apr 24 01:23 UTC | 16 Apr 24 01:23 UTC |
	|         | systemctl cat crio --no-pager                        |                           |         |                |                     |                     |
	| ssh     | -p calico-381983 sudo find                           | calico-381983             | jenkins | v1.33.0-beta.0 | 16 Apr 24 01:23 UTC | 16 Apr 24 01:23 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |                |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |                |                     |                     |
	| ssh     | -p calico-381983 sudo crio                           | calico-381983             | jenkins | v1.33.0-beta.0 | 16 Apr 24 01:23 UTC | 16 Apr 24 01:23 UTC |
	|         | config                                               |                           |         |                |                     |                     |
	| delete  | -p calico-381983                                     | calico-381983             | jenkins | v1.33.0-beta.0 | 16 Apr 24 01:23 UTC | 16 Apr 24 01:23 UTC |
	| start   | -p flannel-381983                                    | flannel-381983            | jenkins | v1.33.0-beta.0 | 16 Apr 24 01:23 UTC |                     |
	|         | --memory=3072                                        |                           |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |                |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |                |                     |                     |
	|         | --cni=flannel --driver=kvm2                          |                           |         |                |                     |                     |
	|         | --container-runtime=crio                             |                           |         |                |                     |                     |
	| ssh     | -p custom-flannel-381983 pgrep                       | custom-flannel-381983     | jenkins | v1.33.0-beta.0 | 16 Apr 24 01:23 UTC | 16 Apr 24 01:23 UTC |
	|         | -a kubelet                                           |                           |         |                |                     |                     |
	| ssh     | -p enable-default-cni-381983                         | enable-default-cni-381983 | jenkins | v1.33.0-beta.0 | 16 Apr 24 01:23 UTC |                     |
	|         | pgrep -a kubelet                                     |                           |         |                |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 01:23:10
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 01:23:10.901971   75162 out.go:291] Setting OutFile to fd 1 ...
	I0416 01:23:10.902125   75162 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 01:23:10.902137   75162 out.go:304] Setting ErrFile to fd 2...
	I0416 01:23:10.902143   75162 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 01:23:10.902431   75162 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-7542/.minikube/bin
	I0416 01:23:10.903257   75162 out.go:298] Setting JSON to false
	I0416 01:23:10.904561   75162 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7535,"bootTime":1713223056,"procs":296,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0416 01:23:10.904621   75162 start.go:139] virtualization: kvm guest
	I0416 01:23:10.907013   75162 out.go:177] * [flannel-381983] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0416 01:23:10.908668   75162 out.go:177]   - MINIKUBE_LOCATION=18647
	I0416 01:23:10.908774   75162 notify.go:220] Checking for updates...
	I0416 01:23:10.911330   75162 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 01:23:10.912648   75162 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0416 01:23:10.914015   75162 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18647-7542/.minikube
	I0416 01:23:10.915518   75162 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0416 01:23:10.916858   75162 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 01:23:10.918693   75162 config.go:182] Loaded profile config "custom-flannel-381983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 01:23:10.918805   75162 config.go:182] Loaded profile config "embed-certs-617092": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 01:23:10.918900   75162 config.go:182] Loaded profile config "enable-default-cni-381983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 01:23:10.919010   75162 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 01:23:10.955993   75162 out.go:177] * Using the kvm2 driver based on user configuration
	I0416 01:23:10.957436   75162 start.go:297] selected driver: kvm2
	I0416 01:23:10.957451   75162 start.go:901] validating driver "kvm2" against <nil>
	I0416 01:23:10.957461   75162 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 01:23:10.958149   75162 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 01:23:10.958213   75162 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18647-7542/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0416 01:23:10.973933   75162 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0416 01:23:10.974001   75162 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0416 01:23:10.974233   75162 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 01:23:10.974322   75162 cni.go:84] Creating CNI manager for "flannel"
	I0416 01:23:10.974341   75162 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0416 01:23:10.974402   75162 start.go:340] cluster config:
	{Name:flannel-381983 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:flannel-381983 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 01:23:10.974518   75162 iso.go:125] acquiring lock: {Name:mk848ef90fbc2a1876645fc8fc16af382c3bcaa9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 01:23:10.976518   75162 out.go:177] * Starting "flannel-381983" primary control-plane node in "flannel-381983" cluster
	I0416 01:23:06.241989   72357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:23:06.742372   72357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:23:07.242411   72357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:23:07.742187   72357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:23:08.241564   72357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:23:08.742376   72357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:23:09.242371   72357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:23:09.742347   72357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:23:10.242253   72357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:23:10.742363   72357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:23:10.977782   75162 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 01:23:10.977821   75162 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0416 01:23:10.977834   75162 cache.go:56] Caching tarball of preloaded images
	I0416 01:23:10.977909   75162 preload.go:173] Found /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0416 01:23:10.977920   75162 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0416 01:23:10.978021   75162 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/flannel-381983/config.json ...
	I0416 01:23:10.978044   75162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/flannel-381983/config.json: {Name:mk8bed86516a6343cb56822e680086e194f289d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:23:10.978186   75162 start.go:360] acquireMachinesLock for flannel-381983: {Name:mk92bff49461487f8cebf2747ccf61ccb9c772a2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 01:23:11.606021   75162 start.go:364] duration metric: took 627.815306ms to acquireMachinesLock for "flannel-381983"
	I0416 01:23:11.606090   75162 start.go:93] Provisioning new machine with config: &{Name:flannel-381983 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:flannel-381983 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0416 01:23:11.606188   75162 start.go:125] createHost starting for "" (driver="kvm2")
	I0416 01:23:09.124045   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | domain enable-default-cni-381983 has defined MAC address 52:54:00:d1:3e:71 in network mk-enable-default-cni-381983
	I0416 01:23:09.126747   73348 main.go:141] libmachine: (enable-default-cni-381983) Found IP for machine: 192.168.72.44
	I0416 01:23:09.126767   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | domain enable-default-cni-381983 has current primary IP address 192.168.72.44 and MAC address 52:54:00:d1:3e:71 in network mk-enable-default-cni-381983
	I0416 01:23:09.126789   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | unable to find host DHCP lease matching {name: "enable-default-cni-381983", mac: "52:54:00:d1:3e:71", ip: "192.168.72.44"} in network mk-enable-default-cni-381983
	I0416 01:23:09.126799   73348 main.go:141] libmachine: (enable-default-cni-381983) Reserving static IP address...
	I0416 01:23:09.259100   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | Getting to WaitForSSH function...
	I0416 01:23:09.259126   73348 main.go:141] libmachine: (enable-default-cni-381983) Reserved static IP address: 192.168.72.44
	I0416 01:23:09.259150   73348 main.go:141] libmachine: (enable-default-cni-381983) Waiting for SSH to be available...
	I0416 01:23:09.261897   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | domain enable-default-cni-381983 has defined MAC address 52:54:00:d1:3e:71 in network mk-enable-default-cni-381983
	I0416 01:23:09.262208   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:3e:71", ip: ""} in network mk-enable-default-cni-381983: {Iface:virbr3 ExpiryTime:2024-04-16 02:23:02 +0000 UTC Type:0 Mac:52:54:00:d1:3e:71 Iaid: IPaddr:192.168.72.44 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d1:3e:71}
	I0416 01:23:09.262237   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | domain enable-default-cni-381983 has defined IP address 192.168.72.44 and MAC address 52:54:00:d1:3e:71 in network mk-enable-default-cni-381983
	I0416 01:23:09.262518   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | Using SSH client type: external
	I0416 01:23:09.262545   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | Using SSH private key: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/enable-default-cni-381983/id_rsa (-rw-------)
	I0416 01:23:09.262794   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.44 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18647-7542/.minikube/machines/enable-default-cni-381983/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0416 01:23:09.262809   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | About to run SSH command:
	I0416 01:23:09.262822   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | exit 0
	I0416 01:23:09.403548   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | SSH cmd err, output: <nil>: 
	I0416 01:23:09.403836   73348 main.go:141] libmachine: (enable-default-cni-381983) KVM machine creation complete!
	I0416 01:23:09.404237   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetConfigRaw
	I0416 01:23:09.404905   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .DriverName
	I0416 01:23:09.405113   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .DriverName
	I0416 01:23:09.405314   73348 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0416 01:23:09.405332   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetState
	I0416 01:23:09.407146   73348 main.go:141] libmachine: Detecting operating system of created instance...
	I0416 01:23:09.407163   73348 main.go:141] libmachine: Waiting for SSH to be available...
	I0416 01:23:09.407172   73348 main.go:141] libmachine: Getting to WaitForSSH function...
	I0416 01:23:09.407182   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetSSHHostname
	I0416 01:23:09.410990   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | domain enable-default-cni-381983 has defined MAC address 52:54:00:d1:3e:71 in network mk-enable-default-cni-381983
	I0416 01:23:09.411394   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:3e:71", ip: ""} in network mk-enable-default-cni-381983: {Iface:virbr3 ExpiryTime:2024-04-16 02:23:02 +0000 UTC Type:0 Mac:52:54:00:d1:3e:71 Iaid: IPaddr:192.168.72.44 Prefix:24 Hostname:enable-default-cni-381983 Clientid:01:52:54:00:d1:3e:71}
	I0416 01:23:09.411426   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | domain enable-default-cni-381983 has defined IP address 192.168.72.44 and MAC address 52:54:00:d1:3e:71 in network mk-enable-default-cni-381983
	I0416 01:23:09.411550   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetSSHPort
	I0416 01:23:09.411738   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetSSHKeyPath
	I0416 01:23:09.411905   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetSSHKeyPath
	I0416 01:23:09.412097   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetSSHUsername
	I0416 01:23:09.412401   73348 main.go:141] libmachine: Using SSH client type: native
	I0416 01:23:09.412637   73348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.44 22 <nil> <nil>}
	I0416 01:23:09.412652   73348 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0416 01:23:09.516491   73348 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 01:23:09.516518   73348 main.go:141] libmachine: Detecting the provisioner...
	I0416 01:23:09.516528   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetSSHHostname
	I0416 01:23:09.519495   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | domain enable-default-cni-381983 has defined MAC address 52:54:00:d1:3e:71 in network mk-enable-default-cni-381983
	I0416 01:23:09.519905   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:3e:71", ip: ""} in network mk-enable-default-cni-381983: {Iface:virbr3 ExpiryTime:2024-04-16 02:23:02 +0000 UTC Type:0 Mac:52:54:00:d1:3e:71 Iaid: IPaddr:192.168.72.44 Prefix:24 Hostname:enable-default-cni-381983 Clientid:01:52:54:00:d1:3e:71}
	I0416 01:23:09.519931   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | domain enable-default-cni-381983 has defined IP address 192.168.72.44 and MAC address 52:54:00:d1:3e:71 in network mk-enable-default-cni-381983
	I0416 01:23:09.520170   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetSSHPort
	I0416 01:23:09.520377   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetSSHKeyPath
	I0416 01:23:09.520556   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetSSHKeyPath
	I0416 01:23:09.520711   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetSSHUsername
	I0416 01:23:09.520879   73348 main.go:141] libmachine: Using SSH client type: native
	I0416 01:23:09.521104   73348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.44 22 <nil> <nil>}
	I0416 01:23:09.521122   73348 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0416 01:23:09.621903   73348 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0416 01:23:09.621956   73348 main.go:141] libmachine: found compatible host: buildroot
	I0416 01:23:09.621967   73348 main.go:141] libmachine: Provisioning with buildroot...
	I0416 01:23:09.621985   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetMachineName
	I0416 01:23:09.622253   73348 buildroot.go:166] provisioning hostname "enable-default-cni-381983"
	I0416 01:23:09.622278   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetMachineName
	I0416 01:23:09.622443   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetSSHHostname
	I0416 01:23:09.625460   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | domain enable-default-cni-381983 has defined MAC address 52:54:00:d1:3e:71 in network mk-enable-default-cni-381983
	I0416 01:23:09.625809   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:3e:71", ip: ""} in network mk-enable-default-cni-381983: {Iface:virbr3 ExpiryTime:2024-04-16 02:23:02 +0000 UTC Type:0 Mac:52:54:00:d1:3e:71 Iaid: IPaddr:192.168.72.44 Prefix:24 Hostname:enable-default-cni-381983 Clientid:01:52:54:00:d1:3e:71}
	I0416 01:23:09.625837   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | domain enable-default-cni-381983 has defined IP address 192.168.72.44 and MAC address 52:54:00:d1:3e:71 in network mk-enable-default-cni-381983
	I0416 01:23:09.626015   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetSSHPort
	I0416 01:23:09.626217   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetSSHKeyPath
	I0416 01:23:09.626428   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetSSHKeyPath
	I0416 01:23:09.626610   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetSSHUsername
	I0416 01:23:09.626804   73348 main.go:141] libmachine: Using SSH client type: native
	I0416 01:23:09.627003   73348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.44 22 <nil> <nil>}
	I0416 01:23:09.627020   73348 main.go:141] libmachine: About to run SSH command:
	sudo hostname enable-default-cni-381983 && echo "enable-default-cni-381983" | sudo tee /etc/hostname
	I0416 01:23:09.755452   73348 main.go:141] libmachine: SSH cmd err, output: <nil>: enable-default-cni-381983
	
	I0416 01:23:09.755505   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetSSHHostname
	I0416 01:23:09.759381   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | domain enable-default-cni-381983 has defined MAC address 52:54:00:d1:3e:71 in network mk-enable-default-cni-381983
	I0416 01:23:09.759744   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:3e:71", ip: ""} in network mk-enable-default-cni-381983: {Iface:virbr3 ExpiryTime:2024-04-16 02:23:02 +0000 UTC Type:0 Mac:52:54:00:d1:3e:71 Iaid: IPaddr:192.168.72.44 Prefix:24 Hostname:enable-default-cni-381983 Clientid:01:52:54:00:d1:3e:71}
	I0416 01:23:09.759778   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | domain enable-default-cni-381983 has defined IP address 192.168.72.44 and MAC address 52:54:00:d1:3e:71 in network mk-enable-default-cni-381983
	I0416 01:23:09.759940   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetSSHPort
	I0416 01:23:09.760140   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetSSHKeyPath
	I0416 01:23:09.760336   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetSSHKeyPath
	I0416 01:23:09.760480   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetSSHUsername
	I0416 01:23:09.760651   73348 main.go:141] libmachine: Using SSH client type: native
	I0416 01:23:09.760855   73348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.44 22 <nil> <nil>}
	I0416 01:23:09.760882   73348 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\senable-default-cni-381983' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 enable-default-cni-381983/g' /etc/hosts;
				else 
					echo '127.0.1.1 enable-default-cni-381983' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 01:23:09.884681   73348 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 01:23:09.884717   73348 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18647-7542/.minikube CaCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18647-7542/.minikube}
	I0416 01:23:09.884752   73348 buildroot.go:174] setting up certificates
	I0416 01:23:09.884764   73348 provision.go:84] configureAuth start
	I0416 01:23:09.884782   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetMachineName
	I0416 01:23:09.885073   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetIP
	I0416 01:23:09.887902   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | domain enable-default-cni-381983 has defined MAC address 52:54:00:d1:3e:71 in network mk-enable-default-cni-381983
	I0416 01:23:10.006009   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:3e:71", ip: ""} in network mk-enable-default-cni-381983: {Iface:virbr3 ExpiryTime:2024-04-16 02:23:02 +0000 UTC Type:0 Mac:52:54:00:d1:3e:71 Iaid: IPaddr:192.168.72.44 Prefix:24 Hostname:enable-default-cni-381983 Clientid:01:52:54:00:d1:3e:71}
	I0416 01:23:10.006040   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | domain enable-default-cni-381983 has defined IP address 192.168.72.44 and MAC address 52:54:00:d1:3e:71 in network mk-enable-default-cni-381983
	I0416 01:23:10.006230   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetSSHHostname
	I0416 01:23:10.549887   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | domain enable-default-cni-381983 has defined MAC address 52:54:00:d1:3e:71 in network mk-enable-default-cni-381983
	I0416 01:23:10.550259   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:3e:71", ip: ""} in network mk-enable-default-cni-381983: {Iface:virbr3 ExpiryTime:2024-04-16 02:23:02 +0000 UTC Type:0 Mac:52:54:00:d1:3e:71 Iaid: IPaddr:192.168.72.44 Prefix:24 Hostname:enable-default-cni-381983 Clientid:01:52:54:00:d1:3e:71}
	I0416 01:23:10.550304   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | domain enable-default-cni-381983 has defined IP address 192.168.72.44 and MAC address 52:54:00:d1:3e:71 in network mk-enable-default-cni-381983
	I0416 01:23:10.550432   73348 provision.go:143] copyHostCerts
	I0416 01:23:10.550493   73348 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem, removing ...
	I0416 01:23:10.550503   73348 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem
	I0416 01:23:10.550542   73348 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem (1082 bytes)
	I0416 01:23:10.550635   73348 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem, removing ...
	I0416 01:23:10.550643   73348 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem
	I0416 01:23:10.550666   73348 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem (1123 bytes)
	I0416 01:23:10.550720   73348 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem, removing ...
	I0416 01:23:10.550728   73348 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem
	I0416 01:23:10.550744   73348 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem (1675 bytes)
	I0416 01:23:10.550791   73348 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem org=jenkins.enable-default-cni-381983 san=[127.0.0.1 192.168.72.44 enable-default-cni-381983 localhost minikube]
	I0416 01:23:10.907024   73348 provision.go:177] copyRemoteCerts
	I0416 01:23:10.907069   73348 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 01:23:10.907103   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetSSHHostname
	I0416 01:23:10.909866   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | domain enable-default-cni-381983 has defined MAC address 52:54:00:d1:3e:71 in network mk-enable-default-cni-381983
	I0416 01:23:10.910229   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:3e:71", ip: ""} in network mk-enable-default-cni-381983: {Iface:virbr3 ExpiryTime:2024-04-16 02:23:02 +0000 UTC Type:0 Mac:52:54:00:d1:3e:71 Iaid: IPaddr:192.168.72.44 Prefix:24 Hostname:enable-default-cni-381983 Clientid:01:52:54:00:d1:3e:71}
	I0416 01:23:10.910257   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | domain enable-default-cni-381983 has defined IP address 192.168.72.44 and MAC address 52:54:00:d1:3e:71 in network mk-enable-default-cni-381983
	I0416 01:23:10.910511   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetSSHPort
	I0416 01:23:10.910750   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetSSHKeyPath
	I0416 01:23:10.910940   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetSSHUsername
	I0416 01:23:10.911074   73348 sshutil.go:53] new ssh client: &{IP:192.168.72.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/enable-default-cni-381983/id_rsa Username:docker}
	I0416 01:23:10.997458   73348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0416 01:23:11.022911   73348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0416 01:23:11.048292   73348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0416 01:23:11.072004   73348 provision.go:87] duration metric: took 1.187224889s to configureAuth
	I0416 01:23:11.072025   73348 buildroot.go:189] setting minikube options for container-runtime
	I0416 01:23:11.072170   73348 config.go:182] Loaded profile config "enable-default-cni-381983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 01:23:11.072239   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetSSHHostname
	I0416 01:23:11.075333   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | domain enable-default-cni-381983 has defined MAC address 52:54:00:d1:3e:71 in network mk-enable-default-cni-381983
	I0416 01:23:11.075836   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:3e:71", ip: ""} in network mk-enable-default-cni-381983: {Iface:virbr3 ExpiryTime:2024-04-16 02:23:02 +0000 UTC Type:0 Mac:52:54:00:d1:3e:71 Iaid: IPaddr:192.168.72.44 Prefix:24 Hostname:enable-default-cni-381983 Clientid:01:52:54:00:d1:3e:71}
	I0416 01:23:11.075868   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | domain enable-default-cni-381983 has defined IP address 192.168.72.44 and MAC address 52:54:00:d1:3e:71 in network mk-enable-default-cni-381983
	I0416 01:23:11.076016   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetSSHPort
	I0416 01:23:11.076205   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetSSHKeyPath
	I0416 01:23:11.076389   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetSSHKeyPath
	I0416 01:23:11.076566   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetSSHUsername
	I0416 01:23:11.076747   73348 main.go:141] libmachine: Using SSH client type: native
	I0416 01:23:11.077000   73348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.44 22 <nil> <nil>}
	I0416 01:23:11.077024   73348 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0416 01:23:11.370941   73348 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0416 01:23:11.370993   73348 main.go:141] libmachine: Checking connection to Docker...
	I0416 01:23:11.371004   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetURL
	I0416 01:23:11.372600   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | Using libvirt version 6000000
	I0416 01:23:11.375240   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | domain enable-default-cni-381983 has defined MAC address 52:54:00:d1:3e:71 in network mk-enable-default-cni-381983
	I0416 01:23:11.375549   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:3e:71", ip: ""} in network mk-enable-default-cni-381983: {Iface:virbr3 ExpiryTime:2024-04-16 02:23:02 +0000 UTC Type:0 Mac:52:54:00:d1:3e:71 Iaid: IPaddr:192.168.72.44 Prefix:24 Hostname:enable-default-cni-381983 Clientid:01:52:54:00:d1:3e:71}
	I0416 01:23:11.375576   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | domain enable-default-cni-381983 has defined IP address 192.168.72.44 and MAC address 52:54:00:d1:3e:71 in network mk-enable-default-cni-381983
	I0416 01:23:11.375740   73348 main.go:141] libmachine: Docker is up and running!
	I0416 01:23:11.375755   73348 main.go:141] libmachine: Reticulating splines...
	I0416 01:23:11.375762   73348 client.go:171] duration metric: took 26.079774984s to LocalClient.Create
	I0416 01:23:11.375789   73348 start.go:167] duration metric: took 26.079844777s to libmachine.API.Create "enable-default-cni-381983"
	I0416 01:23:11.375799   73348 start.go:293] postStartSetup for "enable-default-cni-381983" (driver="kvm2")
	I0416 01:23:11.375811   73348 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 01:23:11.375835   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .DriverName
	I0416 01:23:11.376086   73348 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 01:23:11.376107   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetSSHHostname
	I0416 01:23:11.378274   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | domain enable-default-cni-381983 has defined MAC address 52:54:00:d1:3e:71 in network mk-enable-default-cni-381983
	I0416 01:23:11.378602   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:3e:71", ip: ""} in network mk-enable-default-cni-381983: {Iface:virbr3 ExpiryTime:2024-04-16 02:23:02 +0000 UTC Type:0 Mac:52:54:00:d1:3e:71 Iaid: IPaddr:192.168.72.44 Prefix:24 Hostname:enable-default-cni-381983 Clientid:01:52:54:00:d1:3e:71}
	I0416 01:23:11.378639   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | domain enable-default-cni-381983 has defined IP address 192.168.72.44 and MAC address 52:54:00:d1:3e:71 in network mk-enable-default-cni-381983
	I0416 01:23:11.378717   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetSSHPort
	I0416 01:23:11.378885   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetSSHKeyPath
	I0416 01:23:11.379065   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetSSHUsername
	I0416 01:23:11.379212   73348 sshutil.go:53] new ssh client: &{IP:192.168.72.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/enable-default-cni-381983/id_rsa Username:docker}
	I0416 01:23:11.460408   73348 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 01:23:11.464710   73348 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 01:23:11.464732   73348 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/addons for local assets ...
	I0416 01:23:11.464792   73348 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/files for local assets ...
	I0416 01:23:11.464866   73348 filesync.go:149] local asset: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem -> 148972.pem in /etc/ssl/certs
	I0416 01:23:11.464959   73348 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 01:23:11.475031   73348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /etc/ssl/certs/148972.pem (1708 bytes)
	I0416 01:23:11.500707   73348 start.go:296] duration metric: took 124.896017ms for postStartSetup
	I0416 01:23:11.500750   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetConfigRaw
	I0416 01:23:11.501362   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetIP
	I0416 01:23:11.504115   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | domain enable-default-cni-381983 has defined MAC address 52:54:00:d1:3e:71 in network mk-enable-default-cni-381983
	I0416 01:23:11.504433   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:3e:71", ip: ""} in network mk-enable-default-cni-381983: {Iface:virbr3 ExpiryTime:2024-04-16 02:23:02 +0000 UTC Type:0 Mac:52:54:00:d1:3e:71 Iaid: IPaddr:192.168.72.44 Prefix:24 Hostname:enable-default-cni-381983 Clientid:01:52:54:00:d1:3e:71}
	I0416 01:23:11.504462   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | domain enable-default-cni-381983 has defined IP address 192.168.72.44 and MAC address 52:54:00:d1:3e:71 in network mk-enable-default-cni-381983
	I0416 01:23:11.504741   73348 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/enable-default-cni-381983/config.json ...
	I0416 01:23:11.504960   73348 start.go:128] duration metric: took 26.234190834s to createHost
	I0416 01:23:11.504985   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetSSHHostname
	I0416 01:23:11.507203   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | domain enable-default-cni-381983 has defined MAC address 52:54:00:d1:3e:71 in network mk-enable-default-cni-381983
	I0416 01:23:11.507496   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:3e:71", ip: ""} in network mk-enable-default-cni-381983: {Iface:virbr3 ExpiryTime:2024-04-16 02:23:02 +0000 UTC Type:0 Mac:52:54:00:d1:3e:71 Iaid: IPaddr:192.168.72.44 Prefix:24 Hostname:enable-default-cni-381983 Clientid:01:52:54:00:d1:3e:71}
	I0416 01:23:11.507519   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | domain enable-default-cni-381983 has defined IP address 192.168.72.44 and MAC address 52:54:00:d1:3e:71 in network mk-enable-default-cni-381983
	I0416 01:23:11.507625   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetSSHPort
	I0416 01:23:11.507822   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetSSHKeyPath
	I0416 01:23:11.507960   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetSSHKeyPath
	I0416 01:23:11.508094   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetSSHUsername
	I0416 01:23:11.508235   73348 main.go:141] libmachine: Using SSH client type: native
	I0416 01:23:11.508410   73348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.72.44 22 <nil> <nil>}
	I0416 01:23:11.508420   73348 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 01:23:11.605878   73348 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713230591.588148638
	
	I0416 01:23:11.605907   73348 fix.go:216] guest clock: 1713230591.588148638
	I0416 01:23:11.605918   73348 fix.go:229] Guest: 2024-04-16 01:23:11.588148638 +0000 UTC Remote: 2024-04-16 01:23:11.504973042 +0000 UTC m=+42.469147775 (delta=83.175596ms)
	I0416 01:23:11.605949   73348 fix.go:200] guest clock delta is within tolerance: 83.175596ms
	I0416 01:23:11.605954   73348 start.go:83] releasing machines lock for "enable-default-cni-381983", held for 26.33539559s
	I0416 01:23:11.605982   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .DriverName
	I0416 01:23:11.606250   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetIP
	I0416 01:23:11.609174   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | domain enable-default-cni-381983 has defined MAC address 52:54:00:d1:3e:71 in network mk-enable-default-cni-381983
	I0416 01:23:11.609532   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:3e:71", ip: ""} in network mk-enable-default-cni-381983: {Iface:virbr3 ExpiryTime:2024-04-16 02:23:02 +0000 UTC Type:0 Mac:52:54:00:d1:3e:71 Iaid: IPaddr:192.168.72.44 Prefix:24 Hostname:enable-default-cni-381983 Clientid:01:52:54:00:d1:3e:71}
	I0416 01:23:11.609571   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | domain enable-default-cni-381983 has defined IP address 192.168.72.44 and MAC address 52:54:00:d1:3e:71 in network mk-enable-default-cni-381983
	I0416 01:23:11.609684   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .DriverName
	I0416 01:23:11.610211   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .DriverName
	I0416 01:23:11.610402   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .DriverName
	I0416 01:23:11.610495   73348 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 01:23:11.610535   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetSSHHostname
	I0416 01:23:11.610633   73348 ssh_runner.go:195] Run: cat /version.json
	I0416 01:23:11.610652   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetSSHHostname
	I0416 01:23:11.613422   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | domain enable-default-cni-381983 has defined MAC address 52:54:00:d1:3e:71 in network mk-enable-default-cni-381983
	I0416 01:23:11.613590   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | domain enable-default-cni-381983 has defined MAC address 52:54:00:d1:3e:71 in network mk-enable-default-cni-381983
	I0416 01:23:11.613843   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:3e:71", ip: ""} in network mk-enable-default-cni-381983: {Iface:virbr3 ExpiryTime:2024-04-16 02:23:02 +0000 UTC Type:0 Mac:52:54:00:d1:3e:71 Iaid: IPaddr:192.168.72.44 Prefix:24 Hostname:enable-default-cni-381983 Clientid:01:52:54:00:d1:3e:71}
	I0416 01:23:11.613881   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | domain enable-default-cni-381983 has defined IP address 192.168.72.44 and MAC address 52:54:00:d1:3e:71 in network mk-enable-default-cni-381983
	I0416 01:23:11.614033   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetSSHPort
	I0416 01:23:11.614165   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:3e:71", ip: ""} in network mk-enable-default-cni-381983: {Iface:virbr3 ExpiryTime:2024-04-16 02:23:02 +0000 UTC Type:0 Mac:52:54:00:d1:3e:71 Iaid: IPaddr:192.168.72.44 Prefix:24 Hostname:enable-default-cni-381983 Clientid:01:52:54:00:d1:3e:71}
	I0416 01:23:11.614186   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | domain enable-default-cni-381983 has defined IP address 192.168.72.44 and MAC address 52:54:00:d1:3e:71 in network mk-enable-default-cni-381983
	I0416 01:23:11.614224   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetSSHKeyPath
	I0416 01:23:11.614328   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetSSHPort
	I0416 01:23:11.614411   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetSSHUsername
	I0416 01:23:11.614493   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetSSHKeyPath
	I0416 01:23:11.614595   73348 sshutil.go:53] new ssh client: &{IP:192.168.72.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/enable-default-cni-381983/id_rsa Username:docker}
	I0416 01:23:11.614657   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetSSHUsername
	I0416 01:23:11.614788   73348 sshutil.go:53] new ssh client: &{IP:192.168.72.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/enable-default-cni-381983/id_rsa Username:docker}
	I0416 01:23:11.694810   73348 ssh_runner.go:195] Run: systemctl --version
	I0416 01:23:11.726582   73348 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0416 01:23:11.898072   73348 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 01:23:11.906906   73348 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 01:23:11.906963   73348 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 01:23:11.924930   73348 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 01:23:11.924956   73348 start.go:494] detecting cgroup driver to use...
	I0416 01:23:11.925020   73348 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 01:23:11.946405   73348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 01:23:11.962591   73348 docker.go:217] disabling cri-docker service (if available) ...
	I0416 01:23:11.962637   73348 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 01:23:11.978453   73348 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 01:23:11.992801   73348 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 01:23:12.114230   73348 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 01:23:12.268795   73348 docker.go:233] disabling docker service ...
	I0416 01:23:12.268870   73348 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 01:23:12.285925   73348 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 01:23:12.301657   73348 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 01:23:12.442558   73348 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 01:23:12.570506   73348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0416 01:23:12.586518   73348 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 01:23:12.606984   73348 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0416 01:23:12.607049   73348 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:23:12.619096   73348 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 01:23:12.619154   73348 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:23:12.631438   73348 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:23:12.643049   73348 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:23:12.654108   73348 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 01:23:12.665361   73348 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:23:12.676439   73348 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:23:12.694169   73348 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:23:12.705223   73348 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 01:23:12.716220   73348 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0416 01:23:12.716293   73348 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0416 01:23:12.739258   73348 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 01:23:12.755442   73348 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 01:23:12.895244   73348 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0416 01:23:13.062671   73348 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0416 01:23:13.062757   73348 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0416 01:23:13.069596   73348 start.go:562] Will wait 60s for crictl version
	I0416 01:23:13.069673   73348 ssh_runner.go:195] Run: which crictl
	I0416 01:23:13.074194   73348 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 01:23:13.118869   73348 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0416 01:23:13.118972   73348 ssh_runner.go:195] Run: crio --version
	I0416 01:23:13.159792   73348 ssh_runner.go:195] Run: crio --version
	I0416 01:23:13.201901   73348 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0416 01:23:13.203302   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetIP
	I0416 01:23:13.206609   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | domain enable-default-cni-381983 has defined MAC address 52:54:00:d1:3e:71 in network mk-enable-default-cni-381983
	I0416 01:23:13.207048   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:3e:71", ip: ""} in network mk-enable-default-cni-381983: {Iface:virbr3 ExpiryTime:2024-04-16 02:23:02 +0000 UTC Type:0 Mac:52:54:00:d1:3e:71 Iaid: IPaddr:192.168.72.44 Prefix:24 Hostname:enable-default-cni-381983 Clientid:01:52:54:00:d1:3e:71}
	I0416 01:23:13.207087   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | domain enable-default-cni-381983 has defined IP address 192.168.72.44 and MAC address 52:54:00:d1:3e:71 in network mk-enable-default-cni-381983
	I0416 01:23:13.207301   73348 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0416 01:23:13.212004   73348 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 01:23:13.229631   73348 kubeadm.go:877] updating cluster {Name:enable-default-cni-381983 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:enable-default-cni-381983 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.72.44 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 01:23:13.229772   73348 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 01:23:13.229832   73348 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 01:23:13.271488   73348 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0416 01:23:13.271582   73348 ssh_runner.go:195] Run: which lz4
	I0416 01:23:13.277300   73348 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0416 01:23:13.284085   73348 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0416 01:23:13.284123   73348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0416 01:23:11.609064   75162 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0416 01:23:11.609273   75162 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:23:11.609322   75162 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:23:11.628756   75162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37773
	I0416 01:23:11.629174   75162 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:23:11.629754   75162 main.go:141] libmachine: Using API Version  1
	I0416 01:23:11.629782   75162 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:23:11.630176   75162 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:23:11.630397   75162 main.go:141] libmachine: (flannel-381983) Calling .GetMachineName
	I0416 01:23:11.630581   75162 main.go:141] libmachine: (flannel-381983) Calling .DriverName
	I0416 01:23:11.630733   75162 start.go:159] libmachine.API.Create for "flannel-381983" (driver="kvm2")
	I0416 01:23:11.630760   75162 client.go:168] LocalClient.Create starting
	I0416 01:23:11.630794   75162 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem
	I0416 01:23:11.630827   75162 main.go:141] libmachine: Decoding PEM data...
	I0416 01:23:11.630842   75162 main.go:141] libmachine: Parsing certificate...
	I0416 01:23:11.630905   75162 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem
	I0416 01:23:11.630931   75162 main.go:141] libmachine: Decoding PEM data...
	I0416 01:23:11.630949   75162 main.go:141] libmachine: Parsing certificate...
	I0416 01:23:11.630977   75162 main.go:141] libmachine: Running pre-create checks...
	I0416 01:23:11.630989   75162 main.go:141] libmachine: (flannel-381983) Calling .PreCreateCheck
	I0416 01:23:11.631335   75162 main.go:141] libmachine: (flannel-381983) Calling .GetConfigRaw
	I0416 01:23:11.631713   75162 main.go:141] libmachine: Creating machine...
	I0416 01:23:11.631730   75162 main.go:141] libmachine: (flannel-381983) Calling .Create
	I0416 01:23:11.631847   75162 main.go:141] libmachine: (flannel-381983) Creating KVM machine...
	I0416 01:23:11.633079   75162 main.go:141] libmachine: (flannel-381983) DBG | found existing default KVM network
	I0416 01:23:11.634661   75162 main.go:141] libmachine: (flannel-381983) DBG | I0416 01:23:11.634496   75186 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:a6:9d:07} reservation:<nil>}
	I0416 01:23:11.636012   75162 main.go:141] libmachine: (flannel-381983) DBG | I0416 01:23:11.635930   75186 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a4880}
	I0416 01:23:11.636046   75162 main.go:141] libmachine: (flannel-381983) DBG | created network xml: 
	I0416 01:23:11.636054   75162 main.go:141] libmachine: (flannel-381983) DBG | <network>
	I0416 01:23:11.636060   75162 main.go:141] libmachine: (flannel-381983) DBG |   <name>mk-flannel-381983</name>
	I0416 01:23:11.636068   75162 main.go:141] libmachine: (flannel-381983) DBG |   <dns enable='no'/>
	I0416 01:23:11.636083   75162 main.go:141] libmachine: (flannel-381983) DBG |   
	I0416 01:23:11.636098   75162 main.go:141] libmachine: (flannel-381983) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0416 01:23:11.636109   75162 main.go:141] libmachine: (flannel-381983) DBG |     <dhcp>
	I0416 01:23:11.636121   75162 main.go:141] libmachine: (flannel-381983) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0416 01:23:11.636131   75162 main.go:141] libmachine: (flannel-381983) DBG |     </dhcp>
	I0416 01:23:11.636141   75162 main.go:141] libmachine: (flannel-381983) DBG |   </ip>
	I0416 01:23:11.636147   75162 main.go:141] libmachine: (flannel-381983) DBG |   
	I0416 01:23:11.636159   75162 main.go:141] libmachine: (flannel-381983) DBG | </network>
	I0416 01:23:11.636174   75162 main.go:141] libmachine: (flannel-381983) DBG | 
	I0416 01:23:11.641533   75162 main.go:141] libmachine: (flannel-381983) DBG | trying to create private KVM network mk-flannel-381983 192.168.50.0/24...
	I0416 01:23:11.714878   75162 main.go:141] libmachine: (flannel-381983) DBG | private KVM network mk-flannel-381983 192.168.50.0/24 created
	I0416 01:23:11.714914   75162 main.go:141] libmachine: (flannel-381983) Setting up store path in /home/jenkins/minikube-integration/18647-7542/.minikube/machines/flannel-381983 ...
	I0416 01:23:11.714930   75162 main.go:141] libmachine: (flannel-381983) DBG | I0416 01:23:11.714840   75186 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18647-7542/.minikube
	I0416 01:23:11.714950   75162 main.go:141] libmachine: (flannel-381983) Building disk image from file:///home/jenkins/minikube-integration/18647-7542/.minikube/cache/iso/amd64/minikube-v1.33.0-1713175573-18634-amd64.iso
	I0416 01:23:11.715094   75162 main.go:141] libmachine: (flannel-381983) Downloading /home/jenkins/minikube-integration/18647-7542/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18647-7542/.minikube/cache/iso/amd64/minikube-v1.33.0-1713175573-18634-amd64.iso...
	I0416 01:23:11.960390   75162 main.go:141] libmachine: (flannel-381983) DBG | I0416 01:23:11.960280   75186 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/flannel-381983/id_rsa...
	I0416 01:23:12.065251   75162 main.go:141] libmachine: (flannel-381983) DBG | I0416 01:23:12.065094   75186 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/flannel-381983/flannel-381983.rawdisk...
	I0416 01:23:12.065292   75162 main.go:141] libmachine: (flannel-381983) DBG | Writing magic tar header
	I0416 01:23:12.065308   75162 main.go:141] libmachine: (flannel-381983) DBG | Writing SSH key tar header
	I0416 01:23:12.065323   75162 main.go:141] libmachine: (flannel-381983) DBG | I0416 01:23:12.065253   75186 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18647-7542/.minikube/machines/flannel-381983 ...
	I0416 01:23:12.065418   75162 main.go:141] libmachine: (flannel-381983) Setting executable bit set on /home/jenkins/minikube-integration/18647-7542/.minikube/machines/flannel-381983 (perms=drwx------)
	I0416 01:23:12.065447   75162 main.go:141] libmachine: (flannel-381983) Setting executable bit set on /home/jenkins/minikube-integration/18647-7542/.minikube/machines (perms=drwxr-xr-x)
	I0416 01:23:12.065459   75162 main.go:141] libmachine: (flannel-381983) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/flannel-381983
	I0416 01:23:12.065475   75162 main.go:141] libmachine: (flannel-381983) Setting executable bit set on /home/jenkins/minikube-integration/18647-7542/.minikube (perms=drwxr-xr-x)
	I0416 01:23:12.065493   75162 main.go:141] libmachine: (flannel-381983) Setting executable bit set on /home/jenkins/minikube-integration/18647-7542 (perms=drwxrwxr-x)
	I0416 01:23:12.065503   75162 main.go:141] libmachine: (flannel-381983) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0416 01:23:12.065514   75162 main.go:141] libmachine: (flannel-381983) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0416 01:23:12.065524   75162 main.go:141] libmachine: (flannel-381983) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18647-7542/.minikube/machines
	I0416 01:23:12.065536   75162 main.go:141] libmachine: (flannel-381983) Creating domain...
	I0416 01:23:12.065549   75162 main.go:141] libmachine: (flannel-381983) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18647-7542/.minikube
	I0416 01:23:12.065569   75162 main.go:141] libmachine: (flannel-381983) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18647-7542
	I0416 01:23:12.065583   75162 main.go:141] libmachine: (flannel-381983) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0416 01:23:12.065597   75162 main.go:141] libmachine: (flannel-381983) DBG | Checking permissions on dir: /home/jenkins
	I0416 01:23:12.065607   75162 main.go:141] libmachine: (flannel-381983) DBG | Checking permissions on dir: /home
	I0416 01:23:12.065625   75162 main.go:141] libmachine: (flannel-381983) DBG | Skipping /home - not owner
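
	The permission lines above walk upward from the machine directory and set the executable (traverse) bit on every ancestor the CI user owns, so libvirt/qemu can reach the disk image; directories it does not own (here /home) are skipped. A rough sketch of that walk, under the assumption of a simple chmod loop rather than minikube's actual common.go logic:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // ensureTraversable adds the execute bits on every ancestor of path,
    // skipping directories the current user cannot chmod (e.g. not the owner).
    // Illustrative only; the real implementation differs in detail.
    func ensureTraversable(path string) error {
        for dir := path; dir != "/" && dir != "."; dir = filepath.Dir(dir) {
            info, err := os.Stat(dir)
            if err != nil {
                return err
            }
            if err := os.Chmod(dir, info.Mode().Perm()|0o111); err != nil {
                // e.g. not the owner: log and keep climbing, as the log shows for /home
                fmt.Printf("skipping %s: %v\n", dir, err)
            }
        }
        return nil
    }

    func main() {
        _ = ensureTraversable("/home/jenkins/minikube-integration/18647-7542/.minikube/machines/flannel-381983")
    }
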
	I0416 01:23:12.066702   75162 main.go:141] libmachine: (flannel-381983) define libvirt domain using xml: 
	I0416 01:23:12.066730   75162 main.go:141] libmachine: (flannel-381983) <domain type='kvm'>
	I0416 01:23:12.066741   75162 main.go:141] libmachine: (flannel-381983)   <name>flannel-381983</name>
	I0416 01:23:12.066749   75162 main.go:141] libmachine: (flannel-381983)   <memory unit='MiB'>3072</memory>
	I0416 01:23:12.066762   75162 main.go:141] libmachine: (flannel-381983)   <vcpu>2</vcpu>
	I0416 01:23:12.066772   75162 main.go:141] libmachine: (flannel-381983)   <features>
	I0416 01:23:12.066780   75162 main.go:141] libmachine: (flannel-381983)     <acpi/>
	I0416 01:23:12.066788   75162 main.go:141] libmachine: (flannel-381983)     <apic/>
	I0416 01:23:12.066802   75162 main.go:141] libmachine: (flannel-381983)     <pae/>
	I0416 01:23:12.066809   75162 main.go:141] libmachine: (flannel-381983)     
	I0416 01:23:12.066822   75162 main.go:141] libmachine: (flannel-381983)   </features>
	I0416 01:23:12.066829   75162 main.go:141] libmachine: (flannel-381983)   <cpu mode='host-passthrough'>
	I0416 01:23:12.066834   75162 main.go:141] libmachine: (flannel-381983)   
	I0416 01:23:12.066839   75162 main.go:141] libmachine: (flannel-381983)   </cpu>
	I0416 01:23:12.066846   75162 main.go:141] libmachine: (flannel-381983)   <os>
	I0416 01:23:12.066853   75162 main.go:141] libmachine: (flannel-381983)     <type>hvm</type>
	I0416 01:23:12.066865   75162 main.go:141] libmachine: (flannel-381983)     <boot dev='cdrom'/>
	I0416 01:23:12.066875   75162 main.go:141] libmachine: (flannel-381983)     <boot dev='hd'/>
	I0416 01:23:12.066884   75162 main.go:141] libmachine: (flannel-381983)     <bootmenu enable='no'/>
	I0416 01:23:12.066893   75162 main.go:141] libmachine: (flannel-381983)   </os>
	I0416 01:23:12.066902   75162 main.go:141] libmachine: (flannel-381983)   <devices>
	I0416 01:23:12.066913   75162 main.go:141] libmachine: (flannel-381983)     <disk type='file' device='cdrom'>
	I0416 01:23:12.066925   75162 main.go:141] libmachine: (flannel-381983)       <source file='/home/jenkins/minikube-integration/18647-7542/.minikube/machines/flannel-381983/boot2docker.iso'/>
	I0416 01:23:12.066935   75162 main.go:141] libmachine: (flannel-381983)       <target dev='hdc' bus='scsi'/>
	I0416 01:23:12.066943   75162 main.go:141] libmachine: (flannel-381983)       <readonly/>
	I0416 01:23:12.066954   75162 main.go:141] libmachine: (flannel-381983)     </disk>
	I0416 01:23:12.066965   75162 main.go:141] libmachine: (flannel-381983)     <disk type='file' device='disk'>
	I0416 01:23:12.066984   75162 main.go:141] libmachine: (flannel-381983)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0416 01:23:12.066999   75162 main.go:141] libmachine: (flannel-381983)       <source file='/home/jenkins/minikube-integration/18647-7542/.minikube/machines/flannel-381983/flannel-381983.rawdisk'/>
	I0416 01:23:12.067010   75162 main.go:141] libmachine: (flannel-381983)       <target dev='hda' bus='virtio'/>
	I0416 01:23:12.067017   75162 main.go:141] libmachine: (flannel-381983)     </disk>
	I0416 01:23:12.067024   75162 main.go:141] libmachine: (flannel-381983)     <interface type='network'>
	I0416 01:23:12.067037   75162 main.go:141] libmachine: (flannel-381983)       <source network='mk-flannel-381983'/>
	I0416 01:23:12.067048   75162 main.go:141] libmachine: (flannel-381983)       <model type='virtio'/>
	I0416 01:23:12.067062   75162 main.go:141] libmachine: (flannel-381983)     </interface>
	I0416 01:23:12.067073   75162 main.go:141] libmachine: (flannel-381983)     <interface type='network'>
	I0416 01:23:12.067083   75162 main.go:141] libmachine: (flannel-381983)       <source network='default'/>
	I0416 01:23:12.067093   75162 main.go:141] libmachine: (flannel-381983)       <model type='virtio'/>
	I0416 01:23:12.067099   75162 main.go:141] libmachine: (flannel-381983)     </interface>
	I0416 01:23:12.067105   75162 main.go:141] libmachine: (flannel-381983)     <serial type='pty'>
	I0416 01:23:12.067114   75162 main.go:141] libmachine: (flannel-381983)       <target port='0'/>
	I0416 01:23:12.067124   75162 main.go:141] libmachine: (flannel-381983)     </serial>
	I0416 01:23:12.067137   75162 main.go:141] libmachine: (flannel-381983)     <console type='pty'>
	I0416 01:23:12.067148   75162 main.go:141] libmachine: (flannel-381983)       <target type='serial' port='0'/>
	I0416 01:23:12.067159   75162 main.go:141] libmachine: (flannel-381983)     </console>
	I0416 01:23:12.067169   75162 main.go:141] libmachine: (flannel-381983)     <rng model='virtio'>
	I0416 01:23:12.067187   75162 main.go:141] libmachine: (flannel-381983)       <backend model='random'>/dev/random</backend>
	I0416 01:23:12.067197   75162 main.go:141] libmachine: (flannel-381983)     </rng>
	I0416 01:23:12.067205   75162 main.go:141] libmachine: (flannel-381983)     
	I0416 01:23:12.067215   75162 main.go:141] libmachine: (flannel-381983)     
	I0416 01:23:12.067224   75162 main.go:141] libmachine: (flannel-381983)   </devices>
	I0416 01:23:12.067233   75162 main.go:141] libmachine: (flannel-381983) </domain>
	I0416 01:23:12.067242   75162 main.go:141] libmachine: (flannel-381983) 
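
	minikube's kvm2 driver defines the domain above through the libvirt API; when a create step like this fails in CI, the same XML can be exercised by hand. A hedged sketch that shells out to the virsh CLI instead of the Go libvirt bindings (so it is easy to run outside the driver):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // defineAndStart writes domainXML to a temp file, defines it with virsh and
    // starts it. Sketch for manual reproduction only; error handling is minimal.
    func defineAndStart(name, domainXML string) error {
        f, err := os.CreateTemp("", name+"-*.xml")
        if err != nil {
            return err
        }
        defer os.Remove(f.Name())
        if _, err := f.WriteString(domainXML); err != nil {
            return err
        }
        f.Close()
        if out, err := exec.Command("virsh", "-c", "qemu:///system", "define", f.Name()).CombinedOutput(); err != nil {
            return fmt.Errorf("virsh define: %v: %s", err, out)
        }
        if out, err := exec.Command("virsh", "-c", "qemu:///system", "start", name).CombinedOutput(); err != nil {
            return fmt.Errorf("virsh start: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        // the domain XML would be the <domain type='kvm'>...</domain> block from the log above
        fmt.Println(defineAndStart("flannel-381983", "<domain type='kvm'>...</domain>"))
    }
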
	I0416 01:23:12.071368   75162 main.go:141] libmachine: (flannel-381983) DBG | domain flannel-381983 has defined MAC address 52:54:00:83:03:f6 in network default
	I0416 01:23:12.071988   75162 main.go:141] libmachine: (flannel-381983) Ensuring networks are active...
	I0416 01:23:12.072018   75162 main.go:141] libmachine: (flannel-381983) DBG | domain flannel-381983 has defined MAC address 52:54:00:ed:0e:ec in network mk-flannel-381983
	I0416 01:23:12.072627   75162 main.go:141] libmachine: (flannel-381983) Ensuring network default is active
	I0416 01:23:12.073061   75162 main.go:141] libmachine: (flannel-381983) Ensuring network mk-flannel-381983 is active
	I0416 01:23:12.073719   75162 main.go:141] libmachine: (flannel-381983) Getting domain xml...
	I0416 01:23:12.074574   75162 main.go:141] libmachine: (flannel-381983) Creating domain...
	I0416 01:23:13.403507   75162 main.go:141] libmachine: (flannel-381983) Waiting to get IP...
	I0416 01:23:13.404587   75162 main.go:141] libmachine: (flannel-381983) DBG | domain flannel-381983 has defined MAC address 52:54:00:ed:0e:ec in network mk-flannel-381983
	I0416 01:23:13.405053   75162 main.go:141] libmachine: (flannel-381983) DBG | unable to find current IP address of domain flannel-381983 in network mk-flannel-381983
	I0416 01:23:13.405114   75162 main.go:141] libmachine: (flannel-381983) DBG | I0416 01:23:13.405063   75186 retry.go:31] will retry after 224.616726ms: waiting for machine to come up
	I0416 01:23:13.633131   75162 main.go:141] libmachine: (flannel-381983) DBG | domain flannel-381983 has defined MAC address 52:54:00:ed:0e:ec in network mk-flannel-381983
	I0416 01:23:13.633645   75162 main.go:141] libmachine: (flannel-381983) DBG | unable to find current IP address of domain flannel-381983 in network mk-flannel-381983
	I0416 01:23:13.633676   75162 main.go:141] libmachine: (flannel-381983) DBG | I0416 01:23:13.633597   75186 retry.go:31] will retry after 273.447762ms: waiting for machine to come up
	I0416 01:23:13.909282   75162 main.go:141] libmachine: (flannel-381983) DBG | domain flannel-381983 has defined MAC address 52:54:00:ed:0e:ec in network mk-flannel-381983
	I0416 01:23:13.909958   75162 main.go:141] libmachine: (flannel-381983) DBG | unable to find current IP address of domain flannel-381983 in network mk-flannel-381983
	I0416 01:23:13.909993   75162 main.go:141] libmachine: (flannel-381983) DBG | I0416 01:23:13.909867   75186 retry.go:31] will retry after 311.860293ms: waiting for machine to come up
	I0416 01:23:14.223671   75162 main.go:141] libmachine: (flannel-381983) DBG | domain flannel-381983 has defined MAC address 52:54:00:ed:0e:ec in network mk-flannel-381983
	I0416 01:23:14.224204   75162 main.go:141] libmachine: (flannel-381983) DBG | unable to find current IP address of domain flannel-381983 in network mk-flannel-381983
	I0416 01:23:14.224236   75162 main.go:141] libmachine: (flannel-381983) DBG | I0416 01:23:14.224166   75186 retry.go:31] will retry after 496.328911ms: waiting for machine to come up
	I0416 01:23:14.721910   75162 main.go:141] libmachine: (flannel-381983) DBG | domain flannel-381983 has defined MAC address 52:54:00:ed:0e:ec in network mk-flannel-381983
	I0416 01:23:14.722530   75162 main.go:141] libmachine: (flannel-381983) DBG | unable to find current IP address of domain flannel-381983 in network mk-flannel-381983
	I0416 01:23:14.722573   75162 main.go:141] libmachine: (flannel-381983) DBG | I0416 01:23:14.722437   75186 retry.go:31] will retry after 460.94439ms: waiting for machine to come up
	I0416 01:23:15.185239   75162 main.go:141] libmachine: (flannel-381983) DBG | domain flannel-381983 has defined MAC address 52:54:00:ed:0e:ec in network mk-flannel-381983
	I0416 01:23:15.185783   75162 main.go:141] libmachine: (flannel-381983) DBG | unable to find current IP address of domain flannel-381983 in network mk-flannel-381983
	I0416 01:23:15.185805   75162 main.go:141] libmachine: (flannel-381983) DBG | I0416 01:23:15.185746   75186 retry.go:31] will retry after 895.31394ms: waiting for machine to come up
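
	The "unable to find current IP address ... will retry after ..." DBG lines above come from a back-off loop: the driver repeatedly asks libvirt for the domain's DHCP lease and sleeps a growing, slightly randomized delay between attempts until an IP appears. A minimal loop of the same shape, assuming a caller-supplied probe function (not the retry.go implementation itself):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP keeps calling probe until it returns an address or the timeout
    // elapses, sleeping an increasing, jittered delay between attempts.
    func waitForIP(probe func() (string, bool), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, ok := probe(); ok {
                return ip, nil
            }
            jitter := time.Duration(rand.Int63n(int64(delay) / 2))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", delay+jitter)
            time.Sleep(delay + jitter)
            if delay < 5*time.Second {
                delay *= 2
            }
        }
        return "", errors.New("timed out waiting for machine IP")
    }

    func main() {
        _, err := waitForIP(func() (string, bool) { return "", false }, 2*time.Second)
        fmt.Println(err)
    }
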
	I0416 01:23:11.241679   72357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:23:11.742443   72357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:23:12.241610   72357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:23:12.742369   72357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:23:13.242366   72357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:23:13.741508   72357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:23:14.242413   72357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:23:14.741876   72357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:23:15.242368   72357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:23:15.742164   72357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:23:16.242119   72357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:23:16.742272   72357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:23:17.242117   72357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:23:17.742395   72357 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:23:17.858693   72357 kubeadm.go:1107] duration metric: took 12.268380765s to wait for elevateKubeSystemPrivileges
	W0416 01:23:17.858741   72357 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0416 01:23:17.858750   72357 kubeadm.go:393] duration metric: took 25.04715654s to StartCluster
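
	The repeated `kubectl get sa default` runs above are how minikube waits out elevateKubeSystemPrivileges: once the default service account exists, the control plane is serving and kube-system privileges have been applied. A simple polling loop of the same shape, shelling out to the kubectl binary path seen in the log (a sketch; minikube drives this through its ssh_runner rather than running kubectl locally):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultSA polls `kubectl get sa default` every interval until it
    // succeeds or the timeout expires.
    func waitForDefaultSA(kubectl, kubeconfig string, interval, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
            if err := cmd.Run(); err == nil {
                return nil
            }
            time.Sleep(interval)
        }
        return fmt.Errorf("default service account not ready after %v", timeout)
    }

    func main() {
        err := waitForDefaultSA("/var/lib/minikube/binaries/v1.29.3/kubectl",
            "/var/lib/minikube/kubeconfig", 500*time.Millisecond, 2*time.Minute)
        fmt.Println(err)
    }
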
	I0416 01:23:17.858771   72357 settings.go:142] acquiring lock: {Name:mk6e42a297b4f7bfb79727f203ae36d752cbb6a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:23:17.858861   72357 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0416 01:23:17.860915   72357 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/kubeconfig: {Name:mkbb3b028de7d57df8335e83f6dfa1b0eacb2fb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:23:17.861182   72357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0416 01:23:17.861192   72357 start.go:234] Will wait 15m0s for node &{Name: IP:192.168.39.168 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0416 01:23:17.863170   72357 out.go:177] * Verifying Kubernetes components...
	I0416 01:23:17.861293   72357 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0416 01:23:17.861416   72357 config.go:182] Loaded profile config "custom-flannel-381983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 01:23:17.863258   72357 addons.go:69] Setting storage-provisioner=true in profile "custom-flannel-381983"
	I0416 01:23:17.864799   72357 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 01:23:17.864828   72357 addons.go:234] Setting addon storage-provisioner=true in "custom-flannel-381983"
	I0416 01:23:17.863263   72357 addons.go:69] Setting default-storageclass=true in profile "custom-flannel-381983"
	I0416 01:23:17.864875   72357 host.go:66] Checking if "custom-flannel-381983" exists ...
	I0416 01:23:17.864894   72357 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "custom-flannel-381983"
	I0416 01:23:17.865261   72357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:23:17.865303   72357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:23:17.865329   72357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:23:17.865362   72357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:23:17.882838   72357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41359
	I0416 01:23:17.883415   72357 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:23:17.884019   72357 main.go:141] libmachine: Using API Version  1
	I0416 01:23:17.884040   72357 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:23:17.884220   72357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32963
	I0416 01:23:17.884414   72357 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:23:17.884588   72357 main.go:141] libmachine: (custom-flannel-381983) Calling .GetState
	I0416 01:23:17.884628   72357 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:23:17.885088   72357 main.go:141] libmachine: Using API Version  1
	I0416 01:23:17.885107   72357 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:23:17.885493   72357 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:23:17.888677   72357 addons.go:234] Setting addon default-storageclass=true in "custom-flannel-381983"
	I0416 01:23:17.888723   72357 host.go:66] Checking if "custom-flannel-381983" exists ...
	I0416 01:23:17.889107   72357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:23:17.889140   72357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:23:17.889809   72357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:23:17.889861   72357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:23:17.906162   72357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43433
	I0416 01:23:17.906754   72357 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:23:17.907657   72357 main.go:141] libmachine: Using API Version  1
	I0416 01:23:17.907687   72357 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:23:17.908112   72357 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:23:17.908758   72357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:23:17.908804   72357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:23:17.908933   72357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45063
	I0416 01:23:17.909319   72357 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:23:17.913282   72357 main.go:141] libmachine: Using API Version  1
	I0416 01:23:17.913303   72357 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:23:17.913837   72357 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:23:17.914039   72357 main.go:141] libmachine: (custom-flannel-381983) Calling .GetState
	I0416 01:23:17.917144   72357 main.go:141] libmachine: (custom-flannel-381983) Calling .DriverName
	I0416 01:23:17.919392   72357 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 01:23:14.908479   73348 crio.go:462] duration metric: took 1.631219336s to copy over tarball
	I0416 01:23:14.908537   73348 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0416 01:23:17.649536   73348 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.740971031s)
	I0416 01:23:17.649569   73348 crio.go:469] duration metric: took 2.741066118s to extract the tarball
	I0416 01:23:17.649578   73348 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0416 01:23:17.689287   73348 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 01:23:17.739178   73348 crio.go:514] all images are preloaded for cri-o runtime.
	I0416 01:23:17.739211   73348 cache_images.go:84] Images are preloaded, skipping loading
	I0416 01:23:17.739229   73348 kubeadm.go:928] updating node { 192.168.72.44 8443 v1.29.3 crio true true} ...
	I0416 01:23:17.739372   73348 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=enable-default-cni-381983 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.44
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:enable-default-cni-381983 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0416 01:23:17.739453   73348 ssh_runner.go:195] Run: crio config
	I0416 01:23:17.803645   73348 cni.go:84] Creating CNI manager for "bridge"
	I0416 01:23:17.803676   73348 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 01:23:17.803705   73348 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.44 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:enable-default-cni-381983 NodeName:enable-default-cni-381983 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.44"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.44 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 01:23:17.803946   73348 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.44
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "enable-default-cni-381983"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.44
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.44"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
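
	The kubeadm/kubelet/kube-proxy YAML above is rendered from the options struct a few lines earlier and then copied to /var/tmp/minikube/kubeadm.yaml.new on the node. A toy text/template rendering of just the ClusterConfiguration portion, with struct and field names chosen for this example rather than taken from minikube's real templates:

    package main

    import (
        "os"
        "text/template"
    )

    // clusterOpts holds the values fed to the template; names are illustrative.
    type clusterOpts struct {
        KubernetesVersion string
        PodSubnet         string
        ServiceSubnet     string
        ControlPlane      string
    }

    const clusterCfg = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    kubernetesVersion: {{.KubernetesVersion}}
    controlPlaneEndpoint: {{.ControlPlane}}:8443
    networking:
      dnsDomain: cluster.local
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceSubnet}}
    `

    func main() {
        t := template.Must(template.New("cc").Parse(clusterCfg))
        _ = t.Execute(os.Stdout, clusterOpts{
            KubernetesVersion: "v1.29.3",
            PodSubnet:         "10.244.0.0/16",
            ServiceSubnet:     "10.96.0.0/12",
            ControlPlane:      "control-plane.minikube.internal",
        })
    }
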
	
	I0416 01:23:17.804024   73348 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0416 01:23:17.817245   73348 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 01:23:17.817335   73348 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0416 01:23:17.828413   73348 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0416 01:23:17.852639   73348 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 01:23:17.882680   73348 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0416 01:23:17.913539   73348 ssh_runner.go:195] Run: grep 192.168.72.44	control-plane.minikube.internal$ /etc/hosts
	I0416 01:23:17.919778   73348 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.44	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 01:23:17.940164   73348 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 01:23:18.103307   73348 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 01:23:18.124425   73348 certs.go:68] Setting up /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/enable-default-cni-381983 for IP: 192.168.72.44
	I0416 01:23:18.124470   73348 certs.go:194] generating shared ca certs ...
	I0416 01:23:18.124492   73348 certs.go:226] acquiring lock for ca certs: {Name:mkcfa1570e683d94647c63485e1bbb8cf0788316 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:23:18.124685   73348 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key
	I0416 01:23:18.124741   73348 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key
	I0416 01:23:18.124760   73348 certs.go:256] generating profile certs ...
	I0416 01:23:18.124833   73348 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/enable-default-cni-381983/client.key
	I0416 01:23:18.124853   73348 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/enable-default-cni-381983/client.crt with IP's: []
	I0416 01:23:18.393944   73348 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/enable-default-cni-381983/client.crt ...
	I0416 01:23:18.393973   73348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/enable-default-cni-381983/client.crt: {Name:mkef64aaff98345f8d9d9cb36516d99cb947efa4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:23:18.394169   73348 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/enable-default-cni-381983/client.key ...
	I0416 01:23:18.394187   73348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/enable-default-cni-381983/client.key: {Name:mkf9414712ddcacb183ae37adf996a81834274b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:23:18.394339   73348 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/enable-default-cni-381983/apiserver.key.61062e0d
	I0416 01:23:18.394360   73348 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/enable-default-cni-381983/apiserver.crt.61062e0d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.44]
	I0416 01:23:18.473496   73348 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/enable-default-cni-381983/apiserver.crt.61062e0d ...
	I0416 01:23:18.473527   73348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/enable-default-cni-381983/apiserver.crt.61062e0d: {Name:mk327458147565efd8a5947caaec4f72144fb2b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:23:18.494600   73348 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/enable-default-cni-381983/apiserver.key.61062e0d ...
	I0416 01:23:18.494636   73348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/enable-default-cni-381983/apiserver.key.61062e0d: {Name:mkc7ba6c7a675637515de46f20e0688b0711ae46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:23:18.494755   73348 certs.go:381] copying /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/enable-default-cni-381983/apiserver.crt.61062e0d -> /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/enable-default-cni-381983/apiserver.crt
	I0416 01:23:18.494869   73348 certs.go:385] copying /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/enable-default-cni-381983/apiserver.key.61062e0d -> /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/enable-default-cni-381983/apiserver.key
	I0416 01:23:18.494953   73348 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/enable-default-cni-381983/proxy-client.key
	I0416 01:23:18.494974   73348 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/enable-default-cni-381983/proxy-client.crt with IP's: []
	I0416 01:23:18.581344   73348 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/enable-default-cni-381983/proxy-client.crt ...
	I0416 01:23:18.581373   73348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/enable-default-cni-381983/proxy-client.crt: {Name:mkcc98ff27b5d82412a274cd149b4b400768bd6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:23:18.581554   73348 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/enable-default-cni-381983/proxy-client.key ...
	I0416 01:23:18.581575   73348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/enable-default-cni-381983/proxy-client.key: {Name:mkd140d3df27a2ee86f886581576524e7ecf68ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
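
	The certs.go/crypto.go steps above mint a client cert, an apiserver serving cert whose SANs are the IPs listed in the log (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.72.44), and a proxy-client (aggregator) cert, all chained to the cached minikube CA. A compact, standalone sketch of issuing a cert with IP SANs via crypto/x509; it self-signs for brevity instead of signing with the CA:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SAN IPs mirroring the apiserver cert generated in the log above.
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.72.44"),
            },
        }
        // Self-signed for the sketch; minikube signs profile certs with its cached CA.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
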
	I0416 01:23:18.581792   73348 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem (1338 bytes)
	W0416 01:23:18.581836   73348 certs.go:480] ignoring /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897_empty.pem, impossibly tiny 0 bytes
	I0416 01:23:18.581851   73348 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem (1679 bytes)
	I0416 01:23:18.581884   73348 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem (1082 bytes)
	I0416 01:23:18.581912   73348 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem (1123 bytes)
	I0416 01:23:18.581948   73348 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem (1675 bytes)
	I0416 01:23:18.582005   73348 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem (1708 bytes)
	I0416 01:23:18.582604   73348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 01:23:18.613214   73348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 01:23:18.643866   73348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 01:23:18.673261   73348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0416 01:23:18.705268   73348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/enable-default-cni-381983/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0416 01:23:18.742162   73348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/enable-default-cni-381983/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0416 01:23:18.779578   73348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/enable-default-cni-381983/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 01:23:18.811374   73348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/enable-default-cni-381983/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0416 01:23:18.853446   73348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 01:23:18.889710   73348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem --> /usr/share/ca-certificates/14897.pem (1338 bytes)
	I0416 01:23:18.920176   73348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /usr/share/ca-certificates/148972.pem (1708 bytes)
	I0416 01:23:18.949400   73348 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 01:23:18.971017   73348 ssh_runner.go:195] Run: openssl version
	I0416 01:23:18.978789   73348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 01:23:18.992973   73348 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:23:18.998133   73348 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:23:18.998249   73348 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:23:19.004516   73348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 01:23:19.020483   73348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14897.pem && ln -fs /usr/share/ca-certificates/14897.pem /etc/ssl/certs/14897.pem"
	I0416 01:23:19.036475   73348 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14897.pem
	I0416 01:23:19.041959   73348 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 23:49 /usr/share/ca-certificates/14897.pem
	I0416 01:23:19.042041   73348 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14897.pem
	I0416 01:23:19.050181   73348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14897.pem /etc/ssl/certs/51391683.0"
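
	The openssl/ln pairs above are the usual c_rehash-style trick: compute the certificate's OpenSSL subject hash and expose the PEM under /etc/ssl/certs/<hash>.0 (b5213941.0 for minikubeCA.pem earlier) so OpenSSL-based tooling trusts it. A small sketch reproducing the two steps, shelling out to openssl exactly as the log does:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // trustCert links certPath into certsDir under its OpenSSL subject hash,
    // mirroring the `openssl x509 -hash -noout` + `ln -fs` pair in the log above.
    func trustCert(certPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return fmt.Errorf("openssl x509 -hash: %w", err)
        }
        hash := strings.TrimSpace(string(out))
        link := fmt.Sprintf("%s/%s.0", certsDir, hash)
        _ = os.Remove(link) // -f semantics
        return os.Symlink(certPath, link)
    }

    func main() {
        fmt.Println(trustCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
    }
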
	I0416 01:23:19.066250   73348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148972.pem && ln -fs /usr/share/ca-certificates/148972.pem /etc/ssl/certs/148972.pem"
	I0416 01:23:19.080863   73348 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148972.pem
	I0416 01:23:17.921323   72357 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 01:23:17.921337   72357 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0416 01:23:17.921352   72357 main.go:141] libmachine: (custom-flannel-381983) Calling .GetSSHHostname
	I0416 01:23:17.924632   72357 main.go:141] libmachine: (custom-flannel-381983) DBG | domain custom-flannel-381983 has defined MAC address 52:54:00:d7:6b:54 in network mk-custom-flannel-381983
	I0416 01:23:17.925059   72357 main.go:141] libmachine: (custom-flannel-381983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:6b:54", ip: ""} in network mk-custom-flannel-381983: {Iface:virbr2 ExpiryTime:2024-04-16 02:22:37 +0000 UTC Type:0 Mac:52:54:00:d7:6b:54 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:custom-flannel-381983 Clientid:01:52:54:00:d7:6b:54}
	I0416 01:23:17.925080   72357 main.go:141] libmachine: (custom-flannel-381983) DBG | domain custom-flannel-381983 has defined IP address 192.168.39.168 and MAC address 52:54:00:d7:6b:54 in network mk-custom-flannel-381983
	I0416 01:23:17.925444   72357 main.go:141] libmachine: (custom-flannel-381983) Calling .GetSSHPort
	I0416 01:23:17.926433   72357 main.go:141] libmachine: (custom-flannel-381983) Calling .GetSSHKeyPath
	I0416 01:23:17.926617   72357 main.go:141] libmachine: (custom-flannel-381983) Calling .GetSSHUsername
	I0416 01:23:17.926740   72357 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/custom-flannel-381983/id_rsa Username:docker}
	I0416 01:23:17.931397   72357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32937
	I0416 01:23:17.931829   72357 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:23:17.932359   72357 main.go:141] libmachine: Using API Version  1
	I0416 01:23:17.932381   72357 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:23:17.932745   72357 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:23:17.933045   72357 main.go:141] libmachine: (custom-flannel-381983) Calling .GetState
	I0416 01:23:17.934641   72357 main.go:141] libmachine: (custom-flannel-381983) Calling .DriverName
	I0416 01:23:17.935135   72357 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0416 01:23:17.935152   72357 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0416 01:23:17.935170   72357 main.go:141] libmachine: (custom-flannel-381983) Calling .GetSSHHostname
	I0416 01:23:17.938256   72357 main.go:141] libmachine: (custom-flannel-381983) DBG | domain custom-flannel-381983 has defined MAC address 52:54:00:d7:6b:54 in network mk-custom-flannel-381983
	I0416 01:23:17.938596   72357 main.go:141] libmachine: (custom-flannel-381983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d7:6b:54", ip: ""} in network mk-custom-flannel-381983: {Iface:virbr2 ExpiryTime:2024-04-16 02:22:37 +0000 UTC Type:0 Mac:52:54:00:d7:6b:54 Iaid: IPaddr:192.168.39.168 Prefix:24 Hostname:custom-flannel-381983 Clientid:01:52:54:00:d7:6b:54}
	I0416 01:23:17.938615   72357 main.go:141] libmachine: (custom-flannel-381983) DBG | domain custom-flannel-381983 has defined IP address 192.168.39.168 and MAC address 52:54:00:d7:6b:54 in network mk-custom-flannel-381983
	I0416 01:23:17.938780   72357 main.go:141] libmachine: (custom-flannel-381983) Calling .GetSSHPort
	I0416 01:23:17.939896   72357 main.go:141] libmachine: (custom-flannel-381983) Calling .GetSSHKeyPath
	I0416 01:23:17.940055   72357 main.go:141] libmachine: (custom-flannel-381983) Calling .GetSSHUsername
	I0416 01:23:17.940203   72357 sshutil.go:53] new ssh client: &{IP:192.168.39.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/custom-flannel-381983/id_rsa Username:docker}
	I0416 01:23:18.096005   72357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0416 01:23:18.132441   72357 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 01:23:18.359488   72357 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0416 01:23:18.384618   72357 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 01:23:18.948328   72357 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0416 01:23:18.949754   72357 node_ready.go:35] waiting up to 15m0s for node "custom-flannel-381983" to be "Ready" ...
	I0416 01:23:20.087720   72357 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.728192119s)
	I0416 01:23:20.087766   72357 main.go:141] libmachine: Making call to close driver server
	I0416 01:23:20.087777   72357 main.go:141] libmachine: (custom-flannel-381983) Calling .Close
	I0416 01:23:20.088072   72357 main.go:141] libmachine: (custom-flannel-381983) DBG | Closing plugin on server side
	I0416 01:23:20.088132   72357 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:23:20.088163   72357 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:23:20.088173   72357 main.go:141] libmachine: Making call to close driver server
	I0416 01:23:20.088180   72357 main.go:141] libmachine: (custom-flannel-381983) Calling .Close
	I0416 01:23:20.088507   72357 main.go:141] libmachine: (custom-flannel-381983) DBG | Closing plugin on server side
	I0416 01:23:20.088508   72357 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:23:20.088536   72357 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:23:20.103864   72357 kapi.go:248] "coredns" deployment in "kube-system" namespace and "custom-flannel-381983" context rescaled to 1 replicas
	I0416 01:23:20.120666   72357 main.go:141] libmachine: Making call to close driver server
	I0416 01:23:20.120688   72357 main.go:141] libmachine: (custom-flannel-381983) Calling .Close
	I0416 01:23:20.120961   72357 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:23:20.120973   72357 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:23:20.217833   72357 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.833174557s)
	I0416 01:23:20.217883   72357 main.go:141] libmachine: Making call to close driver server
	I0416 01:23:20.217896   72357 main.go:141] libmachine: (custom-flannel-381983) Calling .Close
	I0416 01:23:20.218286   72357 main.go:141] libmachine: (custom-flannel-381983) DBG | Closing plugin on server side
	I0416 01:23:20.218292   72357 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:23:20.218323   72357 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:23:20.218337   72357 main.go:141] libmachine: Making call to close driver server
	I0416 01:23:20.218345   72357 main.go:141] libmachine: (custom-flannel-381983) Calling .Close
	I0416 01:23:20.218606   72357 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:23:20.218621   72357 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:23:20.220985   72357 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0416 01:23:16.082494   75162 main.go:141] libmachine: (flannel-381983) DBG | domain flannel-381983 has defined MAC address 52:54:00:ed:0e:ec in network mk-flannel-381983
	I0416 01:23:16.083141   75162 main.go:141] libmachine: (flannel-381983) DBG | unable to find current IP address of domain flannel-381983 in network mk-flannel-381983
	I0416 01:23:16.083172   75162 main.go:141] libmachine: (flannel-381983) DBG | I0416 01:23:16.083064   75186 retry.go:31] will retry after 1.034396902s: waiting for machine to come up
	I0416 01:23:17.119553   75162 main.go:141] libmachine: (flannel-381983) DBG | domain flannel-381983 has defined MAC address 52:54:00:ed:0e:ec in network mk-flannel-381983
	I0416 01:23:17.120118   75162 main.go:141] libmachine: (flannel-381983) DBG | unable to find current IP address of domain flannel-381983 in network mk-flannel-381983
	I0416 01:23:17.120157   75162 main.go:141] libmachine: (flannel-381983) DBG | I0416 01:23:17.120048   75186 retry.go:31] will retry after 1.305649352s: waiting for machine to come up
	I0416 01:23:18.427603   75162 main.go:141] libmachine: (flannel-381983) DBG | domain flannel-381983 has defined MAC address 52:54:00:ed:0e:ec in network mk-flannel-381983
	I0416 01:23:18.427991   75162 main.go:141] libmachine: (flannel-381983) DBG | unable to find current IP address of domain flannel-381983 in network mk-flannel-381983
	I0416 01:23:18.428015   75162 main.go:141] libmachine: (flannel-381983) DBG | I0416 01:23:18.427945   75186 retry.go:31] will retry after 1.214472055s: waiting for machine to come up
	I0416 01:23:19.644352   75162 main.go:141] libmachine: (flannel-381983) DBG | domain flannel-381983 has defined MAC address 52:54:00:ed:0e:ec in network mk-flannel-381983
	I0416 01:23:19.644936   75162 main.go:141] libmachine: (flannel-381983) DBG | unable to find current IP address of domain flannel-381983 in network mk-flannel-381983
	I0416 01:23:19.644965   75162 main.go:141] libmachine: (flannel-381983) DBG | I0416 01:23:19.644894   75186 retry.go:31] will retry after 1.860426392s: waiting for machine to come up
	I0416 01:23:20.222667   72357 addons.go:505] duration metric: took 2.361390496s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0416 01:23:20.954322   72357 node_ready.go:53] node "custom-flannel-381983" has status "Ready":"False"
	I0416 01:23:19.085996   73348 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 23:49 /usr/share/ca-certificates/148972.pem
	I0416 01:23:19.293434   73348 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148972.pem
	I0416 01:23:19.301130   73348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148972.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 01:23:19.314384   73348 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 01:23:19.319024   73348 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0416 01:23:19.319086   73348 kubeadm.go:391] StartCluster: {Name:enable-default-cni-381983 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:enable-default-cni-381983 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.72.44 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 01:23:19.319187   73348 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0416 01:23:19.319276   73348 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 01:23:19.366128   73348 cri.go:89] found id: ""
	I0416 01:23:19.366200   73348 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0416 01:23:19.378276   73348 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 01:23:19.389809   73348 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 01:23:19.401432   73348 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 01:23:19.401449   73348 kubeadm.go:156] found existing configuration files:
	
	I0416 01:23:19.401493   73348 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 01:23:19.411763   73348 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 01:23:19.411820   73348 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 01:23:19.422548   73348 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 01:23:19.432878   73348 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 01:23:19.432949   73348 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 01:23:19.444731   73348 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 01:23:19.456390   73348 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 01:23:19.456448   73348 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 01:23:19.468808   73348 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 01:23:19.480989   73348 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 01:23:19.481068   73348 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
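	The four grep-then-rm pairs above are minikube's stale-config cleanup: any pre-existing kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is removed before kubeadm init runs. A condensed sketch of the same logic (illustrative only; the log runs each pair as a separate ssh_runner command):
		# Keep each kubeconfig only if it already points at the expected endpoint.
		for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
		  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
		    || sudo rm -f "/etc/kubernetes/$f"
		done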
	I0416 01:23:19.492125   73348 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 01:23:19.744593   73348 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 01:23:21.507512   75162 main.go:141] libmachine: (flannel-381983) DBG | domain flannel-381983 has defined MAC address 52:54:00:ed:0e:ec in network mk-flannel-381983
	I0416 01:23:21.508085   75162 main.go:141] libmachine: (flannel-381983) DBG | unable to find current IP address of domain flannel-381983 in network mk-flannel-381983
	I0416 01:23:21.508114   75162 main.go:141] libmachine: (flannel-381983) DBG | I0416 01:23:21.508040   75186 retry.go:31] will retry after 2.658524028s: waiting for machine to come up
	I0416 01:23:24.169359   75162 main.go:141] libmachine: (flannel-381983) DBG | domain flannel-381983 has defined MAC address 52:54:00:ed:0e:ec in network mk-flannel-381983
	I0416 01:23:24.169915   75162 main.go:141] libmachine: (flannel-381983) DBG | unable to find current IP address of domain flannel-381983 in network mk-flannel-381983
	I0416 01:23:24.169944   75162 main.go:141] libmachine: (flannel-381983) DBG | I0416 01:23:24.169876   75186 retry.go:31] will retry after 2.760444302s: waiting for machine to come up
	I0416 01:23:22.954372   72357 node_ready.go:53] node "custom-flannel-381983" has status "Ready":"False"
	I0416 01:23:24.955035   72357 node_ready.go:53] node "custom-flannel-381983" has status "Ready":"False"
	I0416 01:23:26.931612   75162 main.go:141] libmachine: (flannel-381983) DBG | domain flannel-381983 has defined MAC address 52:54:00:ed:0e:ec in network mk-flannel-381983
	I0416 01:23:26.932279   75162 main.go:141] libmachine: (flannel-381983) DBG | unable to find current IP address of domain flannel-381983 in network mk-flannel-381983
	I0416 01:23:26.932308   75162 main.go:141] libmachine: (flannel-381983) DBG | I0416 01:23:26.932190   75186 retry.go:31] will retry after 2.781792876s: waiting for machine to come up
	I0416 01:23:29.716807   75162 main.go:141] libmachine: (flannel-381983) DBG | domain flannel-381983 has defined MAC address 52:54:00:ed:0e:ec in network mk-flannel-381983
	I0416 01:23:29.717305   75162 main.go:141] libmachine: (flannel-381983) DBG | unable to find current IP address of domain flannel-381983 in network mk-flannel-381983
	I0416 01:23:29.717339   75162 main.go:141] libmachine: (flannel-381983) DBG | I0416 01:23:29.717256   75186 retry.go:31] will retry after 5.581429058s: waiting for machine to come up
	I0416 01:23:30.964131   73348 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0416 01:23:30.964210   73348 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 01:23:30.964321   73348 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 01:23:30.964487   73348 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 01:23:30.964636   73348 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 01:23:30.964749   73348 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 01:23:30.966747   73348 out.go:204]   - Generating certificates and keys ...
	I0416 01:23:30.966860   73348 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 01:23:30.966937   73348 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 01:23:30.967016   73348 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0416 01:23:30.967086   73348 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0416 01:23:30.967170   73348 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0416 01:23:30.967241   73348 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0416 01:23:30.967320   73348 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0416 01:23:30.967483   73348 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [enable-default-cni-381983 localhost] and IPs [192.168.72.44 127.0.0.1 ::1]
	I0416 01:23:30.967550   73348 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0416 01:23:30.967704   73348 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [enable-default-cni-381983 localhost] and IPs [192.168.72.44 127.0.0.1 ::1]
	I0416 01:23:30.967788   73348 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0416 01:23:30.967896   73348 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0416 01:23:30.967974   73348 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0416 01:23:30.968065   73348 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 01:23:30.968149   73348 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 01:23:30.968243   73348 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0416 01:23:30.968321   73348 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 01:23:30.968403   73348 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 01:23:30.968511   73348 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 01:23:30.968623   73348 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 01:23:30.968731   73348 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 01:23:30.971301   73348 out.go:204]   - Booting up control plane ...
	I0416 01:23:30.971404   73348 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 01:23:30.971502   73348 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 01:23:30.971582   73348 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 01:23:30.971729   73348 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 01:23:30.971838   73348 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 01:23:30.971898   73348 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 01:23:30.972092   73348 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 01:23:30.972198   73348 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.002018 seconds
	I0416 01:23:30.972361   73348 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0416 01:23:30.972535   73348 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0416 01:23:30.972609   73348 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0416 01:23:30.972844   73348 kubeadm.go:309] [mark-control-plane] Marking the node enable-default-cni-381983 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0416 01:23:30.972916   73348 kubeadm.go:309] [bootstrap-token] Using token: k0h5oa.s27mndxlzaegf6pv
	I0416 01:23:30.974444   73348 out.go:204]   - Configuring RBAC rules ...
	I0416 01:23:30.974583   73348 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0416 01:23:30.974690   73348 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0416 01:23:30.974868   73348 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0416 01:23:30.975022   73348 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0416 01:23:30.975187   73348 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0416 01:23:30.975332   73348 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0416 01:23:30.975491   73348 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0416 01:23:30.975569   73348 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0416 01:23:30.975655   73348 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0416 01:23:30.975670   73348 kubeadm.go:309] 
	I0416 01:23:30.975744   73348 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0416 01:23:30.975753   73348 kubeadm.go:309] 
	I0416 01:23:30.975857   73348 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0416 01:23:30.975874   73348 kubeadm.go:309] 
	I0416 01:23:30.975907   73348 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0416 01:23:30.975974   73348 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0416 01:23:30.976093   73348 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0416 01:23:30.976118   73348 kubeadm.go:309] 
	I0416 01:23:30.976206   73348 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0416 01:23:30.976215   73348 kubeadm.go:309] 
	I0416 01:23:30.976264   73348 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0416 01:23:30.976273   73348 kubeadm.go:309] 
	I0416 01:23:30.976337   73348 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0416 01:23:30.976425   73348 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0416 01:23:30.976526   73348 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0416 01:23:30.976542   73348 kubeadm.go:309] 
	I0416 01:23:30.976647   73348 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0416 01:23:30.976768   73348 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0416 01:23:30.976781   73348 kubeadm.go:309] 
	I0416 01:23:30.976879   73348 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token k0h5oa.s27mndxlzaegf6pv \
	I0416 01:23:30.977037   73348 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b25013486716312e69a135916990864bd549ec06035b8322dd39250241b50fde \
	I0416 01:23:30.977072   73348 kubeadm.go:309] 	--control-plane 
	I0416 01:23:30.977078   73348 kubeadm.go:309] 
	I0416 01:23:30.977230   73348 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0416 01:23:30.977241   73348 kubeadm.go:309] 
	I0416 01:23:30.977365   73348 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token k0h5oa.s27mndxlzaegf6pv \
	I0416 01:23:30.977517   73348 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b25013486716312e69a135916990864bd549ec06035b8322dd39250241b50fde 
	I0416 01:23:30.977540   73348 cni.go:84] Creating CNI manager for "bridge"
	I0416 01:23:30.979082   73348 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0416 01:23:27.454089   72357 node_ready.go:49] node "custom-flannel-381983" has status "Ready":"True"
	I0416 01:23:27.454118   72357 node_ready.go:38] duration metric: took 8.504312695s for node "custom-flannel-381983" to be "Ready" ...
	I0416 01:23:27.454130   72357 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:23:27.465006   72357 pod_ready.go:78] waiting up to 15m0s for pod "coredns-76f75df574-54x8q" in "kube-system" namespace to be "Ready" ...
	I0416 01:23:29.472582   72357 pod_ready.go:102] pod "coredns-76f75df574-54x8q" in "kube-system" namespace has status "Ready":"False"
	I0416 01:23:30.980494   73348 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0416 01:23:31.005289   73348 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
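	The 496-byte conflist copied above backs the "bridge" CNI manager created a few lines earlier. Its contents are not printed in this log; a bridge-plus-portmap conflist of the general shape minikube writes might look like the following (hypothetical reconstruction, including the subnet, not copied from the machine):
		{
		  "cniVersion": "0.3.1",
		  "name": "bridge",
		  "plugins": [
		    {
		      "type": "bridge",
		      "bridge": "bridge",
		      "isDefaultGateway": true,
		      "ipMasq": true,
		      "hairpinMode": true,
		      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
		    },
		    { "type": "portmap", "capabilities": { "portMappings": true } }
		  ]
		}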
	I0416 01:23:31.060598   73348 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0416 01:23:31.060657   73348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes enable-default-cni-381983 minikube.k8s.io/updated_at=2024_04_16T01_23_31_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=e98a202b00bb48daab8bbee2f0c7bcf1556f8388 minikube.k8s.io/name=enable-default-cni-381983 minikube.k8s.io/primary=true
	I0416 01:23:31.060668   73348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:23:31.150252   73348 ops.go:34] apiserver oom_adj: -16
	I0416 01:23:31.402239   73348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:23:31.902880   73348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:23:32.402644   73348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:23:32.902575   73348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:23:33.402593   73348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:23:33.902508   73348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:23:35.303241   75162 main.go:141] libmachine: (flannel-381983) DBG | domain flannel-381983 has defined MAC address 52:54:00:ed:0e:ec in network mk-flannel-381983
	I0416 01:23:35.303710   75162 main.go:141] libmachine: (flannel-381983) Found IP for machine: 192.168.50.155
	I0416 01:23:35.303740   75162 main.go:141] libmachine: (flannel-381983) Reserving static IP address...
	I0416 01:23:35.303754   75162 main.go:141] libmachine: (flannel-381983) DBG | domain flannel-381983 has current primary IP address 192.168.50.155 and MAC address 52:54:00:ed:0e:ec in network mk-flannel-381983
	I0416 01:23:35.304144   75162 main.go:141] libmachine: (flannel-381983) DBG | unable to find host DHCP lease matching {name: "flannel-381983", mac: "52:54:00:ed:0e:ec", ip: "192.168.50.155"} in network mk-flannel-381983
	I0416 01:23:35.377172   75162 main.go:141] libmachine: (flannel-381983) DBG | Getting to WaitForSSH function...
	I0416 01:23:35.377205   75162 main.go:141] libmachine: (flannel-381983) Reserved static IP address: 192.168.50.155
	I0416 01:23:35.377218   75162 main.go:141] libmachine: (flannel-381983) Waiting for SSH to be available...
	I0416 01:23:35.379774   75162 main.go:141] libmachine: (flannel-381983) DBG | domain flannel-381983 has defined MAC address 52:54:00:ed:0e:ec in network mk-flannel-381983
	I0416 01:23:35.380285   75162 main.go:141] libmachine: (flannel-381983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:ec", ip: ""} in network mk-flannel-381983: {Iface:virbr4 ExpiryTime:2024-04-16 02:23:28 +0000 UTC Type:0 Mac:52:54:00:ed:0e:ec Iaid: IPaddr:192.168.50.155 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ed:0e:ec}
	I0416 01:23:35.380312   75162 main.go:141] libmachine: (flannel-381983) DBG | domain flannel-381983 has defined IP address 192.168.50.155 and MAC address 52:54:00:ed:0e:ec in network mk-flannel-381983
	I0416 01:23:35.380338   75162 main.go:141] libmachine: (flannel-381983) DBG | Using SSH client type: external
	I0416 01:23:35.380355   75162 main.go:141] libmachine: (flannel-381983) DBG | Using SSH private key: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/flannel-381983/id_rsa (-rw-------)
	I0416 01:23:35.380426   75162 main.go:141] libmachine: (flannel-381983) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.155 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18647-7542/.minikube/machines/flannel-381983/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0416 01:23:35.380461   75162 main.go:141] libmachine: (flannel-381983) DBG | About to run SSH command:
	I0416 01:23:35.380480   75162 main.go:141] libmachine: (flannel-381983) DBG | exit 0
	I0416 01:23:35.505227   75162 main.go:141] libmachine: (flannel-381983) DBG | SSH cmd err, output: <nil>: 
	I0416 01:23:35.505519   75162 main.go:141] libmachine: (flannel-381983) KVM machine creation complete!
	I0416 01:23:35.505838   75162 main.go:141] libmachine: (flannel-381983) Calling .GetConfigRaw
	I0416 01:23:35.506315   75162 main.go:141] libmachine: (flannel-381983) Calling .DriverName
	I0416 01:23:35.506538   75162 main.go:141] libmachine: (flannel-381983) Calling .DriverName
	I0416 01:23:35.506781   75162 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0416 01:23:35.506798   75162 main.go:141] libmachine: (flannel-381983) Calling .GetState
	I0416 01:23:35.508165   75162 main.go:141] libmachine: Detecting operating system of created instance...
	I0416 01:23:35.508177   75162 main.go:141] libmachine: Waiting for SSH to be available...
	I0416 01:23:35.508183   75162 main.go:141] libmachine: Getting to WaitForSSH function...
	I0416 01:23:35.508188   75162 main.go:141] libmachine: (flannel-381983) Calling .GetSSHHostname
	I0416 01:23:35.510376   75162 main.go:141] libmachine: (flannel-381983) DBG | domain flannel-381983 has defined MAC address 52:54:00:ed:0e:ec in network mk-flannel-381983
	I0416 01:23:35.510797   75162 main.go:141] libmachine: (flannel-381983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:ec", ip: ""} in network mk-flannel-381983: {Iface:virbr4 ExpiryTime:2024-04-16 02:23:28 +0000 UTC Type:0 Mac:52:54:00:ed:0e:ec Iaid: IPaddr:192.168.50.155 Prefix:24 Hostname:flannel-381983 Clientid:01:52:54:00:ed:0e:ec}
	I0416 01:23:35.510830   75162 main.go:141] libmachine: (flannel-381983) DBG | domain flannel-381983 has defined IP address 192.168.50.155 and MAC address 52:54:00:ed:0e:ec in network mk-flannel-381983
	I0416 01:23:35.510921   75162 main.go:141] libmachine: (flannel-381983) Calling .GetSSHPort
	I0416 01:23:35.511094   75162 main.go:141] libmachine: (flannel-381983) Calling .GetSSHKeyPath
	I0416 01:23:35.511245   75162 main.go:141] libmachine: (flannel-381983) Calling .GetSSHKeyPath
	I0416 01:23:35.511435   75162 main.go:141] libmachine: (flannel-381983) Calling .GetSSHUsername
	I0416 01:23:35.511603   75162 main.go:141] libmachine: Using SSH client type: native
	I0416 01:23:35.511795   75162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.155 22 <nil> <nil>}
	I0416 01:23:35.511807   75162 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0416 01:23:35.616518   75162 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 01:23:35.616547   75162 main.go:141] libmachine: Detecting the provisioner...
	I0416 01:23:35.616557   75162 main.go:141] libmachine: (flannel-381983) Calling .GetSSHHostname
	I0416 01:23:35.619430   75162 main.go:141] libmachine: (flannel-381983) DBG | domain flannel-381983 has defined MAC address 52:54:00:ed:0e:ec in network mk-flannel-381983
	I0416 01:23:35.619857   75162 main.go:141] libmachine: (flannel-381983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:ec", ip: ""} in network mk-flannel-381983: {Iface:virbr4 ExpiryTime:2024-04-16 02:23:28 +0000 UTC Type:0 Mac:52:54:00:ed:0e:ec Iaid: IPaddr:192.168.50.155 Prefix:24 Hostname:flannel-381983 Clientid:01:52:54:00:ed:0e:ec}
	I0416 01:23:35.619912   75162 main.go:141] libmachine: (flannel-381983) DBG | domain flannel-381983 has defined IP address 192.168.50.155 and MAC address 52:54:00:ed:0e:ec in network mk-flannel-381983
	I0416 01:23:35.620041   75162 main.go:141] libmachine: (flannel-381983) Calling .GetSSHPort
	I0416 01:23:35.620264   75162 main.go:141] libmachine: (flannel-381983) Calling .GetSSHKeyPath
	I0416 01:23:35.620424   75162 main.go:141] libmachine: (flannel-381983) Calling .GetSSHKeyPath
	I0416 01:23:35.620585   75162 main.go:141] libmachine: (flannel-381983) Calling .GetSSHUsername
	I0416 01:23:35.620744   75162 main.go:141] libmachine: Using SSH client type: native
	I0416 01:23:35.620957   75162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.155 22 <nil> <nil>}
	I0416 01:23:35.620985   75162 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0416 01:23:35.726209   75162 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0416 01:23:35.726313   75162 main.go:141] libmachine: found compatible host: buildroot
	I0416 01:23:35.726326   75162 main.go:141] libmachine: Provisioning with buildroot...
	I0416 01:23:35.726339   75162 main.go:141] libmachine: (flannel-381983) Calling .GetMachineName
	I0416 01:23:35.726569   75162 buildroot.go:166] provisioning hostname "flannel-381983"
	I0416 01:23:35.726594   75162 main.go:141] libmachine: (flannel-381983) Calling .GetMachineName
	I0416 01:23:35.726782   75162 main.go:141] libmachine: (flannel-381983) Calling .GetSSHHostname
	I0416 01:23:35.729654   75162 main.go:141] libmachine: (flannel-381983) DBG | domain flannel-381983 has defined MAC address 52:54:00:ed:0e:ec in network mk-flannel-381983
	I0416 01:23:35.730060   75162 main.go:141] libmachine: (flannel-381983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:ec", ip: ""} in network mk-flannel-381983: {Iface:virbr4 ExpiryTime:2024-04-16 02:23:28 +0000 UTC Type:0 Mac:52:54:00:ed:0e:ec Iaid: IPaddr:192.168.50.155 Prefix:24 Hostname:flannel-381983 Clientid:01:52:54:00:ed:0e:ec}
	I0416 01:23:35.730089   75162 main.go:141] libmachine: (flannel-381983) DBG | domain flannel-381983 has defined IP address 192.168.50.155 and MAC address 52:54:00:ed:0e:ec in network mk-flannel-381983
	I0416 01:23:35.730190   75162 main.go:141] libmachine: (flannel-381983) Calling .GetSSHPort
	I0416 01:23:35.730393   75162 main.go:141] libmachine: (flannel-381983) Calling .GetSSHKeyPath
	I0416 01:23:35.730573   75162 main.go:141] libmachine: (flannel-381983) Calling .GetSSHKeyPath
	I0416 01:23:35.730706   75162 main.go:141] libmachine: (flannel-381983) Calling .GetSSHUsername
	I0416 01:23:35.730886   75162 main.go:141] libmachine: Using SSH client type: native
	I0416 01:23:35.731071   75162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.155 22 <nil> <nil>}
	I0416 01:23:35.731087   75162 main.go:141] libmachine: About to run SSH command:
	sudo hostname flannel-381983 && echo "flannel-381983" | sudo tee /etc/hostname
	I0416 01:23:35.849830   75162 main.go:141] libmachine: SSH cmd err, output: <nil>: flannel-381983
	
	I0416 01:23:35.849865   75162 main.go:141] libmachine: (flannel-381983) Calling .GetSSHHostname
	I0416 01:23:35.852741   75162 main.go:141] libmachine: (flannel-381983) DBG | domain flannel-381983 has defined MAC address 52:54:00:ed:0e:ec in network mk-flannel-381983
	I0416 01:23:35.853063   75162 main.go:141] libmachine: (flannel-381983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:ec", ip: ""} in network mk-flannel-381983: {Iface:virbr4 ExpiryTime:2024-04-16 02:23:28 +0000 UTC Type:0 Mac:52:54:00:ed:0e:ec Iaid: IPaddr:192.168.50.155 Prefix:24 Hostname:flannel-381983 Clientid:01:52:54:00:ed:0e:ec}
	I0416 01:23:35.853096   75162 main.go:141] libmachine: (flannel-381983) DBG | domain flannel-381983 has defined IP address 192.168.50.155 and MAC address 52:54:00:ed:0e:ec in network mk-flannel-381983
	I0416 01:23:35.853238   75162 main.go:141] libmachine: (flannel-381983) Calling .GetSSHPort
	I0416 01:23:35.853459   75162 main.go:141] libmachine: (flannel-381983) Calling .GetSSHKeyPath
	I0416 01:23:35.853635   75162 main.go:141] libmachine: (flannel-381983) Calling .GetSSHKeyPath
	I0416 01:23:35.853756   75162 main.go:141] libmachine: (flannel-381983) Calling .GetSSHUsername
	I0416 01:23:35.853967   75162 main.go:141] libmachine: Using SSH client type: native
	I0416 01:23:35.854175   75162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.155 22 <nil> <nil>}
	I0416 01:23:35.854193   75162 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sflannel-381983' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 flannel-381983/g' /etc/hosts;
				else 
					echo '127.0.1.1 flannel-381983' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 01:23:31.971843   72357 pod_ready.go:102] pod "coredns-76f75df574-54x8q" in "kube-system" namespace has status "Ready":"False"
	I0416 01:23:33.974123   72357 pod_ready.go:102] pod "coredns-76f75df574-54x8q" in "kube-system" namespace has status "Ready":"False"
	I0416 01:23:35.970993   75162 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 01:23:35.971037   75162 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18647-7542/.minikube CaCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18647-7542/.minikube}
	I0416 01:23:35.971070   75162 buildroot.go:174] setting up certificates
	I0416 01:23:35.971082   75162 provision.go:84] configureAuth start
	I0416 01:23:35.971098   75162 main.go:141] libmachine: (flannel-381983) Calling .GetMachineName
	I0416 01:23:35.971406   75162 main.go:141] libmachine: (flannel-381983) Calling .GetIP
	I0416 01:23:35.974733   75162 main.go:141] libmachine: (flannel-381983) DBG | domain flannel-381983 has defined MAC address 52:54:00:ed:0e:ec in network mk-flannel-381983
	I0416 01:23:35.975258   75162 main.go:141] libmachine: (flannel-381983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:ec", ip: ""} in network mk-flannel-381983: {Iface:virbr4 ExpiryTime:2024-04-16 02:23:28 +0000 UTC Type:0 Mac:52:54:00:ed:0e:ec Iaid: IPaddr:192.168.50.155 Prefix:24 Hostname:flannel-381983 Clientid:01:52:54:00:ed:0e:ec}
	I0416 01:23:35.975286   75162 main.go:141] libmachine: (flannel-381983) DBG | domain flannel-381983 has defined IP address 192.168.50.155 and MAC address 52:54:00:ed:0e:ec in network mk-flannel-381983
	I0416 01:23:35.975488   75162 main.go:141] libmachine: (flannel-381983) Calling .GetSSHHostname
	I0416 01:23:35.978180   75162 main.go:141] libmachine: (flannel-381983) DBG | domain flannel-381983 has defined MAC address 52:54:00:ed:0e:ec in network mk-flannel-381983
	I0416 01:23:35.978618   75162 main.go:141] libmachine: (flannel-381983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:ec", ip: ""} in network mk-flannel-381983: {Iface:virbr4 ExpiryTime:2024-04-16 02:23:28 +0000 UTC Type:0 Mac:52:54:00:ed:0e:ec Iaid: IPaddr:192.168.50.155 Prefix:24 Hostname:flannel-381983 Clientid:01:52:54:00:ed:0e:ec}
	I0416 01:23:35.978641   75162 main.go:141] libmachine: (flannel-381983) DBG | domain flannel-381983 has defined IP address 192.168.50.155 and MAC address 52:54:00:ed:0e:ec in network mk-flannel-381983
	I0416 01:23:35.978794   75162 provision.go:143] copyHostCerts
	I0416 01:23:35.978849   75162 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem, removing ...
	I0416 01:23:35.978872   75162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem
	I0416 01:23:35.978930   75162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem (1675 bytes)
	I0416 01:23:35.979036   75162 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem, removing ...
	I0416 01:23:35.979047   75162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem
	I0416 01:23:35.979095   75162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem (1082 bytes)
	I0416 01:23:35.979164   75162 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem, removing ...
	I0416 01:23:35.979174   75162 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem
	I0416 01:23:35.979202   75162 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem (1123 bytes)
	I0416 01:23:35.979265   75162 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem org=jenkins.flannel-381983 san=[127.0.0.1 192.168.50.155 flannel-381983 localhost minikube]
	I0416 01:23:36.050446   75162 provision.go:177] copyRemoteCerts
	I0416 01:23:36.050511   75162 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 01:23:36.050539   75162 main.go:141] libmachine: (flannel-381983) Calling .GetSSHHostname
	I0416 01:23:36.053502   75162 main.go:141] libmachine: (flannel-381983) DBG | domain flannel-381983 has defined MAC address 52:54:00:ed:0e:ec in network mk-flannel-381983
	I0416 01:23:36.053933   75162 main.go:141] libmachine: (flannel-381983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:ec", ip: ""} in network mk-flannel-381983: {Iface:virbr4 ExpiryTime:2024-04-16 02:23:28 +0000 UTC Type:0 Mac:52:54:00:ed:0e:ec Iaid: IPaddr:192.168.50.155 Prefix:24 Hostname:flannel-381983 Clientid:01:52:54:00:ed:0e:ec}
	I0416 01:23:36.053965   75162 main.go:141] libmachine: (flannel-381983) DBG | domain flannel-381983 has defined IP address 192.168.50.155 and MAC address 52:54:00:ed:0e:ec in network mk-flannel-381983
	I0416 01:23:36.054163   75162 main.go:141] libmachine: (flannel-381983) Calling .GetSSHPort
	I0416 01:23:36.054353   75162 main.go:141] libmachine: (flannel-381983) Calling .GetSSHKeyPath
	I0416 01:23:36.054533   75162 main.go:141] libmachine: (flannel-381983) Calling .GetSSHUsername
	I0416 01:23:36.054708   75162 sshutil.go:53] new ssh client: &{IP:192.168.50.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/flannel-381983/id_rsa Username:docker}
	I0416 01:23:36.137310   75162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0416 01:23:36.163015   75162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0416 01:23:36.191822   75162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
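	The server certificate generated during provisioning is signed for the SANs listed above (127.0.0.1, 192.168.50.155, flannel-381983, localhost, minikube) and copied to /etc/docker on the guest. A quick way to confirm the SANs on the provisioned machine (illustrative; not part of the test run):
		# Print the Subject Alternative Name extension of the provisioned server cert.
		openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'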
	I0416 01:23:36.217451   75162 provision.go:87] duration metric: took 246.353518ms to configureAuth
	I0416 01:23:36.217492   75162 buildroot.go:189] setting minikube options for container-runtime
	I0416 01:23:36.217676   75162 config.go:182] Loaded profile config "flannel-381983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 01:23:36.217757   75162 main.go:141] libmachine: (flannel-381983) Calling .GetSSHHostname
	I0416 01:23:36.220482   75162 main.go:141] libmachine: (flannel-381983) DBG | domain flannel-381983 has defined MAC address 52:54:00:ed:0e:ec in network mk-flannel-381983
	I0416 01:23:36.220801   75162 main.go:141] libmachine: (flannel-381983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:ec", ip: ""} in network mk-flannel-381983: {Iface:virbr4 ExpiryTime:2024-04-16 02:23:28 +0000 UTC Type:0 Mac:52:54:00:ed:0e:ec Iaid: IPaddr:192.168.50.155 Prefix:24 Hostname:flannel-381983 Clientid:01:52:54:00:ed:0e:ec}
	I0416 01:23:36.220840   75162 main.go:141] libmachine: (flannel-381983) DBG | domain flannel-381983 has defined IP address 192.168.50.155 and MAC address 52:54:00:ed:0e:ec in network mk-flannel-381983
	I0416 01:23:36.220975   75162 main.go:141] libmachine: (flannel-381983) Calling .GetSSHPort
	I0416 01:23:36.221186   75162 main.go:141] libmachine: (flannel-381983) Calling .GetSSHKeyPath
	I0416 01:23:36.221365   75162 main.go:141] libmachine: (flannel-381983) Calling .GetSSHKeyPath
	I0416 01:23:36.221516   75162 main.go:141] libmachine: (flannel-381983) Calling .GetSSHUsername
	I0416 01:23:36.221687   75162 main.go:141] libmachine: Using SSH client type: native
	I0416 01:23:36.221925   75162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.155 22 <nil> <nil>}
	I0416 01:23:36.221956   75162 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0416 01:23:36.507385   75162 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0416 01:23:36.507417   75162 main.go:141] libmachine: Checking connection to Docker...
	I0416 01:23:36.507427   75162 main.go:141] libmachine: (flannel-381983) Calling .GetURL
	I0416 01:23:36.508678   75162 main.go:141] libmachine: (flannel-381983) DBG | Using libvirt version 6000000
	I0416 01:23:36.510711   75162 main.go:141] libmachine: (flannel-381983) DBG | domain flannel-381983 has defined MAC address 52:54:00:ed:0e:ec in network mk-flannel-381983
	I0416 01:23:36.511031   75162 main.go:141] libmachine: (flannel-381983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:ec", ip: ""} in network mk-flannel-381983: {Iface:virbr4 ExpiryTime:2024-04-16 02:23:28 +0000 UTC Type:0 Mac:52:54:00:ed:0e:ec Iaid: IPaddr:192.168.50.155 Prefix:24 Hostname:flannel-381983 Clientid:01:52:54:00:ed:0e:ec}
	I0416 01:23:36.511071   75162 main.go:141] libmachine: (flannel-381983) DBG | domain flannel-381983 has defined IP address 192.168.50.155 and MAC address 52:54:00:ed:0e:ec in network mk-flannel-381983
	I0416 01:23:36.511196   75162 main.go:141] libmachine: Docker is up and running!
	I0416 01:23:36.511214   75162 main.go:141] libmachine: Reticulating splines...
	I0416 01:23:36.511222   75162 client.go:171] duration metric: took 24.880449299s to LocalClient.Create
	I0416 01:23:36.511246   75162 start.go:167] duration metric: took 24.880514003s to libmachine.API.Create "flannel-381983"
	I0416 01:23:36.511258   75162 start.go:293] postStartSetup for "flannel-381983" (driver="kvm2")
	I0416 01:23:36.511270   75162 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 01:23:36.511291   75162 main.go:141] libmachine: (flannel-381983) Calling .DriverName
	I0416 01:23:36.511498   75162 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 01:23:36.511529   75162 main.go:141] libmachine: (flannel-381983) Calling .GetSSHHostname
	I0416 01:23:36.513809   75162 main.go:141] libmachine: (flannel-381983) DBG | domain flannel-381983 has defined MAC address 52:54:00:ed:0e:ec in network mk-flannel-381983
	I0416 01:23:36.514119   75162 main.go:141] libmachine: (flannel-381983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:ec", ip: ""} in network mk-flannel-381983: {Iface:virbr4 ExpiryTime:2024-04-16 02:23:28 +0000 UTC Type:0 Mac:52:54:00:ed:0e:ec Iaid: IPaddr:192.168.50.155 Prefix:24 Hostname:flannel-381983 Clientid:01:52:54:00:ed:0e:ec}
	I0416 01:23:36.514147   75162 main.go:141] libmachine: (flannel-381983) DBG | domain flannel-381983 has defined IP address 192.168.50.155 and MAC address 52:54:00:ed:0e:ec in network mk-flannel-381983
	I0416 01:23:36.514256   75162 main.go:141] libmachine: (flannel-381983) Calling .GetSSHPort
	I0416 01:23:36.514452   75162 main.go:141] libmachine: (flannel-381983) Calling .GetSSHKeyPath
	I0416 01:23:36.514651   75162 main.go:141] libmachine: (flannel-381983) Calling .GetSSHUsername
	I0416 01:23:36.514794   75162 sshutil.go:53] new ssh client: &{IP:192.168.50.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/flannel-381983/id_rsa Username:docker}
	I0416 01:23:36.595527   75162 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 01:23:36.599835   75162 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 01:23:36.599857   75162 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/addons for local assets ...
	I0416 01:23:36.599908   75162 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/files for local assets ...
	I0416 01:23:36.599973   75162 filesync.go:149] local asset: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem -> 148972.pem in /etc/ssl/certs
	I0416 01:23:36.600053   75162 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 01:23:36.609812   75162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /etc/ssl/certs/148972.pem (1708 bytes)
	I0416 01:23:36.638472   75162 start.go:296] duration metric: took 127.201555ms for postStartSetup
	I0416 01:23:36.638533   75162 main.go:141] libmachine: (flannel-381983) Calling .GetConfigRaw
	I0416 01:23:36.639218   75162 main.go:141] libmachine: (flannel-381983) Calling .GetIP
	I0416 01:23:36.641753   75162 main.go:141] libmachine: (flannel-381983) DBG | domain flannel-381983 has defined MAC address 52:54:00:ed:0e:ec in network mk-flannel-381983
	I0416 01:23:36.642221   75162 main.go:141] libmachine: (flannel-381983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:ec", ip: ""} in network mk-flannel-381983: {Iface:virbr4 ExpiryTime:2024-04-16 02:23:28 +0000 UTC Type:0 Mac:52:54:00:ed:0e:ec Iaid: IPaddr:192.168.50.155 Prefix:24 Hostname:flannel-381983 Clientid:01:52:54:00:ed:0e:ec}
	I0416 01:23:36.642251   75162 main.go:141] libmachine: (flannel-381983) DBG | domain flannel-381983 has defined IP address 192.168.50.155 and MAC address 52:54:00:ed:0e:ec in network mk-flannel-381983
	I0416 01:23:36.642503   75162 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/flannel-381983/config.json ...
	I0416 01:23:36.642705   75162 start.go:128] duration metric: took 25.036503405s to createHost
	I0416 01:23:36.642732   75162 main.go:141] libmachine: (flannel-381983) Calling .GetSSHHostname
	I0416 01:23:36.644981   75162 main.go:141] libmachine: (flannel-381983) DBG | domain flannel-381983 has defined MAC address 52:54:00:ed:0e:ec in network mk-flannel-381983
	I0416 01:23:36.645369   75162 main.go:141] libmachine: (flannel-381983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:ec", ip: ""} in network mk-flannel-381983: {Iface:virbr4 ExpiryTime:2024-04-16 02:23:28 +0000 UTC Type:0 Mac:52:54:00:ed:0e:ec Iaid: IPaddr:192.168.50.155 Prefix:24 Hostname:flannel-381983 Clientid:01:52:54:00:ed:0e:ec}
	I0416 01:23:36.645399   75162 main.go:141] libmachine: (flannel-381983) DBG | domain flannel-381983 has defined IP address 192.168.50.155 and MAC address 52:54:00:ed:0e:ec in network mk-flannel-381983
	I0416 01:23:36.645589   75162 main.go:141] libmachine: (flannel-381983) Calling .GetSSHPort
	I0416 01:23:36.645756   75162 main.go:141] libmachine: (flannel-381983) Calling .GetSSHKeyPath
	I0416 01:23:36.645932   75162 main.go:141] libmachine: (flannel-381983) Calling .GetSSHKeyPath
	I0416 01:23:36.646099   75162 main.go:141] libmachine: (flannel-381983) Calling .GetSSHUsername
	I0416 01:23:36.646301   75162 main.go:141] libmachine: Using SSH client type: native
	I0416 01:23:36.646510   75162 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.155 22 <nil> <nil>}
	I0416 01:23:36.646521   75162 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 01:23:36.749944   75162 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713230616.728796844
	
	I0416 01:23:36.749979   75162 fix.go:216] guest clock: 1713230616.728796844
	I0416 01:23:36.749989   75162 fix.go:229] Guest: 2024-04-16 01:23:36.728796844 +0000 UTC Remote: 2024-04-16 01:23:36.642719782 +0000 UTC m=+25.788084168 (delta=86.077062ms)
	I0416 01:23:36.750026   75162 fix.go:200] guest clock delta is within tolerance: 86.077062ms
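	For reference, the tolerance check above is plain subtraction: guest clock 1713230616.728796844 minus the host-side timestamp 1713230616.642719782 gives the reported delta of about 86.077 ms, which is within minikube's clock-drift tolerance, so no time adjustment is forced on the guest.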
	I0416 01:23:36.750031   75162 start.go:83] releasing machines lock for "flannel-381983", held for 25.143970455s
	I0416 01:23:36.750057   75162 main.go:141] libmachine: (flannel-381983) Calling .DriverName
	I0416 01:23:36.750311   75162 main.go:141] libmachine: (flannel-381983) Calling .GetIP
	I0416 01:23:36.753062   75162 main.go:141] libmachine: (flannel-381983) DBG | domain flannel-381983 has defined MAC address 52:54:00:ed:0e:ec in network mk-flannel-381983
	I0416 01:23:36.753392   75162 main.go:141] libmachine: (flannel-381983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:ec", ip: ""} in network mk-flannel-381983: {Iface:virbr4 ExpiryTime:2024-04-16 02:23:28 +0000 UTC Type:0 Mac:52:54:00:ed:0e:ec Iaid: IPaddr:192.168.50.155 Prefix:24 Hostname:flannel-381983 Clientid:01:52:54:00:ed:0e:ec}
	I0416 01:23:36.753413   75162 main.go:141] libmachine: (flannel-381983) DBG | domain flannel-381983 has defined IP address 192.168.50.155 and MAC address 52:54:00:ed:0e:ec in network mk-flannel-381983
	I0416 01:23:36.753565   75162 main.go:141] libmachine: (flannel-381983) Calling .DriverName
	I0416 01:23:36.754183   75162 main.go:141] libmachine: (flannel-381983) Calling .DriverName
	I0416 01:23:36.754393   75162 main.go:141] libmachine: (flannel-381983) Calling .DriverName
	I0416 01:23:36.754486   75162 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 01:23:36.754519   75162 main.go:141] libmachine: (flannel-381983) Calling .GetSSHHostname
	I0416 01:23:36.754775   75162 ssh_runner.go:195] Run: cat /version.json
	I0416 01:23:36.754814   75162 main.go:141] libmachine: (flannel-381983) Calling .GetSSHHostname
	I0416 01:23:36.757318   75162 main.go:141] libmachine: (flannel-381983) DBG | domain flannel-381983 has defined MAC address 52:54:00:ed:0e:ec in network mk-flannel-381983
	I0416 01:23:36.757655   75162 main.go:141] libmachine: (flannel-381983) DBG | domain flannel-381983 has defined MAC address 52:54:00:ed:0e:ec in network mk-flannel-381983
	I0416 01:23:36.757690   75162 main.go:141] libmachine: (flannel-381983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:ec", ip: ""} in network mk-flannel-381983: {Iface:virbr4 ExpiryTime:2024-04-16 02:23:28 +0000 UTC Type:0 Mac:52:54:00:ed:0e:ec Iaid: IPaddr:192.168.50.155 Prefix:24 Hostname:flannel-381983 Clientid:01:52:54:00:ed:0e:ec}
	I0416 01:23:36.757710   75162 main.go:141] libmachine: (flannel-381983) DBG | domain flannel-381983 has defined IP address 192.168.50.155 and MAC address 52:54:00:ed:0e:ec in network mk-flannel-381983
	I0416 01:23:36.757848   75162 main.go:141] libmachine: (flannel-381983) Calling .GetSSHPort
	I0416 01:23:36.758038   75162 main.go:141] libmachine: (flannel-381983) Calling .GetSSHKeyPath
	I0416 01:23:36.758047   75162 main.go:141] libmachine: (flannel-381983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:ec", ip: ""} in network mk-flannel-381983: {Iface:virbr4 ExpiryTime:2024-04-16 02:23:28 +0000 UTC Type:0 Mac:52:54:00:ed:0e:ec Iaid: IPaddr:192.168.50.155 Prefix:24 Hostname:flannel-381983 Clientid:01:52:54:00:ed:0e:ec}
	I0416 01:23:36.758070   75162 main.go:141] libmachine: (flannel-381983) DBG | domain flannel-381983 has defined IP address 192.168.50.155 and MAC address 52:54:00:ed:0e:ec in network mk-flannel-381983
	I0416 01:23:36.758168   75162 main.go:141] libmachine: (flannel-381983) Calling .GetSSHUsername
	I0416 01:23:36.758257   75162 main.go:141] libmachine: (flannel-381983) Calling .GetSSHPort
	I0416 01:23:36.758336   75162 sshutil.go:53] new ssh client: &{IP:192.168.50.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/flannel-381983/id_rsa Username:docker}
	I0416 01:23:36.758418   75162 main.go:141] libmachine: (flannel-381983) Calling .GetSSHKeyPath
	I0416 01:23:36.758567   75162 main.go:141] libmachine: (flannel-381983) Calling .GetSSHUsername
	I0416 01:23:36.758709   75162 sshutil.go:53] new ssh client: &{IP:192.168.50.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/flannel-381983/id_rsa Username:docker}
	I0416 01:23:36.882205   75162 ssh_runner.go:195] Run: systemctl --version
	I0416 01:23:36.888924   75162 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0416 01:23:37.055049   75162 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 01:23:37.062403   75162 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 01:23:37.062478   75162 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 01:23:37.079819   75162 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 01:23:37.079847   75162 start.go:494] detecting cgroup driver to use...
	I0416 01:23:37.079940   75162 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 01:23:37.098722   75162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 01:23:37.114968   75162 docker.go:217] disabling cri-docker service (if available) ...
	I0416 01:23:37.115032   75162 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 01:23:37.130400   75162 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 01:23:37.149255   75162 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 01:23:37.269213   75162 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 01:23:37.411802   75162 docker.go:233] disabling docker service ...
	I0416 01:23:37.411869   75162 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 01:23:37.434708   75162 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 01:23:37.450115   75162 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 01:23:37.601408   75162 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 01:23:37.740462   75162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0416 01:23:37.755613   75162 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 01:23:37.774891   75162 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0416 01:23:37.774949   75162 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:23:37.786008   75162 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 01:23:37.786072   75162 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:23:37.796616   75162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:23:37.807394   75162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:23:37.818792   75162 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 01:23:37.830948   75162 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:23:37.842193   75162 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:23:37.860954   75162 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
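	The sed edits above pin the pause image, switch cri-o to the cgroupfs cgroup manager with conmon in the pod cgroup, and open unprivileged low ports via a default sysctl. Reconstructed from those commands (hypothetical; the actual file is not shown in the log), the /etc/crio/crio.conf.d/02-crio.conf drop-in would end up containing roughly:
		# Hypothetical reconstruction of the drop-in after the edits above.
		[crio.image]
		pause_image = "registry.k8s.io/pause:3.9"
		
		[crio.runtime]
		cgroup_manager = "cgroupfs"
		conmon_cgroup = "pod"
		default_sysctls = [
		  "net.ipv4.ip_unprivileged_port_start=0",
		]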
	I0416 01:23:37.871976   75162 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 01:23:37.882249   75162 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0416 01:23:37.882317   75162 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0416 01:23:37.896152   75162 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
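	The failed sysctl read, the modprobe, and the ip_forward write above establish the kernel prerequisites for bridged pod traffic: net.bridge.bridge-nf-call-iptables only exists once br_netfilter is loaded, and IPv4 forwarding must be enabled. The equivalent manual steps (illustrative):
		# Load the bridge netfilter module, enable forwarding, then re-check the sysctl.
		sudo modprobe br_netfilter
		echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
		sysctl net.bridge.bridge-nf-call-iptables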
	I0416 01:23:37.907638   75162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 01:23:38.033568   75162 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0416 01:23:38.183616   75162 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0416 01:23:38.183690   75162 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0416 01:23:38.189019   75162 start.go:562] Will wait 60s for crictl version
	I0416 01:23:38.189089   75162 ssh_runner.go:195] Run: which crictl
	I0416 01:23:38.193190   75162 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 01:23:38.241858   75162 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0416 01:23:38.241938   75162 ssh_runner.go:195] Run: crio --version
	I0416 01:23:38.273131   75162 ssh_runner.go:195] Run: crio --version
	I0416 01:23:38.304414   75162 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0416 01:23:34.402297   73348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:23:34.903063   73348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:23:35.402687   73348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:23:35.902488   73348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:23:36.403197   73348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:23:36.902785   73348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:23:37.402345   73348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:23:37.902635   73348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:23:38.403083   73348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:23:38.903116   73348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:23:38.305750   75162 main.go:141] libmachine: (flannel-381983) Calling .GetIP
	I0416 01:23:38.308237   75162 main.go:141] libmachine: (flannel-381983) DBG | domain flannel-381983 has defined MAC address 52:54:00:ed:0e:ec in network mk-flannel-381983
	I0416 01:23:38.308574   75162 main.go:141] libmachine: (flannel-381983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:0e:ec", ip: ""} in network mk-flannel-381983: {Iface:virbr4 ExpiryTime:2024-04-16 02:23:28 +0000 UTC Type:0 Mac:52:54:00:ed:0e:ec Iaid: IPaddr:192.168.50.155 Prefix:24 Hostname:flannel-381983 Clientid:01:52:54:00:ed:0e:ec}
	I0416 01:23:38.308594   75162 main.go:141] libmachine: (flannel-381983) DBG | domain flannel-381983 has defined IP address 192.168.50.155 and MAC address 52:54:00:ed:0e:ec in network mk-flannel-381983
	I0416 01:23:38.308823   75162 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0416 01:23:38.313319   75162 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
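The /etc/hosts update above is a grep -v / echo pipeline: any stale host.minikube.internal record is dropped and the gateway IP is appended. A hedged Go sketch of the same rewrite, run locally rather than over SSH (the helper name is made up for the example; path and values come from the log):

package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

// setHostRecord mirrors the pipeline from the log: drop any stale record for
// name, then append "ip<TAB>name". minikube performs this on the VM via SSH.
func setHostRecord(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // same effect as grep -v $'\t<name>$'
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := setHostRecord("/etc/hosts", "192.168.50.1", "host.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}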
	I0416 01:23:38.326934   75162 kubeadm.go:877] updating cluster {Name:flannel-381983 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29
.3 ClusterName:flannel-381983 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.50.155 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 01:23:38.327074   75162 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 01:23:38.327153   75162 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 01:23:38.373232   75162 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0416 01:23:38.373317   75162 ssh_runner.go:195] Run: which lz4
	I0416 01:23:38.378159   75162 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0416 01:23:38.382658   75162 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0416 01:23:38.382691   75162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0416 01:23:39.966109   75162 crio.go:462] duration metric: took 1.587978136s to copy over tarball
	I0416 01:23:39.966223   75162 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
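Because no preloaded images were found on the node, the ~400 MB preload tarball is copied to /preloaded.tar.lz4 and unpacked into /var with xattrs preserved, which places CRI-O's image store directly on disk. A sketch of the same extraction step, assuming tar and lz4 are installed on the target:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Mirrors the extraction command from the log; assumes the tarball has
	// already been copied to /preloaded.tar.lz4 and tar/lz4 are on PATH.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
}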
	I0416 01:23:36.473425   72357 pod_ready.go:102] pod "coredns-76f75df574-54x8q" in "kube-system" namespace has status "Ready":"False"
	I0416 01:23:38.978970   72357 pod_ready.go:102] pod "coredns-76f75df574-54x8q" in "kube-system" namespace has status "Ready":"False"
	I0416 01:23:39.402921   73348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:23:39.902329   73348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:23:40.403265   73348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:23:40.902352   73348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:23:41.403198   73348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:23:41.902854   73348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:23:42.402365   73348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:23:42.903246   73348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:23:43.402252   73348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:23:43.902539   73348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:23:44.402364   73348 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:23:45.393751   73348 kubeadm.go:1107] duration metric: took 14.333148038s to wait for elevateKubeSystemPrivileges
	W0416 01:23:45.393800   73348 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0416 01:23:45.393809   73348 kubeadm.go:393] duration metric: took 26.074727751s to StartCluster
	I0416 01:23:45.393828   73348 settings.go:142] acquiring lock: {Name:mk6e42a297b4f7bfb79727f203ae36d752cbb6a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:23:45.393914   73348 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0416 01:23:45.396460   73348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/kubeconfig: {Name:mkbb3b028de7d57df8335e83f6dfa1b0eacb2fb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:23:45.396725   73348 start.go:234] Will wait 15m0s for node &{Name: IP:192.168.72.44 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0416 01:23:45.398434   73348 out.go:177] * Verifying Kubernetes components...
	I0416 01:23:45.396831   73348 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0416 01:23:45.396848   73348 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0416 01:23:45.397071   73348 config.go:182] Loaded profile config "enable-default-cni-381983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 01:23:45.400052   73348 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 01:23:45.400087   73348 addons.go:69] Setting storage-provisioner=true in profile "enable-default-cni-381983"
	I0416 01:23:45.400133   73348 addons.go:234] Setting addon storage-provisioner=true in "enable-default-cni-381983"
	I0416 01:23:45.400169   73348 host.go:66] Checking if "enable-default-cni-381983" exists ...
	I0416 01:23:45.400176   73348 addons.go:69] Setting default-storageclass=true in profile "enable-default-cni-381983"
	I0416 01:23:45.400215   73348 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "enable-default-cni-381983"
	I0416 01:23:45.400617   73348 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:23:45.400643   73348 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:23:45.400652   73348 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:23:45.400670   73348 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:23:45.421290   73348 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43483
	I0416 01:23:45.421366   73348 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39373
	I0416 01:23:45.421861   73348 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:23:45.422222   73348 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:23:45.422376   73348 main.go:141] libmachine: Using API Version  1
	I0416 01:23:45.422392   73348 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:23:45.422844   73348 main.go:141] libmachine: Using API Version  1
	I0416 01:23:45.422862   73348 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:23:45.422935   73348 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:23:45.423116   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetState
	I0416 01:23:45.423192   73348 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:23:45.423746   73348 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:23:45.423787   73348 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:23:45.427714   73348 addons.go:234] Setting addon default-storageclass=true in "enable-default-cni-381983"
	I0416 01:23:45.427758   73348 host.go:66] Checking if "enable-default-cni-381983" exists ...
	I0416 01:23:45.428151   73348 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:23:45.428181   73348 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:23:45.441079   73348 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36121
	I0416 01:23:45.441678   73348 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:23:45.442177   73348 main.go:141] libmachine: Using API Version  1
	I0416 01:23:45.442204   73348 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:23:45.442522   73348 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:23:45.442677   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetState
	I0416 01:23:45.444518   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .DriverName
	I0416 01:23:45.448357   73348 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 01:23:42.603799   75162 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.637537913s)
	I0416 01:23:42.603822   75162 crio.go:469] duration metric: took 2.637680436s to extract the tarball
	I0416 01:23:42.603828   75162 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0416 01:23:42.644852   75162 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 01:23:42.688344   75162 crio.go:514] all images are preloaded for cri-o runtime.
	I0416 01:23:42.688368   75162 cache_images.go:84] Images are preloaded, skipping loading
	I0416 01:23:42.688375   75162 kubeadm.go:928] updating node { 192.168.50.155 8443 v1.29.3 crio true true} ...
	I0416 01:23:42.688462   75162 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=flannel-381983 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.155
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:flannel-381983 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel}
	I0416 01:23:42.688546   75162 ssh_runner.go:195] Run: crio config
	I0416 01:23:42.741754   75162 cni.go:84] Creating CNI manager for "flannel"
	I0416 01:23:42.741786   75162 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 01:23:42.741814   75162 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.155 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:flannel-381983 NodeName:flannel-381983 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.155"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.155 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 01:23:42.742004   75162 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.155
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "flannel-381983"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.155
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.155"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
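One thing worth checking in a dump like this: the generated kubeadm config pins podSubnet to 10.244.0.0/16, which is the CIDR the stock flannel manifest expects; a mismatch there is a common cause of flannel pods crash-looping. A small sketch that pulls podSubnet out of such a multi-document file, assuming gopkg.in/yaml.v3 and a local copy named kubeadm.yaml (both assumptions, not part of the test):

package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3" // assumed dependency for this sketch
)

func main() {
	// "kubeadm.yaml" is a hypothetical local copy of the multi-document
	// config dumped above (the test writes it to /var/tmp/minikube on the VM).
	f, err := os.Open("kubeadm.yaml")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		// The ClusterConfiguration document carries networking.podSubnet,
		// which should match the CIDR the flannel manifest is deployed with.
		if doc["kind"] == "ClusterConfiguration" {
			if net, ok := doc["networking"].(map[string]interface{}); ok {
				fmt.Println("podSubnet:", net["podSubnet"])
			}
		}
	}
}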
	
	I0416 01:23:42.742081   75162 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0416 01:23:42.753510   75162 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 01:23:42.753580   75162 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0416 01:23:42.763600   75162 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0416 01:23:42.781087   75162 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 01:23:42.801892   75162 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0416 01:23:42.821319   75162 ssh_runner.go:195] Run: grep 192.168.50.155	control-plane.minikube.internal$ /etc/hosts
	I0416 01:23:42.825511   75162 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.155	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 01:23:42.838583   75162 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 01:23:42.982349   75162 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 01:23:43.005569   75162 certs.go:68] Setting up /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/flannel-381983 for IP: 192.168.50.155
	I0416 01:23:43.005607   75162 certs.go:194] generating shared ca certs ...
	I0416 01:23:43.005627   75162 certs.go:226] acquiring lock for ca certs: {Name:mkcfa1570e683d94647c63485e1bbb8cf0788316 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:23:43.005798   75162 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key
	I0416 01:23:43.005858   75162 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key
	I0416 01:23:43.005872   75162 certs.go:256] generating profile certs ...
	I0416 01:23:43.005948   75162 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/flannel-381983/client.key
	I0416 01:23:43.005966   75162 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/flannel-381983/client.crt with IP's: []
	I0416 01:23:43.082872   75162 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/flannel-381983/client.crt ...
	I0416 01:23:43.082911   75162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/flannel-381983/client.crt: {Name:mkee1f97ab9363ca9ced1278d883e20d14ba6cb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:23:43.083128   75162 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/flannel-381983/client.key ...
	I0416 01:23:43.083159   75162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/flannel-381983/client.key: {Name:mk8824ab35ee0d6edc411b6268c6e953836c05a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:23:43.083276   75162 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/flannel-381983/apiserver.key.094b7daa
	I0416 01:23:43.083296   75162 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/flannel-381983/apiserver.crt.094b7daa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.155]
	I0416 01:23:43.233782   75162 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/flannel-381983/apiserver.crt.094b7daa ...
	I0416 01:23:43.233819   75162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/flannel-381983/apiserver.crt.094b7daa: {Name:mkbfb7cc81c3692f537271f4e90a49f5f2157022 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:23:43.234038   75162 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/flannel-381983/apiserver.key.094b7daa ...
	I0416 01:23:43.234061   75162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/flannel-381983/apiserver.key.094b7daa: {Name:mk7ba2a0f97049db1fb2d81584b94d47b92cfb27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:23:43.234176   75162 certs.go:381] copying /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/flannel-381983/apiserver.crt.094b7daa -> /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/flannel-381983/apiserver.crt
	I0416 01:23:43.234312   75162 certs.go:385] copying /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/flannel-381983/apiserver.key.094b7daa -> /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/flannel-381983/apiserver.key
	I0416 01:23:43.234402   75162 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/flannel-381983/proxy-client.key
	I0416 01:23:43.234422   75162 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/flannel-381983/proxy-client.crt with IP's: []
	I0416 01:23:43.317387   75162 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/flannel-381983/proxy-client.crt ...
	I0416 01:23:43.317414   75162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/flannel-381983/proxy-client.crt: {Name:mkf097815139192500913573e5a1c38db0f8998c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:23:43.317590   75162 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/flannel-381983/proxy-client.key ...
	I0416 01:23:43.317609   75162 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/flannel-381983/proxy-client.key: {Name:mkde1bbb39ef599a209b87032ebe8a55d369761b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
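The cert steps above generate a client cert, an apiserver serving cert for the SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.155], and an aggregator proxy-client cert, all signed by the shared minikubeCA. A compact sketch of the same idea with Go's crypto/x509: create a CA and sign a serving cert for those SAN IPs. minikube keeps its CA material on disk and uses RSA; this sketch uses in-memory ECDSA purely for brevity:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for minikubeCA.
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving cert for the apiserver, with the SAN IPs seen in the log.
	srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.50.155"),
		},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}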
	I0416 01:23:43.317829   75162 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem (1338 bytes)
	W0416 01:23:43.317872   75162 certs.go:480] ignoring /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897_empty.pem, impossibly tiny 0 bytes
	I0416 01:23:43.317893   75162 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem (1679 bytes)
	I0416 01:23:43.317924   75162 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem (1082 bytes)
	I0416 01:23:43.317974   75162 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem (1123 bytes)
	I0416 01:23:43.318005   75162 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem (1675 bytes)
	I0416 01:23:43.318061   75162 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem (1708 bytes)
	I0416 01:23:43.318642   75162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 01:23:43.355878   75162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 01:23:43.386166   75162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 01:23:43.417097   75162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0416 01:23:43.453009   75162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/flannel-381983/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0416 01:23:43.581968   75162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/flannel-381983/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0416 01:23:43.611985   75162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/flannel-381983/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 01:23:43.718421   75162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/flannel-381983/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0416 01:23:43.772866   75162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 01:23:43.822123   75162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem --> /usr/share/ca-certificates/14897.pem (1338 bytes)
	I0416 01:23:43.851575   75162 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /usr/share/ca-certificates/148972.pem (1708 bytes)
	I0416 01:23:43.880903   75162 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 01:23:43.899856   75162 ssh_runner.go:195] Run: openssl version
	I0416 01:23:43.906683   75162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148972.pem && ln -fs /usr/share/ca-certificates/148972.pem /etc/ssl/certs/148972.pem"
	I0416 01:23:43.918648   75162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148972.pem
	I0416 01:23:43.923811   75162 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 23:49 /usr/share/ca-certificates/148972.pem
	I0416 01:23:43.923862   75162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148972.pem
	I0416 01:23:43.930164   75162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148972.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 01:23:43.943899   75162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 01:23:43.956532   75162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:23:43.961746   75162 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:23:43.961805   75162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:23:43.968339   75162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 01:23:43.980861   75162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14897.pem && ln -fs /usr/share/ca-certificates/14897.pem /etc/ssl/certs/14897.pem"
	I0416 01:23:43.996814   75162 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14897.pem
	I0416 01:23:44.002208   75162 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 23:49 /usr/share/ca-certificates/14897.pem
	I0416 01:23:44.002258   75162 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14897.pem
	I0416 01:23:44.010150   75162 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14897.pem /etc/ssl/certs/51391683.0"
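The openssl/ln sequence above installs each CA bundle under /usr/share/ca-certificates and links it into /etc/ssl/certs under its OpenSSL subject hash (e.g. b5213941.0), which is how OpenSSL-based clients discover trusted CAs. A sketch of those two steps for a single PEM, shelling out to openssl for the hash as the log does (paths illustrative; root and openssl assumed):

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Illustrative path; the log repeats this for minikubeCA.pem and the
	// per-user cert bundles.
	src := "/usr/share/ca-certificates/minikubeCA.pem"

	// Same hash command as the log: openssl x509 -hash -noout -in <pem>.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", src).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)

	_ = os.Remove(link) // replace any stale link, like the ln -fs in the log
	if err := os.Symlink(src, link); err != nil {
		log.Fatal(err)
	}
}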
	I0416 01:23:44.022253   75162 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 01:23:44.026927   75162 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0416 01:23:44.026993   75162 kubeadm.go:391] StartCluster: {Name:flannel-381983 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3
ClusterName:flannel-381983 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP:192.168.50.155 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 01:23:44.027094   75162 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0416 01:23:44.027186   75162 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 01:23:44.072965   75162 cri.go:89] found id: ""
	I0416 01:23:44.073046   75162 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0416 01:23:44.084566   75162 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 01:23:44.096285   75162 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 01:23:44.108027   75162 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 01:23:44.108051   75162 kubeadm.go:156] found existing configuration files:
	
	I0416 01:23:44.108102   75162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 01:23:44.118598   75162 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 01:23:44.118680   75162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 01:23:44.130338   75162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 01:23:44.141548   75162 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 01:23:44.141616   75162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 01:23:44.153045   75162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 01:23:44.162651   75162 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 01:23:44.162726   75162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 01:23:44.173829   75162 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 01:23:44.184245   75162 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 01:23:44.184328   75162 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 01:23:44.194760   75162 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 01:23:44.405243   75162 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 01:23:41.472252   72357 pod_ready.go:102] pod "coredns-76f75df574-54x8q" in "kube-system" namespace has status "Ready":"False"
	I0416 01:23:43.572258   72357 pod_ready.go:102] pod "coredns-76f75df574-54x8q" in "kube-system" namespace has status "Ready":"False"
	I0416 01:23:45.473754   72357 pod_ready.go:92] pod "coredns-76f75df574-54x8q" in "kube-system" namespace has status "Ready":"True"
	I0416 01:23:45.473787   72357 pod_ready.go:81] duration metric: took 18.008742414s for pod "coredns-76f75df574-54x8q" in "kube-system" namespace to be "Ready" ...
	I0416 01:23:45.473803   72357 pod_ready.go:78] waiting up to 15m0s for pod "etcd-custom-flannel-381983" in "kube-system" namespace to be "Ready" ...
	I0416 01:23:45.487160   72357 pod_ready.go:92] pod "etcd-custom-flannel-381983" in "kube-system" namespace has status "Ready":"True"
	I0416 01:23:45.487182   72357 pod_ready.go:81] duration metric: took 13.368911ms for pod "etcd-custom-flannel-381983" in "kube-system" namespace to be "Ready" ...
	I0416 01:23:45.487191   72357 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-custom-flannel-381983" in "kube-system" namespace to be "Ready" ...
	I0416 01:23:45.495203   72357 pod_ready.go:92] pod "kube-apiserver-custom-flannel-381983" in "kube-system" namespace has status "Ready":"True"
	I0416 01:23:45.495236   72357 pod_ready.go:81] duration metric: took 8.03697ms for pod "kube-apiserver-custom-flannel-381983" in "kube-system" namespace to be "Ready" ...
	I0416 01:23:45.495249   72357 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-custom-flannel-381983" in "kube-system" namespace to be "Ready" ...
	I0416 01:23:45.500917   72357 pod_ready.go:92] pod "kube-controller-manager-custom-flannel-381983" in "kube-system" namespace has status "Ready":"True"
	I0416 01:23:45.500935   72357 pod_ready.go:81] duration metric: took 5.678199ms for pod "kube-controller-manager-custom-flannel-381983" in "kube-system" namespace to be "Ready" ...
	I0416 01:23:45.500943   72357 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-nbgnl" in "kube-system" namespace to be "Ready" ...
	I0416 01:23:45.587452   72357 pod_ready.go:92] pod "kube-proxy-nbgnl" in "kube-system" namespace has status "Ready":"True"
	I0416 01:23:45.587477   72357 pod_ready.go:81] duration metric: took 86.527614ms for pod "kube-proxy-nbgnl" in "kube-system" namespace to be "Ready" ...
	I0416 01:23:45.587487   72357 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-custom-flannel-381983" in "kube-system" namespace to be "Ready" ...
	I0416 01:23:45.987585   72357 pod_ready.go:92] pod "kube-scheduler-custom-flannel-381983" in "kube-system" namespace has status "Ready":"True"
	I0416 01:23:45.987667   72357 pod_ready.go:81] duration metric: took 400.171902ms for pod "kube-scheduler-custom-flannel-381983" in "kube-system" namespace to be "Ready" ...
	I0416 01:23:45.987688   72357 pod_ready.go:38] duration metric: took 18.533543194s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:23:45.987705   72357 api_server.go:52] waiting for apiserver process to appear ...
	I0416 01:23:45.987786   72357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:23:46.007794   72357 api_server.go:72] duration metric: took 28.146570823s to wait for apiserver process to appear ...
	I0416 01:23:46.007822   72357 api_server.go:88] waiting for apiserver healthz status ...
	I0416 01:23:46.007848   72357 api_server.go:253] Checking apiserver healthz at https://192.168.39.168:8443/healthz ...
	I0416 01:23:46.012972   72357 api_server.go:279] https://192.168.39.168:8443/healthz returned 200:
	ok
	I0416 01:23:46.014901   72357 api_server.go:141] control plane version: v1.29.3
	I0416 01:23:46.014927   72357 api_server.go:131] duration metric: took 7.097019ms to wait for apiserver health ...
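The healthz gate here is just an HTTPS GET against /healthz on the apiserver; a 200 with body "ok" is treated as healthy before the pod checks start. A minimal sketch of the same probe; certificate verification is skipped, which is only acceptable for a throwaway local check like this:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	// IP and port taken from the log line above.
	resp, err := client.Get("https://192.168.39.168:8443/healthz")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body)
}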
	I0416 01:23:46.014936   72357 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 01:23:46.191711   72357 system_pods.go:59] 7 kube-system pods found
	I0416 01:23:46.191743   72357 system_pods.go:61] "coredns-76f75df574-54x8q" [41c9136d-cabd-4635-b956-b90e4091f157] Running
	I0416 01:23:46.191750   72357 system_pods.go:61] "etcd-custom-flannel-381983" [060a9da4-c67c-40a4-bb89-afdfed0fc16e] Running
	I0416 01:23:46.191756   72357 system_pods.go:61] "kube-apiserver-custom-flannel-381983" [df4f8894-be54-4f84-b620-f640b235d084] Running
	I0416 01:23:46.191760   72357 system_pods.go:61] "kube-controller-manager-custom-flannel-381983" [b9a6feac-3787-405b-b619-a841e9d1c73d] Running
	I0416 01:23:46.191764   72357 system_pods.go:61] "kube-proxy-nbgnl" [7011f7b3-a00c-4a3a-b525-75d72e2bbcb8] Running
	I0416 01:23:46.191769   72357 system_pods.go:61] "kube-scheduler-custom-flannel-381983" [793d194a-2a32-43cf-a17a-b05f2e7eea0b] Running
	I0416 01:23:46.191774   72357 system_pods.go:61] "storage-provisioner" [aee94e7e-3e09-41bb-903a-66aa2e1e65d0] Running
	I0416 01:23:46.191781   72357 system_pods.go:74] duration metric: took 176.839013ms to wait for pod list to return data ...
	I0416 01:23:46.191798   72357 default_sa.go:34] waiting for default service account to be created ...
	I0416 01:23:46.387440   72357 default_sa.go:45] found service account: "default"
	I0416 01:23:46.387472   72357 default_sa.go:55] duration metric: took 195.665851ms for default service account to be created ...
	I0416 01:23:46.387490   72357 system_pods.go:116] waiting for k8s-apps to be running ...
	I0416 01:23:46.590999   72357 system_pods.go:86] 7 kube-system pods found
	I0416 01:23:46.591039   72357 system_pods.go:89] "coredns-76f75df574-54x8q" [41c9136d-cabd-4635-b956-b90e4091f157] Running
	I0416 01:23:46.591047   72357 system_pods.go:89] "etcd-custom-flannel-381983" [060a9da4-c67c-40a4-bb89-afdfed0fc16e] Running
	I0416 01:23:46.591053   72357 system_pods.go:89] "kube-apiserver-custom-flannel-381983" [df4f8894-be54-4f84-b620-f640b235d084] Running
	I0416 01:23:46.591058   72357 system_pods.go:89] "kube-controller-manager-custom-flannel-381983" [b9a6feac-3787-405b-b619-a841e9d1c73d] Running
	I0416 01:23:46.591064   72357 system_pods.go:89] "kube-proxy-nbgnl" [7011f7b3-a00c-4a3a-b525-75d72e2bbcb8] Running
	I0416 01:23:46.591069   72357 system_pods.go:89] "kube-scheduler-custom-flannel-381983" [793d194a-2a32-43cf-a17a-b05f2e7eea0b] Running
	I0416 01:23:46.591074   72357 system_pods.go:89] "storage-provisioner" [aee94e7e-3e09-41bb-903a-66aa2e1e65d0] Running
	I0416 01:23:46.591084   72357 system_pods.go:126] duration metric: took 203.585598ms to wait for k8s-apps to be running ...
	I0416 01:23:46.591102   72357 system_svc.go:44] waiting for kubelet service to be running ....
	I0416 01:23:46.591152   72357 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 01:23:46.610927   72357 system_svc.go:56] duration metric: took 19.815752ms WaitForService to wait for kubelet
	I0416 01:23:46.610959   72357 kubeadm.go:576] duration metric: took 28.74973909s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 01:23:46.610982   72357 node_conditions.go:102] verifying NodePressure condition ...
	I0416 01:23:46.788351   72357 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 01:23:46.788379   72357 node_conditions.go:123] node cpu capacity is 2
	I0416 01:23:46.788390   72357 node_conditions.go:105] duration metric: took 177.402971ms to run NodePressure ...
	I0416 01:23:46.788400   72357 start.go:240] waiting for startup goroutines ...
	I0416 01:23:46.788407   72357 start.go:245] waiting for cluster config update ...
	I0416 01:23:46.788415   72357 start.go:254] writing updated cluster config ...
	I0416 01:23:46.788647   72357 ssh_runner.go:195] Run: rm -f paused
	I0416 01:23:46.846320   72357 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0416 01:23:46.871657   72357 out.go:177] * Done! kubectl is now configured to use "custom-flannel-381983" cluster and "default" namespace by default
	I0416 01:23:45.449226   73348 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33311
	I0416 01:23:45.449822   73348 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 01:23:45.449838   73348 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0416 01:23:45.449857   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetSSHHostname
	I0416 01:23:45.450510   73348 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:23:45.451021   73348 main.go:141] libmachine: Using API Version  1
	I0416 01:23:45.451045   73348 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:23:45.451519   73348 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:23:45.452124   73348 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:23:45.452161   73348 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:23:45.453647   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | domain enable-default-cni-381983 has defined MAC address 52:54:00:d1:3e:71 in network mk-enable-default-cni-381983
	I0416 01:23:45.454261   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:3e:71", ip: ""} in network mk-enable-default-cni-381983: {Iface:virbr3 ExpiryTime:2024-04-16 02:23:02 +0000 UTC Type:0 Mac:52:54:00:d1:3e:71 Iaid: IPaddr:192.168.72.44 Prefix:24 Hostname:enable-default-cni-381983 Clientid:01:52:54:00:d1:3e:71}
	I0416 01:23:45.454305   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | domain enable-default-cni-381983 has defined IP address 192.168.72.44 and MAC address 52:54:00:d1:3e:71 in network mk-enable-default-cni-381983
	I0416 01:23:45.454358   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetSSHPort
	I0416 01:23:45.454553   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetSSHKeyPath
	I0416 01:23:45.454708   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetSSHUsername
	I0416 01:23:45.454840   73348 sshutil.go:53] new ssh client: &{IP:192.168.72.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/enable-default-cni-381983/id_rsa Username:docker}
	I0416 01:23:45.469271   73348 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37877
	I0416 01:23:45.469723   73348 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:23:45.470231   73348 main.go:141] libmachine: Using API Version  1
	I0416 01:23:45.470251   73348 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:23:45.470588   73348 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:23:45.470706   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetState
	I0416 01:23:45.472478   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .DriverName
	I0416 01:23:45.472785   73348 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0416 01:23:45.472806   73348 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0416 01:23:45.472831   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetSSHHostname
	I0416 01:23:45.475697   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | domain enable-default-cni-381983 has defined MAC address 52:54:00:d1:3e:71 in network mk-enable-default-cni-381983
	I0416 01:23:45.476224   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:3e:71", ip: ""} in network mk-enable-default-cni-381983: {Iface:virbr3 ExpiryTime:2024-04-16 02:23:02 +0000 UTC Type:0 Mac:52:54:00:d1:3e:71 Iaid: IPaddr:192.168.72.44 Prefix:24 Hostname:enable-default-cni-381983 Clientid:01:52:54:00:d1:3e:71}
	I0416 01:23:45.476343   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | domain enable-default-cni-381983 has defined IP address 192.168.72.44 and MAC address 52:54:00:d1:3e:71 in network mk-enable-default-cni-381983
	I0416 01:23:45.476698   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetSSHPort
	I0416 01:23:45.476886   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetSSHKeyPath
	I0416 01:23:45.477055   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .GetSSHUsername
	I0416 01:23:45.477356   73348 sshutil.go:53] new ssh client: &{IP:192.168.72.44 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/enable-default-cni-381983/id_rsa Username:docker}
	I0416 01:23:45.752346   73348 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 01:23:45.752488   73348 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0416 01:23:45.796984   73348 node_ready.go:35] waiting up to 15m0s for node "enable-default-cni-381983" to be "Ready" ...
	I0416 01:23:45.855688   73348 node_ready.go:49] node "enable-default-cni-381983" has status "Ready":"True"
	I0416 01:23:45.855717   73348 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0416 01:23:45.855720   73348 node_ready.go:38] duration metric: took 58.706709ms for node "enable-default-cni-381983" to be "Ready" ...
	I0416 01:23:45.855732   73348 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:23:45.871834   73348 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 01:23:45.904175   73348 pod_ready.go:78] waiting up to 15m0s for pod "coredns-76f75df574-5bwng" in "kube-system" namespace to be "Ready" ...
	I0416 01:23:46.323286   73348 start.go:946] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0416 01:23:46.323328   73348 main.go:141] libmachine: Making call to close driver server
	I0416 01:23:46.323352   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .Close
	I0416 01:23:46.323664   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | Closing plugin on server side
	I0416 01:23:46.323702   73348 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:23:46.323718   73348 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:23:46.323731   73348 main.go:141] libmachine: Making call to close driver server
	I0416 01:23:46.323758   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .Close
	I0416 01:23:46.323993   73348 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:23:46.324015   73348 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:23:46.343925   73348 main.go:141] libmachine: Making call to close driver server
	I0416 01:23:46.343949   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .Close
	I0416 01:23:46.344212   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | Closing plugin on server side
	I0416 01:23:46.344253   73348 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:23:46.344269   73348 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:23:46.874491   73348 kapi.go:248] "coredns" deployment in "kube-system" namespace and "enable-default-cni-381983" context rescaled to 1 replicas
	I0416 01:23:46.878796   73348 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.006925614s)
	I0416 01:23:46.878850   73348 main.go:141] libmachine: Making call to close driver server
	I0416 01:23:46.878863   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .Close
	I0416 01:23:46.879148   73348 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:23:46.879168   73348 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:23:46.879171   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | Closing plugin on server side
	I0416 01:23:46.879178   73348 main.go:141] libmachine: Making call to close driver server
	I0416 01:23:46.879191   73348 main.go:141] libmachine: (enable-default-cni-381983) Calling .Close
	I0416 01:23:46.879540   73348 main.go:141] libmachine: (enable-default-cni-381983) DBG | Closing plugin on server side
	I0416 01:23:46.879566   73348 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:23:46.879576   73348 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:23:46.975307   73348 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0416 01:23:47.039322   73348 addons.go:505] duration metric: took 1.642458905s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0416 01:23:47.911415   73348 pod_ready.go:102] pod "coredns-76f75df574-5bwng" in "kube-system" namespace has status "Ready":"False"
	I0416 01:23:48.407640   73348 pod_ready.go:97] error getting pod "coredns-76f75df574-5bwng" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-5bwng" not found
	I0416 01:23:48.407668   73348 pod_ready.go:81] duration metric: took 2.503460117s for pod "coredns-76f75df574-5bwng" in "kube-system" namespace to be "Ready" ...
	E0416 01:23:48.407680   73348 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-76f75df574-5bwng" in "kube-system" namespace (skipping!): pods "coredns-76f75df574-5bwng" not found
	I0416 01:23:48.407689   73348 pod_ready.go:78] waiting up to 15m0s for pod "coredns-76f75df574-gmfkl" in "kube-system" namespace to be "Ready" ...
	I0416 01:23:48.413400   73348 pod_ready.go:92] pod "coredns-76f75df574-gmfkl" in "kube-system" namespace has status "Ready":"True"
	I0416 01:23:48.413434   73348 pod_ready.go:81] duration metric: took 5.736328ms for pod "coredns-76f75df574-gmfkl" in "kube-system" namespace to be "Ready" ...
	I0416 01:23:48.413447   73348 pod_ready.go:78] waiting up to 15m0s for pod "etcd-enable-default-cni-381983" in "kube-system" namespace to be "Ready" ...
	I0416 01:23:48.420058   73348 pod_ready.go:92] pod "etcd-enable-default-cni-381983" in "kube-system" namespace has status "Ready":"True"
	I0416 01:23:48.420082   73348 pod_ready.go:81] duration metric: took 6.627274ms for pod "etcd-enable-default-cni-381983" in "kube-system" namespace to be "Ready" ...
	I0416 01:23:48.420093   73348 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-enable-default-cni-381983" in "kube-system" namespace to be "Ready" ...
	I0416 01:23:48.426067   73348 pod_ready.go:92] pod "kube-apiserver-enable-default-cni-381983" in "kube-system" namespace has status "Ready":"True"
	I0416 01:23:48.426087   73348 pod_ready.go:81] duration metric: took 5.987592ms for pod "kube-apiserver-enable-default-cni-381983" in "kube-system" namespace to be "Ready" ...
	I0416 01:23:48.426094   73348 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-enable-default-cni-381983" in "kube-system" namespace to be "Ready" ...
	I0416 01:23:48.431607   73348 pod_ready.go:92] pod "kube-controller-manager-enable-default-cni-381983" in "kube-system" namespace has status "Ready":"True"
	I0416 01:23:48.431633   73348 pod_ready.go:81] duration metric: took 5.531497ms for pod "kube-controller-manager-enable-default-cni-381983" in "kube-system" namespace to be "Ready" ...
	I0416 01:23:48.431645   73348 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-5k6xp" in "kube-system" namespace to be "Ready" ...
	I0416 01:23:48.610356   73348 pod_ready.go:92] pod "kube-proxy-5k6xp" in "kube-system" namespace has status "Ready":"True"
	I0416 01:23:48.610385   73348 pod_ready.go:81] duration metric: took 178.731547ms for pod "kube-proxy-5k6xp" in "kube-system" namespace to be "Ready" ...
	I0416 01:23:48.610397   73348 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-enable-default-cni-381983" in "kube-system" namespace to be "Ready" ...
	I0416 01:23:49.007968   73348 pod_ready.go:92] pod "kube-scheduler-enable-default-cni-381983" in "kube-system" namespace has status "Ready":"True"
	I0416 01:23:49.007994   73348 pod_ready.go:81] duration metric: took 397.588611ms for pod "kube-scheduler-enable-default-cni-381983" in "kube-system" namespace to be "Ready" ...
	I0416 01:23:49.008002   73348 pod_ready.go:38] duration metric: took 3.152258285s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
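
	[editor's note] The pod_ready waits recorded above poll each kube-system pod until its Ready condition turns true. A minimal client-go sketch of that kind of check is shown below; it is not minikube's own code, and the kubeconfig path and pod name are placeholders taken from this log.

	// Sketch: poll a pod's Ready condition with client-go (placeholder names).
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		const ns, name = "kube-system", "coredns-76f75df574-gmfkl"
		deadline := time.Now().Add(15 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						fmt.Printf("pod %q is Ready\n", name)
						return
					}
				}
			}
			time.Sleep(2 * time.Second) // poll interval; minikube's actual cadence may differ
		}
		fmt.Printf("timed out waiting for pod %q to be Ready\n", name)
	}
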
	I0416 01:23:49.008016   73348 api_server.go:52] waiting for apiserver process to appear ...
	I0416 01:23:49.008078   73348 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:23:49.024133   73348 api_server.go:72] duration metric: took 3.627367836s to wait for apiserver process to appear ...
	I0416 01:23:49.024162   73348 api_server.go:88] waiting for apiserver healthz status ...
	I0416 01:23:49.024183   73348 api_server.go:253] Checking apiserver healthz at https://192.168.72.44:8443/healthz ...
	I0416 01:23:49.031444   73348 api_server.go:279] https://192.168.72.44:8443/healthz returned 200:
	ok
	I0416 01:23:49.034370   73348 api_server.go:141] control plane version: v1.29.3
	I0416 01:23:49.034394   73348 api_server.go:131] duration metric: took 10.225736ms to wait for apiserver health ...
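
	[editor's note] The healthz wait above issues an HTTPS GET against the apiserver's /healthz endpoint and treats a 200 response with body "ok" as healthy. Below is a minimal sketch of such a probe; TLS verification is skipped for brevity, whereas the real check authenticates with the cluster CA and client certificates.

	// Sketch: probe the apiserver /healthz endpoint seen in this log.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Insecure only for illustration; do not do this in production.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.72.44:8443/healthz")
		if err != nil {
			fmt.Println("healthz not reachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	}
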
	I0416 01:23:49.034401   73348 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 01:23:49.223624   73348 system_pods.go:59] 7 kube-system pods found
	I0416 01:23:49.223676   73348 system_pods.go:61] "coredns-76f75df574-gmfkl" [5748f138-defe-4dde-847d-9cd4f16d76b2] Running
	I0416 01:23:49.223686   73348 system_pods.go:61] "etcd-enable-default-cni-381983" [0b1dc0b8-a0e6-47ad-939c-2ee13daffe39] Running
	I0416 01:23:49.223693   73348 system_pods.go:61] "kube-apiserver-enable-default-cni-381983" [2217702b-07ce-42e2-ba5b-ee2223e15f3e] Running
	I0416 01:23:49.223699   73348 system_pods.go:61] "kube-controller-manager-enable-default-cni-381983" [0b6ba52c-c667-4448-aecb-c4b42ec22e7d] Running
	I0416 01:23:49.223704   73348 system_pods.go:61] "kube-proxy-5k6xp" [3922d465-8e60-49af-870a-8f0e3c11a198] Running
	I0416 01:23:49.223709   73348 system_pods.go:61] "kube-scheduler-enable-default-cni-381983" [caa40208-9f08-4685-8c0f-7a837fd7eb63] Running
	I0416 01:23:49.223714   73348 system_pods.go:61] "storage-provisioner" [2164f519-7391-4ff3-ae52-1e0fc4918091] Running
	I0416 01:23:49.223721   73348 system_pods.go:74] duration metric: took 189.314806ms to wait for pod list to return data ...
	I0416 01:23:49.223731   73348 default_sa.go:34] waiting for default service account to be created ...
	I0416 01:23:49.408831   73348 default_sa.go:45] found service account: "default"
	I0416 01:23:49.408858   73348 default_sa.go:55] duration metric: took 185.121274ms for default service account to be created ...
	I0416 01:23:49.408867   73348 system_pods.go:116] waiting for k8s-apps to be running ...
	I0416 01:23:49.614487   73348 system_pods.go:86] 7 kube-system pods found
	I0416 01:23:49.614517   73348 system_pods.go:89] "coredns-76f75df574-gmfkl" [5748f138-defe-4dde-847d-9cd4f16d76b2] Running
	I0416 01:23:49.614525   73348 system_pods.go:89] "etcd-enable-default-cni-381983" [0b1dc0b8-a0e6-47ad-939c-2ee13daffe39] Running
	I0416 01:23:49.614533   73348 system_pods.go:89] "kube-apiserver-enable-default-cni-381983" [2217702b-07ce-42e2-ba5b-ee2223e15f3e] Running
	I0416 01:23:49.614539   73348 system_pods.go:89] "kube-controller-manager-enable-default-cni-381983" [0b6ba52c-c667-4448-aecb-c4b42ec22e7d] Running
	I0416 01:23:49.614545   73348 system_pods.go:89] "kube-proxy-5k6xp" [3922d465-8e60-49af-870a-8f0e3c11a198] Running
	I0416 01:23:49.614555   73348 system_pods.go:89] "kube-scheduler-enable-default-cni-381983" [caa40208-9f08-4685-8c0f-7a837fd7eb63] Running
	I0416 01:23:49.614567   73348 system_pods.go:89] "storage-provisioner" [2164f519-7391-4ff3-ae52-1e0fc4918091] Running
	I0416 01:23:49.614577   73348 system_pods.go:126] duration metric: took 205.70349ms to wait for k8s-apps to be running ...
	I0416 01:23:49.614585   73348 system_svc.go:44] waiting for kubelet service to be running ....
	I0416 01:23:49.614633   73348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 01:23:49.657838   73348 system_svc.go:56] duration metric: took 43.243733ms WaitForService to wait for kubelet
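
	[editor's note] The kubelet check above runs `systemctl is-active --quiet` over SSH inside the guest and relies only on the exit code. A local sketch of the same idea, assuming systemd and sudo are available on the host running it:

	// Sketch: treat exit code 0 from `systemctl is-active --quiet kubelet` as "running".
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		if err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
			fmt.Println("kubelet service is not active:", err)
			return
		}
		fmt.Println("kubelet service is active")
	}
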
	I0416 01:23:49.657872   73348 kubeadm.go:576] duration metric: took 4.261109996s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 01:23:49.657896   73348 node_conditions.go:102] verifying NodePressure condition ...
	I0416 01:23:49.809043   73348 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 01:23:49.809075   73348 node_conditions.go:123] node cpu capacity is 2
	I0416 01:23:49.809093   73348 node_conditions.go:105] duration metric: took 151.18947ms to run NodePressure ...
	I0416 01:23:49.809107   73348 start.go:240] waiting for startup goroutines ...
	I0416 01:23:49.809114   73348 start.go:245] waiting for cluster config update ...
	I0416 01:23:49.809123   73348 start.go:254] writing updated cluster config ...
	I0416 01:23:49.809386   73348 ssh_runner.go:195] Run: rm -f paused
	I0416 01:23:49.891238   73348 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0416 01:23:49.893321   73348 out.go:177] * Done! kubectl is now configured to use "enable-default-cni-381983" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Apr 16 01:23:50 embed-certs-617092 crio[729]: time="2024-04-16 01:23:50.654379428Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8300dd6c-872b-45c3-981d-85c8e86e3340 name=/runtime.v1.RuntimeService/Version
	Apr 16 01:23:50 embed-certs-617092 crio[729]: time="2024-04-16 01:23:50.655933408Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5f955599-8741-4707-8f45-c38be30927c6 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:23:50 embed-certs-617092 crio[729]: time="2024-04-16 01:23:50.656562527Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713230630656538857,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5f955599-8741-4707-8f45-c38be30927c6 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:23:50 embed-certs-617092 crio[729]: time="2024-04-16 01:23:50.657445796Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f2207c79-e20d-454f-a69e-40caddadb9ba name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:23:50 embed-certs-617092 crio[729]: time="2024-04-16 01:23:50.657580205Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f2207c79-e20d-454f-a69e-40caddadb9ba name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:23:50 embed-certs-617092 crio[729]: time="2024-04-16 01:23:50.657831880Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e4572e9cdc29a024c0d665ea3122bbf4ee193ce62bec3d6fca7a84a3da8eea5d,PodSandboxId:ca0de572a57af289b5353e09c08fe81ebce442abf756b5868f360761282894ff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713229543272787433,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a62c0f7-0b15-48f3-9c17-d5966d39fbd5,},Annotations:map[string]string{io.kubernetes.container.hash: f0fa9704,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f74a3cc406377588df21cbb51972b33a94ceab9b1b709bf57ff2d56c7d603bc0,PodSandboxId:50a34e4f6bf31ec69fcc63e1c7df992f38dce685025056abd39be373db88db27,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713229543173787577,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p4rh9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42041028-d085-4ec4-8213-da3af0d5290e,},Annotations:map[string]string{io.kubernetes.container.hash: 895cdccf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaab3d6a27de7dc9412e8a6f0abc5e141962215eedfcf7197ee943e73841af99,PodSandboxId:e018f904305989f1ff1d93597179b4da15f71f64549bbfeba8a039ff003c7256,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713229542630118009,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-2q58l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9b9d000-738b-4110-8757-17f76197285c,},Annotations:map[string]string{io.kubernetes.container.hash: 53bd1eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff8ea56f2c871eedf837c4dfbcf307e3cfefa4bab9fffb2613a7f0bad633a078,PodSandboxId:dba64b21492824747924f5f579e60784f329ad8168e49a1cc96794b5714a926f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713229542484974548,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-h8k4k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b114848-1137-4215-a966-03db39e4de2
3,},Annotations:map[string]string{io.kubernetes.container.hash: 497ba5dc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f819e84c3f08a4c2272bb2247c65424f82f07babbba6f7fd68910737a5953cf,PodSandboxId:9843f30af44e8e3a5fa255d77f9a935203e21b0222351e47e9d585dc2502b2b8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713229522703278815,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-617092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3908ca4b3abece521c999b86b56464ea,},Annotations:map[string]string{io.kubernetes.container.hash: 696a4a67,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6569527c88ca91f8d9ea6edee36c9ea96e78bc7d69793f16c09d414891161c2d,PodSandboxId:545a1d31fa95190f89c49c1ae83662055277e377003dad0deee6494cb4f5197b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713229522682071161,Labels:map[string]string{io.kubernetes.con
tainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-617092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5fd98642218f1bf3a202b613a4a213c,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51303ba689e3975c0dd24bfae353857af5481682a7b169f7619c6551935453c9,PodSandboxId:33f94eee1c600853c87144dfa6c4bc116904080ef784c545280202335d9b5391,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713229522606096489,Labels:map[string]string{io.kubernet
es.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-617092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01546522de12db89593554cc2fff4a64,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4e66ec3e722b10292730106f83fdc1422f913525d80890e068ee2fcb28cb206,PodSandboxId:4fe0b00952f395f31444782dd4149d96c803fb591738f75e76802e150bf0acd4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713229522577254165,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-617092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 856b93b30da13b56956630be0aa3ea75,},Annotations:map[string]string{io.kubernetes.container.hash: e92b43ac,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b2cd3bd95b7310f36c549766f75579e6906e2f74ed2283dadddd6e622ac6952,PodSandboxId:2f0e6d5deddfe9330efaaeb1ffd8fcdd0691c1c1cc1ff3bc7f4501fb6edc7fd9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1713229229301969967,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-617092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 856b93b30da13b56956630be0aa3ea75,},Annotations:map[string]string{io.kubernetes.container.hash: e92b43ac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f2207c79-e20d-454f-a69e-40caddadb9ba name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:23:50 embed-certs-617092 crio[729]: time="2024-04-16 01:23:50.671868885Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=1f91371d-a657-48e9-a8fa-eec1f34efc5d name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 16 01:23:50 embed-certs-617092 crio[729]: time="2024-04-16 01:23:50.672202907Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:206921006b08f73bf864c9888c47e150a70986d5dc3fc428980969085cbe9431,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-j5clp,Uid:99808b2d-344f-43b7-a29c-01f0a2026aa8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713229543189028214,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-j5clp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 99808b2d-344f-43b7-a29c-01f0a2026aa8,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-16T01:05:42.878672289Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ca0de572a57af289b5353e09c08fe81ebce442abf756b5868f360761282894ff,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:5a62c0f7-0b15-48f3-9c17-d5966d39fbd5,N
amespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713229543028640644,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a62c0f7-0b15-48f3-9c17-d5966d39fbd5,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"vol
umes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-04-16T01:05:42.715092952Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:50a34e4f6bf31ec69fcc63e1c7df992f38dce685025056abd39be373db88db27,Metadata:&PodSandboxMetadata{Name:kube-proxy-p4rh9,Uid:42041028-d085-4ec4-8213-da3af0d5290e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713229542787652484,Labels:map[string]string{controller-revision-hash: 7659797656,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-p4rh9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42041028-d085-4ec4-8213-da3af0d5290e,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-16T01:05:40.968289188Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e018f904305989f1ff1d93597179b4da15f71f64549bbfeba8a039ff003c7256,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-2q58l,Ui
d:e9b9d000-738b-4110-8757-17f76197285c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713229541918990631,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-2q58l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9b9d000-738b-4110-8757-17f76197285c,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-16T01:05:41.606915055Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dba64b21492824747924f5f579e60784f329ad8168e49a1cc96794b5714a926f,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-h8k4k,Uid:1b114848-1137-4215-a966-03db39e4de23,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713229541874084780,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-h8k4k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b114848-1137-4215-a966-03db39e4de23,k8s-app: kube-dns,pod-templa
te-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-04-16T01:05:41.563668005Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4fe0b00952f395f31444782dd4149d96c803fb591738f75e76802e150bf0acd4,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-617092,Uid:856b93b30da13b56956630be0aa3ea75,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1713229522404782763,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-617092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 856b93b30da13b56956630be0aa3ea75,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.225:8443,kubernetes.io/config.hash: 856b93b30da13b56956630be0aa3ea75,kubernetes.io/config.seen: 2024-04-16T01:05:21.964043147Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:545a1d31fa95190f89c49c1ae836
62055277e377003dad0deee6494cb4f5197b,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-617092,Uid:f5fd98642218f1bf3a202b613a4a213c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713229522398722955,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-617092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5fd98642218f1bf3a202b613a4a213c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: f5fd98642218f1bf3a202b613a4a213c,kubernetes.io/config.seen: 2024-04-16T01:05:21.964044427Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9843f30af44e8e3a5fa255d77f9a935203e21b0222351e47e9d585dc2502b2b8,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-617092,Uid:3908ca4b3abece521c999b86b56464ea,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713229522395378277,Labels:map[string]string{component: etcd,io.kube
rnetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-617092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3908ca4b3abece521c999b86b56464ea,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.225:2379,kubernetes.io/config.hash: 3908ca4b3abece521c999b86b56464ea,kubernetes.io/config.seen: 2024-04-16T01:05:21.964039331Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:33f94eee1c600853c87144dfa6c4bc116904080ef784c545280202335d9b5391,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-617092,Uid:01546522de12db89593554cc2fff4a64,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1713229522388459948,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-617092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01546522de12db89593554cc2fff4a64,tier: control-plane,},Annotations:map[str
ing]string{kubernetes.io/config.hash: 01546522de12db89593554cc2fff4a64,kubernetes.io/config.seen: 2024-04-16T01:05:21.964045317Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2f0e6d5deddfe9330efaaeb1ffd8fcdd0691c1c1cc1ff3bc7f4501fb6edc7fd9,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-617092,Uid:856b93b30da13b56956630be0aa3ea75,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1713229229064901678,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-617092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 856b93b30da13b56956630be0aa3ea75,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.225:8443,kubernetes.io/config.hash: 856b93b30da13b56956630be0aa3ea75,kubernetes.io/config.seen: 2024-04-16T01:00:28.595673543Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-coll
ector/interceptors.go:74" id=1f91371d-a657-48e9-a8fa-eec1f34efc5d name=/runtime.v1.RuntimeService/ListPodSandbox
	Apr 16 01:23:50 embed-certs-617092 crio[729]: time="2024-04-16 01:23:50.673799688Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9929e6ff-424e-4b95-847f-4d1f0865a330 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:23:50 embed-certs-617092 crio[729]: time="2024-04-16 01:23:50.673878540Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9929e6ff-424e-4b95-847f-4d1f0865a330 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:23:50 embed-certs-617092 crio[729]: time="2024-04-16 01:23:50.674913297Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e4572e9cdc29a024c0d665ea3122bbf4ee193ce62bec3d6fca7a84a3da8eea5d,PodSandboxId:ca0de572a57af289b5353e09c08fe81ebce442abf756b5868f360761282894ff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713229543272787433,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a62c0f7-0b15-48f3-9c17-d5966d39fbd5,},Annotations:map[string]string{io.kubernetes.container.hash: f0fa9704,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f74a3cc406377588df21cbb51972b33a94ceab9b1b709bf57ff2d56c7d603bc0,PodSandboxId:50a34e4f6bf31ec69fcc63e1c7df992f38dce685025056abd39be373db88db27,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713229543173787577,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p4rh9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42041028-d085-4ec4-8213-da3af0d5290e,},Annotations:map[string]string{io.kubernetes.container.hash: 895cdccf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaab3d6a27de7dc9412e8a6f0abc5e141962215eedfcf7197ee943e73841af99,PodSandboxId:e018f904305989f1ff1d93597179b4da15f71f64549bbfeba8a039ff003c7256,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713229542630118009,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-2q58l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9b9d000-738b-4110-8757-17f76197285c,},Annotations:map[string]string{io.kubernetes.container.hash: 53bd1eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff8ea56f2c871eedf837c4dfbcf307e3cfefa4bab9fffb2613a7f0bad633a078,PodSandboxId:dba64b21492824747924f5f579e60784f329ad8168e49a1cc96794b5714a926f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713229542484974548,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-h8k4k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b114848-1137-4215-a966-03db39e4de2
3,},Annotations:map[string]string{io.kubernetes.container.hash: 497ba5dc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f819e84c3f08a4c2272bb2247c65424f82f07babbba6f7fd68910737a5953cf,PodSandboxId:9843f30af44e8e3a5fa255d77f9a935203e21b0222351e47e9d585dc2502b2b8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713229522703278815,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-617092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3908ca4b3abece521c999b86b56464ea,},Annotations:map[string]string{io.kubernetes.container.hash: 696a4a67,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6569527c88ca91f8d9ea6edee36c9ea96e78bc7d69793f16c09d414891161c2d,PodSandboxId:545a1d31fa95190f89c49c1ae83662055277e377003dad0deee6494cb4f5197b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713229522682071161,Labels:map[string]string{io.kubernetes.con
tainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-617092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5fd98642218f1bf3a202b613a4a213c,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51303ba689e3975c0dd24bfae353857af5481682a7b169f7619c6551935453c9,PodSandboxId:33f94eee1c600853c87144dfa6c4bc116904080ef784c545280202335d9b5391,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713229522606096489,Labels:map[string]string{io.kubernet
es.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-617092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01546522de12db89593554cc2fff4a64,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4e66ec3e722b10292730106f83fdc1422f913525d80890e068ee2fcb28cb206,PodSandboxId:4fe0b00952f395f31444782dd4149d96c803fb591738f75e76802e150bf0acd4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713229522577254165,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-617092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 856b93b30da13b56956630be0aa3ea75,},Annotations:map[string]string{io.kubernetes.container.hash: e92b43ac,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b2cd3bd95b7310f36c549766f75579e6906e2f74ed2283dadddd6e622ac6952,PodSandboxId:2f0e6d5deddfe9330efaaeb1ffd8fcdd0691c1c1cc1ff3bc7f4501fb6edc7fd9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1713229229301969967,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-617092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 856b93b30da13b56956630be0aa3ea75,},Annotations:map[string]string{io.kubernetes.container.hash: e92b43ac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9929e6ff-424e-4b95-847f-4d1f0865a330 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:23:50 embed-certs-617092 crio[729]: time="2024-04-16 01:23:50.706291327Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3d7c9c52-bd2c-43aa-8d5a-886ab2873dbd name=/runtime.v1.RuntimeService/Version
	Apr 16 01:23:50 embed-certs-617092 crio[729]: time="2024-04-16 01:23:50.706395833Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3d7c9c52-bd2c-43aa-8d5a-886ab2873dbd name=/runtime.v1.RuntimeService/Version
	Apr 16 01:23:50 embed-certs-617092 crio[729]: time="2024-04-16 01:23:50.710904566Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=481c8c71-9ce4-44ea-a2d8-ccf6218d66fb name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:23:50 embed-certs-617092 crio[729]: time="2024-04-16 01:23:50.711587536Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713230630711552492,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=481c8c71-9ce4-44ea-a2d8-ccf6218d66fb name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:23:50 embed-certs-617092 crio[729]: time="2024-04-16 01:23:50.713383564Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e54ca8b1-ecc9-4888-9eb9-31aba6fc914b name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:23:50 embed-certs-617092 crio[729]: time="2024-04-16 01:23:50.713467245Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e54ca8b1-ecc9-4888-9eb9-31aba6fc914b name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:23:50 embed-certs-617092 crio[729]: time="2024-04-16 01:23:50.714587584Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e4572e9cdc29a024c0d665ea3122bbf4ee193ce62bec3d6fca7a84a3da8eea5d,PodSandboxId:ca0de572a57af289b5353e09c08fe81ebce442abf756b5868f360761282894ff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713229543272787433,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a62c0f7-0b15-48f3-9c17-d5966d39fbd5,},Annotations:map[string]string{io.kubernetes.container.hash: f0fa9704,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f74a3cc406377588df21cbb51972b33a94ceab9b1b709bf57ff2d56c7d603bc0,PodSandboxId:50a34e4f6bf31ec69fcc63e1c7df992f38dce685025056abd39be373db88db27,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713229543173787577,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p4rh9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42041028-d085-4ec4-8213-da3af0d5290e,},Annotations:map[string]string{io.kubernetes.container.hash: 895cdccf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaab3d6a27de7dc9412e8a6f0abc5e141962215eedfcf7197ee943e73841af99,PodSandboxId:e018f904305989f1ff1d93597179b4da15f71f64549bbfeba8a039ff003c7256,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713229542630118009,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-2q58l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9b9d000-738b-4110-8757-17f76197285c,},Annotations:map[string]string{io.kubernetes.container.hash: 53bd1eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff8ea56f2c871eedf837c4dfbcf307e3cfefa4bab9fffb2613a7f0bad633a078,PodSandboxId:dba64b21492824747924f5f579e60784f329ad8168e49a1cc96794b5714a926f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713229542484974548,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-h8k4k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b114848-1137-4215-a966-03db39e4de2
3,},Annotations:map[string]string{io.kubernetes.container.hash: 497ba5dc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f819e84c3f08a4c2272bb2247c65424f82f07babbba6f7fd68910737a5953cf,PodSandboxId:9843f30af44e8e3a5fa255d77f9a935203e21b0222351e47e9d585dc2502b2b8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713229522703278815,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-617092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3908ca4b3abece521c999b86b56464ea,},Annotations:map[string]string{io.kubernetes.container.hash: 696a4a67,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6569527c88ca91f8d9ea6edee36c9ea96e78bc7d69793f16c09d414891161c2d,PodSandboxId:545a1d31fa95190f89c49c1ae83662055277e377003dad0deee6494cb4f5197b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713229522682071161,Labels:map[string]string{io.kubernetes.con
tainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-617092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5fd98642218f1bf3a202b613a4a213c,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51303ba689e3975c0dd24bfae353857af5481682a7b169f7619c6551935453c9,PodSandboxId:33f94eee1c600853c87144dfa6c4bc116904080ef784c545280202335d9b5391,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713229522606096489,Labels:map[string]string{io.kubernet
es.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-617092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01546522de12db89593554cc2fff4a64,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4e66ec3e722b10292730106f83fdc1422f913525d80890e068ee2fcb28cb206,PodSandboxId:4fe0b00952f395f31444782dd4149d96c803fb591738f75e76802e150bf0acd4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713229522577254165,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-617092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 856b93b30da13b56956630be0aa3ea75,},Annotations:map[string]string{io.kubernetes.container.hash: e92b43ac,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b2cd3bd95b7310f36c549766f75579e6906e2f74ed2283dadddd6e622ac6952,PodSandboxId:2f0e6d5deddfe9330efaaeb1ffd8fcdd0691c1c1cc1ff3bc7f4501fb6edc7fd9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1713229229301969967,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-617092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 856b93b30da13b56956630be0aa3ea75,},Annotations:map[string]string{io.kubernetes.container.hash: e92b43ac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e54ca8b1-ecc9-4888-9eb9-31aba6fc914b name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:23:50 embed-certs-617092 crio[729]: time="2024-04-16 01:23:50.764057479Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7f38c47c-d33c-46d0-8819-1c80b2aaecdc name=/runtime.v1.RuntimeService/Version
	Apr 16 01:23:50 embed-certs-617092 crio[729]: time="2024-04-16 01:23:50.764249586Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7f38c47c-d33c-46d0-8819-1c80b2aaecdc name=/runtime.v1.RuntimeService/Version
	Apr 16 01:23:50 embed-certs-617092 crio[729]: time="2024-04-16 01:23:50.766463039Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=04365952-4f6c-4e55-b00d-31be852e41b5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:23:50 embed-certs-617092 crio[729]: time="2024-04-16 01:23:50.767053940Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713230630767021334,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=04365952-4f6c-4e55-b00d-31be852e41b5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:23:50 embed-certs-617092 crio[729]: time="2024-04-16 01:23:50.768299202Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=26e5de18-8bbf-4214-b90f-cff7f2196b09 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:23:50 embed-certs-617092 crio[729]: time="2024-04-16 01:23:50.768380435Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=26e5de18-8bbf-4214-b90f-cff7f2196b09 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:23:50 embed-certs-617092 crio[729]: time="2024-04-16 01:23:50.768972002Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e4572e9cdc29a024c0d665ea3122bbf4ee193ce62bec3d6fca7a84a3da8eea5d,PodSandboxId:ca0de572a57af289b5353e09c08fe81ebce442abf756b5868f360761282894ff,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713229543272787433,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a62c0f7-0b15-48f3-9c17-d5966d39fbd5,},Annotations:map[string]string{io.kubernetes.container.hash: f0fa9704,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f74a3cc406377588df21cbb51972b33a94ceab9b1b709bf57ff2d56c7d603bc0,PodSandboxId:50a34e4f6bf31ec69fcc63e1c7df992f38dce685025056abd39be373db88db27,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING,CreatedAt:1713229543173787577,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p4rh9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42041028-d085-4ec4-8213-da3af0d5290e,},Annotations:map[string]string{io.kubernetes.container.hash: 895cdccf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /d
ev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaab3d6a27de7dc9412e8a6f0abc5e141962215eedfcf7197ee943e73841af99,PodSandboxId:e018f904305989f1ff1d93597179b4da15f71f64549bbfeba8a039ff003c7256,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713229542630118009,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-2q58l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9b9d000-738b-4110-8757-17f76197285c,},Annotations:map[string]string{io.kubernetes.container.hash: 53bd1eb,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"conta
inerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff8ea56f2c871eedf837c4dfbcf307e3cfefa4bab9fffb2613a7f0bad633a078,PodSandboxId:dba64b21492824747924f5f579e60784f329ad8168e49a1cc96794b5714a926f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713229542484974548,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-h8k4k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b114848-1137-4215-a966-03db39e4de2
3,},Annotations:map[string]string{io.kubernetes.container.hash: 497ba5dc,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f819e84c3f08a4c2272bb2247c65424f82f07babbba6f7fd68910737a5953cf,PodSandboxId:9843f30af44e8e3a5fa255d77f9a935203e21b0222351e47e9d585dc2502b2b8,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1713229522703278815,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-617092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3908ca4b3abece521c999b86b56464ea,},Annotations:map[string]string{io.kubernetes.container.hash: 696a4a67,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6569527c88ca91f8d9ea6edee36c9ea96e78bc7d69793f16c09d414891161c2d,PodSandboxId:545a1d31fa95190f89c49c1ae83662055277e377003dad0deee6494cb4f5197b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:1713229522682071161,Labels:map[string]string{io.kubernetes.con
tainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-617092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5fd98642218f1bf3a202b613a4a213c,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51303ba689e3975c0dd24bfae353857af5481682a7b169f7619c6551935453c9,PodSandboxId:33f94eee1c600853c87144dfa6c4bc116904080ef784c545280202335d9b5391,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:1713229522606096489,Labels:map[string]string{io.kubernet
es.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-617092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01546522de12db89593554cc2fff4a64,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4e66ec3e722b10292730106f83fdc1422f913525d80890e068ee2fcb28cb206,PodSandboxId:4fe0b00952f395f31444782dd4149d96c803fb591738f75e76802e150bf0acd4,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713229522577254165,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-617092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 856b93b30da13b56956630be0aa3ea75,},Annotations:map[string]string{io.kubernetes.container.hash: e92b43ac,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b2cd3bd95b7310f36c549766f75579e6906e2f74ed2283dadddd6e622ac6952,PodSandboxId:2f0e6d5deddfe9330efaaeb1ffd8fcdd0691c1c1cc1ff3bc7f4501fb6edc7fd9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1713229229301969967,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-617092,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 856b93b30da13b56956630be0aa3ea75,},Annotations:map[string]string{io.kubernetes.container.hash: e92b43ac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=26e5de18-8bbf-4214-b90f-cff7f2196b09 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e4572e9cdc29a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   18 minutes ago      Running             storage-provisioner       0                   ca0de572a57af       storage-provisioner
	f74a3cc406377       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392   18 minutes ago      Running             kube-proxy                0                   50a34e4f6bf31       kube-proxy-p4rh9
	aaab3d6a27de7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   18 minutes ago      Running             coredns                   0                   e018f90430598       coredns-76f75df574-2q58l
	ff8ea56f2c871       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   18 minutes ago      Running             coredns                   0                   dba64b2149282       coredns-76f75df574-h8k4k
	1f819e84c3f08       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   18 minutes ago      Running             etcd                      2                   9843f30af44e8       etcd-embed-certs-617092
	6569527c88ca9       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3   18 minutes ago      Running             kube-controller-manager   2                   545a1d31fa951       kube-controller-manager-embed-certs-617092
	51303ba689e39       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b   18 minutes ago      Running             kube-scheduler            2                   33f94eee1c600       kube-scheduler-embed-certs-617092
	e4e66ec3e722b       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   18 minutes ago      Running             kube-apiserver            2                   4fe0b00952f39       kube-apiserver-embed-certs-617092
	2b2cd3bd95b73       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   23 minutes ago      Exited              kube-apiserver            1                   2f0e6d5deddfe       kube-apiserver-embed-certs-617092
	
	
	==> coredns [aaab3d6a27de7dc9412e8a6f0abc5e141962215eedfcf7197ee943e73841af99] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [ff8ea56f2c871eedf837c4dfbcf307e3cfefa4bab9fffb2613a7f0bad633a078] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-617092
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-617092
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e98a202b00bb48daab8bbee2f0c7bcf1556f8388
	                    minikube.k8s.io/name=embed-certs-617092
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_16T01_05_29_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 01:05:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-617092
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 01:23:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 01:21:08 +0000   Tue, 16 Apr 2024 01:05:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 01:21:08 +0000   Tue, 16 Apr 2024 01:05:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 01:21:08 +0000   Tue, 16 Apr 2024 01:05:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 01:21:08 +0000   Tue, 16 Apr 2024 01:05:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.225
	  Hostname:    embed-certs-617092
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef1adc5d828049d78e732d58bef9fedf
	  System UUID:                ef1adc5d-8280-49d7-8e73-2d58bef9fedf
	  Boot ID:                    98b33474-2495-4ce9-aa86-3f70705f2557
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-2q58l                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     18m
	  kube-system                 coredns-76f75df574-h8k4k                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     18m
	  kube-system                 etcd-embed-certs-617092                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         18m
	  kube-system                 kube-apiserver-embed-certs-617092             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-embed-certs-617092    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-p4rh9                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-embed-certs-617092             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 metrics-server-57f55c9bc5-j5clp               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         18m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 18m                kube-proxy       
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node embed-certs-617092 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node embed-certs-617092 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node embed-certs-617092 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m                kubelet          Node embed-certs-617092 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m                kubelet          Node embed-certs-617092 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m                kubelet          Node embed-certs-617092 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18m                node-controller  Node embed-certs-617092 event: Registered Node embed-certs-617092 in Controller
	
	
	==> dmesg <==
	[  +0.052884] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041433] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.066493] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.930995] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.677486] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.118314] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[  +0.058739] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.077448] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +0.200962] systemd-fstab-generator[670]: Ignoring "noauto" option for root device
	[  +0.172170] systemd-fstab-generator[683]: Ignoring "noauto" option for root device
	[  +0.348318] systemd-fstab-generator[713]: Ignoring "noauto" option for root device
	[  +4.822279] systemd-fstab-generator[812]: Ignoring "noauto" option for root device
	[  +0.067330] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.189500] systemd-fstab-generator[936]: Ignoring "noauto" option for root device
	[  +5.638020] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.687182] kauditd_printk_skb: 79 callbacks suppressed
	[Apr16 01:05] kauditd_printk_skb: 3 callbacks suppressed
	[  +2.258288] systemd-fstab-generator[3577]: Ignoring "noauto" option for root device
	[  +4.541284] kauditd_printk_skb: 58 callbacks suppressed
	[  +2.747678] systemd-fstab-generator[3895]: Ignoring "noauto" option for root device
	[ +12.513471] systemd-fstab-generator[4099]: Ignoring "noauto" option for root device
	[  +0.115491] kauditd_printk_skb: 14 callbacks suppressed
	[Apr16 01:06] kauditd_printk_skb: 82 callbacks suppressed
	
	
	==> etcd [1f819e84c3f08a4c2272bb2247c65424f82f07babbba6f7fd68910737a5953cf] <==
	{"level":"info","ts":"2024-04-16T01:05:23.874436Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"6b1b17dfe1a8c5a3","local-member-attributes":"{Name:embed-certs-617092 ClientURLs:[https://192.168.61.225:2379]}","request-path":"/0/members/6b1b17dfe1a8c5a3/attributes","cluster-id":"76f03a987549979","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-16T01:05:23.874655Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-16T01:05:23.874738Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-16T01:05:23.875225Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T01:05:23.879786Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-16T01:05:23.889239Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-16T01:05:23.889312Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-16T01:05:23.891589Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.225:2379"}
	{"level":"info","ts":"2024-04-16T01:05:23.891727Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"76f03a987549979","local-member-id":"6b1b17dfe1a8c5a3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T01:05:23.891811Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T01:05:23.891856Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T01:15:23.942607Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":720}
	{"level":"info","ts":"2024-04-16T01:15:23.952023Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":720,"took":"9.070445ms","hash":4028835467,"current-db-size-bytes":2437120,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2437120,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-04-16T01:15:23.952086Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4028835467,"revision":720,"compact-revision":-1}
	{"level":"info","ts":"2024-04-16T01:20:23.949755Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":963}
	{"level":"info","ts":"2024-04-16T01:20:23.953537Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":963,"took":"3.442772ms","hash":2030205047,"current-db-size-bytes":2437120,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1626112,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-04-16T01:20:23.953589Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2030205047,"revision":963,"compact-revision":720}
	{"level":"warn","ts":"2024-04-16T01:20:52.647848Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"149.997093ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14241383558456866462 > lease_revoke:<id:45a38ee46fe8f64e>","response":"size:27"}
	{"level":"info","ts":"2024-04-16T01:21:38.778286Z","caller":"traceutil/trace.go:171","msg":"trace[2088933054] transaction","detail":"{read_only:false; response_revision:1269; number_of_response:1; }","duration":"158.789369ms","start":"2024-04-16T01:21:38.619459Z","end":"2024-04-16T01:21:38.778249Z","steps":["trace[2088933054] 'process raft request'  (duration: 158.594476ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T01:22:52.731254Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.416278ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14241383558456867054 > lease_revoke:<id:45a38ee46fe8f8a1>","response":"size:27"}
	{"level":"warn","ts":"2024-04-16T01:23:18.327632Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"148.133792ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-04-16T01:23:18.327841Z","caller":"traceutil/trace.go:171","msg":"trace[1085999263] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1350; }","duration":"148.364807ms","start":"2024-04-16T01:23:18.179385Z","end":"2024-04-16T01:23:18.32775Z","steps":["trace[1085999263] 'range keys from in-memory index tree'  (duration: 148.069863ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T01:23:24.064102Z","caller":"traceutil/trace.go:171","msg":"trace[639236104] transaction","detail":"{read_only:false; response_revision:1354; number_of_response:1; }","duration":"111.162008ms","start":"2024-04-16T01:23:23.952922Z","end":"2024-04-16T01:23:24.064084Z","steps":["trace[639236104] 'process raft request'  (duration: 110.96243ms)"],"step_count":1}
	{"level":"info","ts":"2024-04-16T01:23:44.537375Z","caller":"traceutil/trace.go:171","msg":"trace[1894366084] transaction","detail":"{read_only:false; response_revision:1370; number_of_response:1; }","duration":"357.212021ms","start":"2024-04-16T01:23:44.180129Z","end":"2024-04-16T01:23:44.537341Z","steps":["trace[1894366084] 'process raft request'  (duration: 356.867282ms)"],"step_count":1}
	{"level":"warn","ts":"2024-04-16T01:23:44.538351Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-04-16T01:23:44.180111Z","time spent":"357.375905ms","remote":"127.0.0.1:32926","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1103,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1369 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	
	
	==> kernel <==
	 01:23:51 up 23 min,  0 users,  load average: 0.36, 0.20, 0.18
	Linux embed-certs-617092 5.10.207 #1 SMP Mon Apr 15 15:01:07 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2b2cd3bd95b7310f36c549766f75579e6906e2f74ed2283dadddd6e622ac6952] <==
	W0416 01:05:15.937312       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:15.999583       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:16.068665       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:16.122978       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:16.250774       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:16.269848       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:16.330785       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:16.349783       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:16.389888       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:16.647491       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:16.658796       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:16.688720       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:16.699862       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:16.753420       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:16.796622       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:16.837439       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:16.919225       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:16.941405       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:17.069002       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:17.120369       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:17.319054       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:17.398728       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:17.477648       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:17.545024       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:17.599880       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [e4e66ec3e722b10292730106f83fdc1422f913525d80890e068ee2fcb28cb206] <==
	I0416 01:18:26.379755       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 01:20:25.380631       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 01:20:25.380738       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0416 01:20:26.381866       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 01:20:26.381915       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0416 01:20:26.381924       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 01:20:26.381984       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 01:20:26.382196       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0416 01:20:26.383117       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 01:21:26.382234       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 01:21:26.382449       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0416 01:21:26.382483       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 01:21:26.383571       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 01:21:26.383661       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0416 01:21:26.383693       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 01:23:26.382726       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 01:23:26.382836       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0416 01:23:26.382853       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 01:23:26.384240       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 01:23:26.384363       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0416 01:23:26.384405       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [6569527c88ca91f8d9ea6edee36c9ea96e78bc7d69793f16c09d414891161c2d] <==
	I0416 01:18:11.201997       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 01:18:40.718389       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:18:41.212683       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 01:19:10.724744       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:19:11.221451       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 01:19:40.732284       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:19:41.238561       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 01:20:10.738537       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:20:11.247205       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 01:20:40.744893       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:20:41.263075       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 01:21:10.753626       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:21:11.279753       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 01:21:40.760398       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:21:41.288178       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0416 01:21:43.257041       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="248.745µs"
	I0416 01:21:55.249670       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="324.914µs"
	E0416 01:22:10.772395       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:22:11.302420       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 01:22:40.777401       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:22:41.317016       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 01:23:10.783261       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:23:11.327702       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 01:23:40.790048       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:23:41.341226       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [f74a3cc406377588df21cbb51972b33a94ceab9b1b709bf57ff2d56c7d603bc0] <==
	I0416 01:05:43.609787       1 server_others.go:72] "Using iptables proxy"
	I0416 01:05:43.645836       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.61.225"]
	I0416 01:05:43.748312       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0416 01:05:43.748463       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0416 01:05:43.748524       1 server_others.go:168] "Using iptables Proxier"
	I0416 01:05:43.752709       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0416 01:05:43.753599       1 server.go:865] "Version info" version="v1.29.3"
	I0416 01:05:43.753636       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 01:05:43.755490       1 config.go:188] "Starting service config controller"
	I0416 01:05:43.755641       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0416 01:05:43.766371       1 shared_informer.go:318] Caches are synced for service config
	I0416 01:05:43.756939       1 config.go:97] "Starting endpoint slice config controller"
	I0416 01:05:43.766616       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0416 01:05:43.766756       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0416 01:05:43.762073       1 config.go:315] "Starting node config controller"
	I0416 01:05:43.768491       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0416 01:05:43.768530       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [51303ba689e3975c0dd24bfae353857af5481682a7b169f7619c6551935453c9] <==
	E0416 01:05:25.422545       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0416 01:05:25.422563       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0416 01:05:25.422553       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0416 01:05:25.422576       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0416 01:05:26.236915       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0416 01:05:26.237010       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0416 01:05:26.311996       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0416 01:05:26.312069       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0416 01:05:26.378396       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0416 01:05:26.378501       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0416 01:05:26.498315       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0416 01:05:26.498364       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0416 01:05:26.570931       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0416 01:05:26.571020       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0416 01:05:26.585442       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0416 01:05:26.585603       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0416 01:05:26.606410       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0416 01:05:26.606506       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0416 01:05:26.648422       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0416 01:05:26.648472       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0416 01:05:26.727454       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0416 01:05:26.727505       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0416 01:05:26.890105       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0416 01:05:26.890217       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0416 01:05:29.407422       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 16 01:21:29 embed-certs-617092 kubelet[3902]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 01:21:30 embed-certs-617092 kubelet[3902]: E0416 01:21:30.250887    3902 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Apr 16 01:21:30 embed-certs-617092 kubelet[3902]: E0416 01:21:30.251017    3902 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Apr 16 01:21:30 embed-certs-617092 kubelet[3902]: E0416 01:21:30.251393    3902 kuberuntime_manager.go:1262] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-zc89d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Pr
obeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:F
ile,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-j5clp_kube-system(99808b2d-344f-43b7-a29c-01f0a2026aa8): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Apr 16 01:21:30 embed-certs-617092 kubelet[3902]: E0416 01:21:30.251543    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-j5clp" podUID="99808b2d-344f-43b7-a29c-01f0a2026aa8"
	Apr 16 01:21:43 embed-certs-617092 kubelet[3902]: E0416 01:21:43.231621    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j5clp" podUID="99808b2d-344f-43b7-a29c-01f0a2026aa8"
	Apr 16 01:21:55 embed-certs-617092 kubelet[3902]: E0416 01:21:55.236394    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j5clp" podUID="99808b2d-344f-43b7-a29c-01f0a2026aa8"
	Apr 16 01:22:09 embed-certs-617092 kubelet[3902]: E0416 01:22:09.231669    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j5clp" podUID="99808b2d-344f-43b7-a29c-01f0a2026aa8"
	Apr 16 01:22:23 embed-certs-617092 kubelet[3902]: E0416 01:22:23.231375    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j5clp" podUID="99808b2d-344f-43b7-a29c-01f0a2026aa8"
	Apr 16 01:22:29 embed-certs-617092 kubelet[3902]: E0416 01:22:29.319096    3902 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 01:22:29 embed-certs-617092 kubelet[3902]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 01:22:29 embed-certs-617092 kubelet[3902]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 01:22:29 embed-certs-617092 kubelet[3902]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 01:22:29 embed-certs-617092 kubelet[3902]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 01:22:38 embed-certs-617092 kubelet[3902]: E0416 01:22:38.231545    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j5clp" podUID="99808b2d-344f-43b7-a29c-01f0a2026aa8"
	Apr 16 01:22:52 embed-certs-617092 kubelet[3902]: E0416 01:22:52.231089    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j5clp" podUID="99808b2d-344f-43b7-a29c-01f0a2026aa8"
	Apr 16 01:23:05 embed-certs-617092 kubelet[3902]: E0416 01:23:05.230972    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j5clp" podUID="99808b2d-344f-43b7-a29c-01f0a2026aa8"
	Apr 16 01:23:19 embed-certs-617092 kubelet[3902]: E0416 01:23:19.232453    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j5clp" podUID="99808b2d-344f-43b7-a29c-01f0a2026aa8"
	Apr 16 01:23:29 embed-certs-617092 kubelet[3902]: E0416 01:23:29.312707    3902 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 01:23:29 embed-certs-617092 kubelet[3902]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 01:23:29 embed-certs-617092 kubelet[3902]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 01:23:29 embed-certs-617092 kubelet[3902]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 01:23:29 embed-certs-617092 kubelet[3902]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 01:23:30 embed-certs-617092 kubelet[3902]: E0416 01:23:30.230706    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j5clp" podUID="99808b2d-344f-43b7-a29c-01f0a2026aa8"
	Apr 16 01:23:41 embed-certs-617092 kubelet[3902]: E0416 01:23:41.233877    3902 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j5clp" podUID="99808b2d-344f-43b7-a29c-01f0a2026aa8"
	
	
	==> storage-provisioner [e4572e9cdc29a024c0d665ea3122bbf4ee193ce62bec3d6fca7a84a3da8eea5d] <==
	I0416 01:05:43.595643       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0416 01:05:43.675057       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0416 01:05:43.675224       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0416 01:05:43.703751       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0416 01:05:43.703933       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-617092_838c8a3d-3a26-4e1f-8eb9-2f38bc028b85!
	I0416 01:05:43.704893       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1f09d4c6-528f-4f5f-9f22-4bfa77107c5d", APIVersion:"v1", ResourceVersion:"459", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-617092_838c8a3d-3a26-4e1f-8eb9-2f38bc028b85 became leader
	I0416 01:05:43.804093       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-617092_838c8a3d-3a26-4e1f-8eb9-2f38bc028b85!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-617092 -n embed-certs-617092
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-617092 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-j5clp
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-617092 describe pod metrics-server-57f55c9bc5-j5clp
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-617092 describe pod metrics-server-57f55c9bc5-j5clp: exit status 1 (96.04117ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-j5clp" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-617092 describe pod metrics-server-57f55c9bc5-j5clp: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (543.58s)
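Note on the kubelet log above: the ImagePullBackOff loop is expected for this scenario, because the metrics-server addon was enabled with --registries=MetricsServer=fake.domain (recorded in the Audit table of the next post-mortem), so the image fake.domain/registry.k8s.io/echoserver:1.4 can never be pulled. A rough manual check while the embed-certs-617092 profile is still running, assuming the addon's usual metrics-server Deployment name and k8s-app=metrics-server label in kube-system, would be:

	# list the metrics-server pod and its status (should show ImagePullBackOff)
	kubectl --context embed-certs-617092 -n kube-system get pods -l k8s-app=metrics-server
	# print the image the Deployment was rewritten to use (should show the fake.domain prefix)
	kubectl --context embed-certs-617092 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'

The "not found" describe error at the end of the post-mortem simply means the pod had already been replaced or removed by the time kubectl ran against the pod name captured earlier.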

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (328.9s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-653942 -n default-k8s-diff-port-653942
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-04-16 01:20:38.394044255 +0000 UTC m=+6180.910805567
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-653942 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-653942 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.547µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-653942 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
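The check at start_stop_delete_test.go:297 expects the dashboard-metrics-scraper Deployment to reference registry.k8s.io/echoserver:1.4 (the image the dashboard addon was enabled with, per the Audit table below), but the describe call above hit the test's already-expired context deadline (1.547µs), so no deployment info was captured. A sketch of the same inspection run by hand, assuming the Deployment still exists in the kubernetes-dashboard namespace:

	# print the images used by the dashboard-metrics-scraper Deployment
	kubectl --context default-k8s-diff-port-653942 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'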
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-653942 -n default-k8s-diff-port-653942
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-653942 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-653942 logs -n 25: (1.227715012s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| addons  | enable dashboard -p newest-cni-012509                  | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:53 UTC | 16 Apr 24 00:53 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p newest-cni-012509 --memory=2200 --alsologtostderr   | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:53 UTC | 16 Apr 24 00:54 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |                |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |                |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2                      |                              |         |                |                     |                     |
	| image   | newest-cni-012509 image list                           | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 00:54 UTC |
	|         | --format=json                                          |                              |         |                |                     |                     |
	| pause   | -p newest-cni-012509                                   | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 00:54 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |                |                     |                     |
	| unpause | -p newest-cni-012509                                   | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 00:54 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |                |                     |                     |
	| delete  | -p newest-cni-012509                                   | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 00:54 UTC |
	| delete  | -p newest-cni-012509                                   | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 00:54 UTC |
	| delete  | -p                                                     | disable-driver-mounts-988802 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 00:54 UTC |
	|         | disable-driver-mounts-988802                           |                              |         |                |                     |                     |
	| start   | -p embed-certs-617092                                  | embed-certs-617092           | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 00:56 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-653942       | default-k8s-diff-port-653942 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-572602                  | no-preload-572602            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-653942 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 01:06 UTC |
	|         | default-k8s-diff-port-653942                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-800769        | old-k8s-version-800769       | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| start   | -p no-preload-572602                                   | no-preload-572602            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 01:05 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |                |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2                      |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-617092            | embed-certs-617092           | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:56 UTC | 16 Apr 24 00:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-617092                                  | embed-certs-617092           | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:56 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-800769                              | old-k8s-version-800769       | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:56 UTC | 16 Apr 24 00:56 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-800769             | old-k8s-version-800769       | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:56 UTC | 16 Apr 24 00:56 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-800769                              | old-k8s-version-800769       | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:56 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-617092                 | embed-certs-617092           | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:58 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p embed-certs-617092                                  | embed-certs-617092           | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:58 UTC | 16 Apr 24 01:05 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| delete  | -p old-k8s-version-800769                              | old-k8s-version-800769       | jenkins | v1.33.0-beta.0 | 16 Apr 24 01:20 UTC | 16 Apr 24 01:20 UTC |
	| start   | -p auto-381983 --memory=3072                           | auto-381983                  | jenkins | v1.33.0-beta.0 | 16 Apr 24 01:20 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	| delete  | -p no-preload-572602                                   | no-preload-572602            | jenkins | v1.33.0-beta.0 | 16 Apr 24 01:20 UTC | 16 Apr 24 01:20 UTC |
	| start   | -p kindnet-381983                                      | kindnet-381983               | jenkins | v1.33.0-beta.0 | 16 Apr 24 01:20 UTC |                     |
	|         | --memory=3072                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |                |                     |                     |
	|         | --cni=kindnet --driver=kvm2                            |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 01:20:35
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 01:20:35.211390   69462 out.go:291] Setting OutFile to fd 1 ...
	I0416 01:20:35.211842   69462 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 01:20:35.211903   69462 out.go:304] Setting ErrFile to fd 2...
	I0416 01:20:35.211924   69462 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 01:20:35.212368   69462 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-7542/.minikube/bin
	I0416 01:20:35.213555   69462 out.go:298] Setting JSON to false
	I0416 01:20:35.214515   69462 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":7379,"bootTime":1713223056,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0416 01:20:35.214577   69462 start.go:139] virtualization: kvm guest
	I0416 01:20:35.216453   69462 out.go:177] * [kindnet-381983] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0416 01:20:35.218201   69462 out.go:177]   - MINIKUBE_LOCATION=18647
	I0416 01:20:35.219646   69462 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 01:20:35.218274   69462 notify.go:220] Checking for updates...
	I0416 01:20:35.222388   69462 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0416 01:20:35.223963   69462 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18647-7542/.minikube
	I0416 01:20:35.225416   69462 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0416 01:20:35.226784   69462 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 01:20:35.228838   69462 config.go:182] Loaded profile config "auto-381983": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 01:20:35.228988   69462 config.go:182] Loaded profile config "default-k8s-diff-port-653942": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 01:20:35.229112   69462 config.go:182] Loaded profile config "embed-certs-617092": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 01:20:35.229296   69462 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 01:20:35.264334   69462 out.go:177] * Using the kvm2 driver based on user configuration
	I0416 01:20:35.265813   69462 start.go:297] selected driver: kvm2
	I0416 01:20:35.265828   69462 start.go:901] validating driver "kvm2" against <nil>
	I0416 01:20:35.265841   69462 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 01:20:35.266765   69462 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 01:20:35.266845   69462 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18647-7542/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0416 01:20:35.283014   69462 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0416 01:20:35.283087   69462 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0416 01:20:35.283377   69462 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 01:20:35.283458   69462 cni.go:84] Creating CNI manager for "kindnet"
	I0416 01:20:35.283471   69462 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0416 01:20:35.283559   69462 start.go:340] cluster config:
	{Name:kindnet-381983 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:kindnet-381983 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthS
ock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 01:20:35.283731   69462 iso.go:125] acquiring lock: {Name:mk848ef90fbc2a1876645fc8fc16af382c3bcaa9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 01:20:35.285776   69462 out.go:177] * Starting "kindnet-381983" primary control-plane node in "kindnet-381983" cluster
	
	
	==> CRI-O <==
	Apr 16 01:20:39 default-k8s-diff-port-653942 crio[729]: time="2024-04-16 01:20:39.038578284Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713230439038555318,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=adff05b0-2c5a-4ad4-9b69-20c74a090a26 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:20:39 default-k8s-diff-port-653942 crio[729]: time="2024-04-16 01:20:39.039260305Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2ab732db-3d3d-415d-9bb7-532b0bf51b1d name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:20:39 default-k8s-diff-port-653942 crio[729]: time="2024-04-16 01:20:39.039316612Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2ab732db-3d3d-415d-9bb7-532b0bf51b1d name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:20:39 default-k8s-diff-port-653942 crio[729]: time="2024-04-16 01:20:39.039513787Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c4ddd594d7334d0604fd41c36bda70b56c09e95f393c080374978e5783c53f6d,PodSandboxId:43d8a636bc0f240f869482ade4f50c4f032894ad853abe8d6d28ebaea502c41b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713229565603417369,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d131c1fc-9124-4b46-a16f-a8fb5029a57b,},Annotations:map[string]string{io.kubernetes.container.hash: 27457e9a,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c41790569cbe9a105aa5b3e904bdf461b0c6b1b9e64053f82f73cdff95cea28,PodSandboxId:61a7932db1c4e83f4cf320bbce30df3b5a9b3ab4ac956460df9324b30e32f5f1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713229564042917327,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-zpnhs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 990672b6-bb3a-4f91-8de7-7c2ec224c94a,},Annotations:map[string]string{io.kubernetes.container.hash: ad74a928,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5b1e8894217a61058b0ac838ea292710915b108ad0417eb72b6302ddaf9e3d2,PodSandboxId:3d5fd5064cf9db355f19293b3000fc3efc42b727a4a120cc50a3d1fa129e96a5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713229564127266379,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-5nnpv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 3350aca5-639e-44a1-bd84-d1e4b6486143,},Annotations:map[string]string{io.kubernetes.container.hash: 790c3ff0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ffc152b91a92e5cdc34e606f40d065686234148d7797e82423044fa41c461cd,PodSandboxId:c56c92061323a12b7649ec6dd0fb3d46f1f73c566d613aae3feec63a83f4aae8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING
,CreatedAt:1713229563406353932,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mg5km,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74764194-1f31-40b1-90b5-497e248ab7da,},Annotations:map[string]string{io.kubernetes.container.hash: 37bec7a6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:790f3485688cb7e8ce2b6eaa4acfac3f546e15f3b3d3011fb7d3babb7e28d508,PodSandboxId:9b015167635ab9ad101c5aa226284ddd0122e478292139bc942af14687c8491e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:171322954370919875
8,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-653942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f789bc57c1f4c290ab8fd275d2010d6a,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73dd87507a5ddd51947b5702eca37ec8a3c1d395ce443b1ca82fbd1d955329e6,PodSandboxId:eb65098ef8e126b3bbf9beff1cb98e3d528f467ab1c0568770988d0414c6ff79,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:17132295436
97138964,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-653942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdee61dbfd28bab5575146238429925f,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cb8787026c8eadc21223e448145ec1f2032be0623aa0e3d20d4a5680f0d26fb,PodSandboxId:177c67d63aab2abb2c6c4a6962a793f26a17c920f511526d529265155cb89ce4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:17132
29543671489451,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-653942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 027e6feb7cb85911f362954ce5f74701,},Annotations:map[string]string{io.kubernetes.container.hash: 41d79b00,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e135f634e26f7cb1cb96c36197c8120329ba60121e04fcf857d389b138d5879,PodSandboxId:697d63ff93426c3e8636123ffb9438aa3699fb072516fb61f83171ed69528a5c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713229543615136997,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-653942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42a0243852318f4be1b779e458bfa57d,},Annotations:map[string]string{io.kubernetes.container.hash: febc2576,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4ccaef892bf194c7025d3249065ddab09c8dd78f7be86a4b7b0aa63921817f7,PodSandboxId:a3ce46d6060154a9c776b7a99b0ee19bb2a188d3c754c8c762fe719488763730,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1713229247881428557,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-653942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42a0243852318f4be1b779e458bfa57d,},Annotations:map[string]string{io.kubernetes.container.hash: febc2576,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2ab732db-3d3d-415d-9bb7-532b0bf51b1d name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:20:39 default-k8s-diff-port-653942 crio[729]: time="2024-04-16 01:20:39.077487335Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a9b015dd-3a41-4532-8a36-e9ed434147de name=/runtime.v1.RuntimeService/Version
	Apr 16 01:20:39 default-k8s-diff-port-653942 crio[729]: time="2024-04-16 01:20:39.077557611Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a9b015dd-3a41-4532-8a36-e9ed434147de name=/runtime.v1.RuntimeService/Version
	Apr 16 01:20:39 default-k8s-diff-port-653942 crio[729]: time="2024-04-16 01:20:39.078927963Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d304aedb-07a6-42d5-ba95-a1a14270dd2f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:20:39 default-k8s-diff-port-653942 crio[729]: time="2024-04-16 01:20:39.079303741Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713230439079283260,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d304aedb-07a6-42d5-ba95-a1a14270dd2f name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:20:39 default-k8s-diff-port-653942 crio[729]: time="2024-04-16 01:20:39.079664733Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e2a518e4-6552-43f4-ad3a-d59fc44d249c name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:20:39 default-k8s-diff-port-653942 crio[729]: time="2024-04-16 01:20:39.079867878Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e2a518e4-6552-43f4-ad3a-d59fc44d249c name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:20:39 default-k8s-diff-port-653942 crio[729]: time="2024-04-16 01:20:39.080082692Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c4ddd594d7334d0604fd41c36bda70b56c09e95f393c080374978e5783c53f6d,PodSandboxId:43d8a636bc0f240f869482ade4f50c4f032894ad853abe8d6d28ebaea502c41b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713229565603417369,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d131c1fc-9124-4b46-a16f-a8fb5029a57b,},Annotations:map[string]string{io.kubernetes.container.hash: 27457e9a,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c41790569cbe9a105aa5b3e904bdf461b0c6b1b9e64053f82f73cdff95cea28,PodSandboxId:61a7932db1c4e83f4cf320bbce30df3b5a9b3ab4ac956460df9324b30e32f5f1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713229564042917327,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-zpnhs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 990672b6-bb3a-4f91-8de7-7c2ec224c94a,},Annotations:map[string]string{io.kubernetes.container.hash: ad74a928,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5b1e8894217a61058b0ac838ea292710915b108ad0417eb72b6302ddaf9e3d2,PodSandboxId:3d5fd5064cf9db355f19293b3000fc3efc42b727a4a120cc50a3d1fa129e96a5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713229564127266379,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-5nnpv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 3350aca5-639e-44a1-bd84-d1e4b6486143,},Annotations:map[string]string{io.kubernetes.container.hash: 790c3ff0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ffc152b91a92e5cdc34e606f40d065686234148d7797e82423044fa41c461cd,PodSandboxId:c56c92061323a12b7649ec6dd0fb3d46f1f73c566d613aae3feec63a83f4aae8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING
,CreatedAt:1713229563406353932,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mg5km,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74764194-1f31-40b1-90b5-497e248ab7da,},Annotations:map[string]string{io.kubernetes.container.hash: 37bec7a6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:790f3485688cb7e8ce2b6eaa4acfac3f546e15f3b3d3011fb7d3babb7e28d508,PodSandboxId:9b015167635ab9ad101c5aa226284ddd0122e478292139bc942af14687c8491e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:171322954370919875
8,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-653942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f789bc57c1f4c290ab8fd275d2010d6a,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73dd87507a5ddd51947b5702eca37ec8a3c1d395ce443b1ca82fbd1d955329e6,PodSandboxId:eb65098ef8e126b3bbf9beff1cb98e3d528f467ab1c0568770988d0414c6ff79,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:17132295436
97138964,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-653942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdee61dbfd28bab5575146238429925f,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cb8787026c8eadc21223e448145ec1f2032be0623aa0e3d20d4a5680f0d26fb,PodSandboxId:177c67d63aab2abb2c6c4a6962a793f26a17c920f511526d529265155cb89ce4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:17132
29543671489451,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-653942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 027e6feb7cb85911f362954ce5f74701,},Annotations:map[string]string{io.kubernetes.container.hash: 41d79b00,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e135f634e26f7cb1cb96c36197c8120329ba60121e04fcf857d389b138d5879,PodSandboxId:697d63ff93426c3e8636123ffb9438aa3699fb072516fb61f83171ed69528a5c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713229543615136997,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-653942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42a0243852318f4be1b779e458bfa57d,},Annotations:map[string]string{io.kubernetes.container.hash: febc2576,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4ccaef892bf194c7025d3249065ddab09c8dd78f7be86a4b7b0aa63921817f7,PodSandboxId:a3ce46d6060154a9c776b7a99b0ee19bb2a188d3c754c8c762fe719488763730,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1713229247881428557,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-653942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42a0243852318f4be1b779e458bfa57d,},Annotations:map[string]string{io.kubernetes.container.hash: febc2576,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e2a518e4-6552-43f4-ad3a-d59fc44d249c name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:20:39 default-k8s-diff-port-653942 crio[729]: time="2024-04-16 01:20:39.116650861Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f3d6ffaa-c3ff-4c89-af5e-6c882386de5f name=/runtime.v1.RuntimeService/Version
	Apr 16 01:20:39 default-k8s-diff-port-653942 crio[729]: time="2024-04-16 01:20:39.116769391Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f3d6ffaa-c3ff-4c89-af5e-6c882386de5f name=/runtime.v1.RuntimeService/Version
	Apr 16 01:20:39 default-k8s-diff-port-653942 crio[729]: time="2024-04-16 01:20:39.118260978Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fc2edf7b-6391-4d90-a441-fb0a63174989 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:20:39 default-k8s-diff-port-653942 crio[729]: time="2024-04-16 01:20:39.118659359Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713230439118638706,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fc2edf7b-6391-4d90-a441-fb0a63174989 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:20:39 default-k8s-diff-port-653942 crio[729]: time="2024-04-16 01:20:39.119416366Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f89a15db-3bff-45f8-b8a4-3275362bf34b name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:20:39 default-k8s-diff-port-653942 crio[729]: time="2024-04-16 01:20:39.119523172Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f89a15db-3bff-45f8-b8a4-3275362bf34b name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:20:39 default-k8s-diff-port-653942 crio[729]: time="2024-04-16 01:20:39.119873127Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c4ddd594d7334d0604fd41c36bda70b56c09e95f393c080374978e5783c53f6d,PodSandboxId:43d8a636bc0f240f869482ade4f50c4f032894ad853abe8d6d28ebaea502c41b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713229565603417369,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d131c1fc-9124-4b46-a16f-a8fb5029a57b,},Annotations:map[string]string{io.kubernetes.container.hash: 27457e9a,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c41790569cbe9a105aa5b3e904bdf461b0c6b1b9e64053f82f73cdff95cea28,PodSandboxId:61a7932db1c4e83f4cf320bbce30df3b5a9b3ab4ac956460df9324b30e32f5f1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713229564042917327,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-zpnhs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 990672b6-bb3a-4f91-8de7-7c2ec224c94a,},Annotations:map[string]string{io.kubernetes.container.hash: ad74a928,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5b1e8894217a61058b0ac838ea292710915b108ad0417eb72b6302ddaf9e3d2,PodSandboxId:3d5fd5064cf9db355f19293b3000fc3efc42b727a4a120cc50a3d1fa129e96a5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713229564127266379,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-5nnpv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 3350aca5-639e-44a1-bd84-d1e4b6486143,},Annotations:map[string]string{io.kubernetes.container.hash: 790c3ff0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ffc152b91a92e5cdc34e606f40d065686234148d7797e82423044fa41c461cd,PodSandboxId:c56c92061323a12b7649ec6dd0fb3d46f1f73c566d613aae3feec63a83f4aae8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING
,CreatedAt:1713229563406353932,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mg5km,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74764194-1f31-40b1-90b5-497e248ab7da,},Annotations:map[string]string{io.kubernetes.container.hash: 37bec7a6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:790f3485688cb7e8ce2b6eaa4acfac3f546e15f3b3d3011fb7d3babb7e28d508,PodSandboxId:9b015167635ab9ad101c5aa226284ddd0122e478292139bc942af14687c8491e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:171322954370919875
8,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-653942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f789bc57c1f4c290ab8fd275d2010d6a,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73dd87507a5ddd51947b5702eca37ec8a3c1d395ce443b1ca82fbd1d955329e6,PodSandboxId:eb65098ef8e126b3bbf9beff1cb98e3d528f467ab1c0568770988d0414c6ff79,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:17132295436
97138964,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-653942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdee61dbfd28bab5575146238429925f,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cb8787026c8eadc21223e448145ec1f2032be0623aa0e3d20d4a5680f0d26fb,PodSandboxId:177c67d63aab2abb2c6c4a6962a793f26a17c920f511526d529265155cb89ce4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:17132
29543671489451,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-653942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 027e6feb7cb85911f362954ce5f74701,},Annotations:map[string]string{io.kubernetes.container.hash: 41d79b00,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e135f634e26f7cb1cb96c36197c8120329ba60121e04fcf857d389b138d5879,PodSandboxId:697d63ff93426c3e8636123ffb9438aa3699fb072516fb61f83171ed69528a5c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713229543615136997,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-653942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42a0243852318f4be1b779e458bfa57d,},Annotations:map[string]string{io.kubernetes.container.hash: febc2576,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4ccaef892bf194c7025d3249065ddab09c8dd78f7be86a4b7b0aa63921817f7,PodSandboxId:a3ce46d6060154a9c776b7a99b0ee19bb2a188d3c754c8c762fe719488763730,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1713229247881428557,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-653942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42a0243852318f4be1b779e458bfa57d,},Annotations:map[string]string{io.kubernetes.container.hash: febc2576,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f89a15db-3bff-45f8-b8a4-3275362bf34b name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:20:39 default-k8s-diff-port-653942 crio[729]: time="2024-04-16 01:20:39.155522119Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=924aa895-c58d-46de-a7c9-bfb6729022c6 name=/runtime.v1.RuntimeService/Version
	Apr 16 01:20:39 default-k8s-diff-port-653942 crio[729]: time="2024-04-16 01:20:39.155624702Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=924aa895-c58d-46de-a7c9-bfb6729022c6 name=/runtime.v1.RuntimeService/Version
	Apr 16 01:20:39 default-k8s-diff-port-653942 crio[729]: time="2024-04-16 01:20:39.157143299Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=77446816-c3b1-4564-9f86-52cbadd48fe5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:20:39 default-k8s-diff-port-653942 crio[729]: time="2024-04-16 01:20:39.157654195Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713230439157632043,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:130129,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=77446816-c3b1-4564-9f86-52cbadd48fe5 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:20:39 default-k8s-diff-port-653942 crio[729]: time="2024-04-16 01:20:39.158120096Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8eb3736b-9672-4380-98ce-69f1ba719dd7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:20:39 default-k8s-diff-port-653942 crio[729]: time="2024-04-16 01:20:39.158173321Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8eb3736b-9672-4380-98ce-69f1ba719dd7 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:20:39 default-k8s-diff-port-653942 crio[729]: time="2024-04-16 01:20:39.158366451Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c4ddd594d7334d0604fd41c36bda70b56c09e95f393c080374978e5783c53f6d,PodSandboxId:43d8a636bc0f240f869482ade4f50c4f032894ad853abe8d6d28ebaea502c41b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1713229565603417369,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d131c1fc-9124-4b46-a16f-a8fb5029a57b,},Annotations:map[string]string{io.kubernetes.container.hash: 27457e9a,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c41790569cbe9a105aa5b3e904bdf461b0c6b1b9e64053f82f73cdff95cea28,PodSandboxId:61a7932db1c4e83f4cf320bbce30df3b5a9b3ab4ac956460df9324b30e32f5f1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713229564042917327,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-zpnhs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 990672b6-bb3a-4f91-8de7-7c2ec224c94a,},Annotations:map[string]string{io.kubernetes.container.hash: ad74a928,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5b1e8894217a61058b0ac838ea292710915b108ad0417eb72b6302ddaf9e3d2,PodSandboxId:3d5fd5064cf9db355f19293b3000fc3efc42b727a4a120cc50a3d1fa129e96a5,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1713229564127266379,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-5nnpv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 3350aca5-639e-44a1-bd84-d1e4b6486143,},Annotations:map[string]string{io.kubernetes.container.hash: 790c3ff0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ffc152b91a92e5cdc34e606f40d065686234148d7797e82423044fa41c461cd,PodSandboxId:c56c92061323a12b7649ec6dd0fb3d46f1f73c566d613aae3feec63a83f4aae8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,State:CONTAINER_RUNNING
,CreatedAt:1713229563406353932,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mg5km,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74764194-1f31-40b1-90b5-497e248ab7da,},Annotations:map[string]string{io.kubernetes.container.hash: 37bec7a6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:790f3485688cb7e8ce2b6eaa4acfac3f546e15f3b3d3011fb7d3babb7e28d508,PodSandboxId:9b015167635ab9ad101c5aa226284ddd0122e478292139bc942af14687c8491e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,State:CONTAINER_RUNNING,CreatedAt:171322954370919875
8,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-653942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f789bc57c1f4c290ab8fd275d2010d6a,},Annotations:map[string]string{io.kubernetes.container.hash: be150834,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73dd87507a5ddd51947b5702eca37ec8a3c1d395ce443b1ca82fbd1d955329e6,PodSandboxId:eb65098ef8e126b3bbf9beff1cb98e3d528f467ab1c0568770988d0414c6ff79,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,State:CONTAINER_RUNNING,CreatedAt:17132295436
97138964,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-653942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdee61dbfd28bab5575146238429925f,},Annotations:map[string]string{io.kubernetes.container.hash: 2d2557ee,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6cb8787026c8eadc21223e448145ec1f2032be0623aa0e3d20d4a5680f0d26fb,PodSandboxId:177c67d63aab2abb2c6c4a6962a793f26a17c920f511526d529265155cb89ce4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:17132
29543671489451,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-653942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 027e6feb7cb85911f362954ce5f74701,},Annotations:map[string]string{io.kubernetes.container.hash: 41d79b00,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e135f634e26f7cb1cb96c36197c8120329ba60121e04fcf857d389b138d5879,PodSandboxId:697d63ff93426c3e8636123ffb9438aa3699fb072516fb61f83171ed69528a5c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_RUNNING,CreatedAt:1713229543615136997,Labels:
map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-653942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42a0243852318f4be1b779e458bfa57d,},Annotations:map[string]string{io.kubernetes.container.hash: febc2576,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4ccaef892bf194c7025d3249065ddab09c8dd78f7be86a4b7b0aa63921817f7,PodSandboxId:a3ce46d6060154a9c776b7a99b0ee19bb2a188d3c754c8c762fe719488763730,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,State:CONTAINER_EXITED,CreatedAt:1713229247881428557,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-653942,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42a0243852318f4be1b779e458bfa57d,},Annotations:map[string]string{io.kubernetes.container.hash: febc2576,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8eb3736b-9672-4380-98ce-69f1ba719dd7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c4ddd594d7334       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   43d8a636bc0f2       storage-provisioner
	a5b1e8894217a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   3d5fd5064cf9d       coredns-76f75df574-5nnpv
	9c41790569cbe       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   61a7932db1c4e       coredns-76f75df574-zpnhs
	7ffc152b91a92       a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392   14 minutes ago      Running             kube-proxy                0                   c56c92061323a       kube-proxy-mg5km
	790f3485688cb       8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b   14 minutes ago      Running             kube-scheduler            2                   9b015167635ab       kube-scheduler-default-k8s-diff-port-653942
	73dd87507a5dd       6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3   14 minutes ago      Running             kube-controller-manager   2                   eb65098ef8e12       kube-controller-manager-default-k8s-diff-port-653942
	6cb8787026c8e       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   14 minutes ago      Running             etcd                      2                   177c67d63aab2       etcd-default-k8s-diff-port-653942
	8e135f634e26f       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   14 minutes ago      Running             kube-apiserver            2                   697d63ff93426       kube-apiserver-default-k8s-diff-port-653942
	d4ccaef892bf1       39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533   19 minutes ago      Exited              kube-apiserver            1                   a3ce46d606015       kube-apiserver-default-k8s-diff-port-653942
	
	
	==> coredns [9c41790569cbe9a105aa5b3e904bdf461b0c6b1b9e64053f82f73cdff95cea28] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [a5b1e8894217a61058b0ac838ea292710915b108ad0417eb72b6302ddaf9e3d2] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-653942
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-653942
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e98a202b00bb48daab8bbee2f0c7bcf1556f8388
	                    minikube.k8s.io/name=default-k8s-diff-port-653942
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_16T01_05_49_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 16 Apr 2024 01:05:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-653942
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 16 Apr 2024 01:20:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 16 Apr 2024 01:16:22 +0000   Tue, 16 Apr 2024 01:05:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 16 Apr 2024 01:16:22 +0000   Tue, 16 Apr 2024 01:05:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 16 Apr 2024 01:16:22 +0000   Tue, 16 Apr 2024 01:05:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 16 Apr 2024 01:16:22 +0000   Tue, 16 Apr 2024 01:05:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.216
	  Hostname:    default-k8s-diff-port-653942
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8a847583a5c44672beb47f4464de43eb
	  System UUID:                8a847583-a5c4-4672-beb4-7f4464de43eb
	  Boot ID:                    46ed85a2-6e5a-4b5c-9aa4-3746289b10c2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.3
	  Kube-Proxy Version:         v1.29.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-5nnpv                                 100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-76f75df574-zpnhs                                 100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-default-k8s-diff-port-653942                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-default-k8s-diff-port-653942              250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-653942     200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-mg5km                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-default-k8s-diff-port-653942              100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-57f55c9bc5-6jn29                          100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node default-k8s-diff-port-653942 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node default-k8s-diff-port-653942 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node default-k8s-diff-port-653942 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m                kubelet          Node default-k8s-diff-port-653942 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                kubelet          Node default-k8s-diff-port-653942 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                kubelet          Node default-k8s-diff-port-653942 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             14m                kubelet          Node default-k8s-diff-port-653942 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                14m                kubelet          Node default-k8s-diff-port-653942 status is now: NodeReady
	  Normal  RegisteredNode           14m                node-controller  Node default-k8s-diff-port-653942 event: Registered Node default-k8s-diff-port-653942 in Controller
	
	
	==> dmesg <==
	[  +0.042127] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.810702] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.843816] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.721447] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.796486] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.059330] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070969] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +0.171029] systemd-fstab-generator[673]: Ignoring "noauto" option for root device
	[  +0.148114] systemd-fstab-generator[685]: Ignoring "noauto" option for root device
	[  +0.309523] systemd-fstab-generator[714]: Ignoring "noauto" option for root device
	[  +4.813783] systemd-fstab-generator[811]: Ignoring "noauto" option for root device
	[  +0.061111] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.991682] systemd-fstab-generator[934]: Ignoring "noauto" option for root device
	[  +5.617745] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.649493] kauditd_printk_skb: 79 callbacks suppressed
	[Apr16 01:01] kauditd_printk_skb: 2 callbacks suppressed
	[Apr16 01:05] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.989661] systemd-fstab-generator[3584]: Ignoring "noauto" option for root device
	[  +4.767032] kauditd_printk_skb: 56 callbacks suppressed
	[  +2.519695] systemd-fstab-generator[3912]: Ignoring "noauto" option for root device
	[Apr16 01:06] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.243119] systemd-fstab-generator[4215]: Ignoring "noauto" option for root device
	[Apr16 01:07] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [6cb8787026c8eadc21223e448145ec1f2032be0623aa0e3d20d4a5680f0d26fb] <==
	{"level":"info","ts":"2024-04-16T01:05:44.234477Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"d4554daae9381f94","initial-advertise-peer-urls":["https://192.168.50.216:2380"],"listen-peer-urls":["https://192.168.50.216:2380"],"advertise-client-urls":["https://192.168.50.216:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.216:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-16T01:05:44.234539Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-16T01:05:44.234622Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.216:2380"}
	{"level":"info","ts":"2024-04-16T01:05:44.234656Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.216:2380"}
	{"level":"info","ts":"2024-04-16T01:05:45.14099Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4554daae9381f94 is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-16T01:05:45.141142Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4554daae9381f94 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-16T01:05:45.141208Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4554daae9381f94 received MsgPreVoteResp from d4554daae9381f94 at term 1"}
	{"level":"info","ts":"2024-04-16T01:05:45.141254Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4554daae9381f94 became candidate at term 2"}
	{"level":"info","ts":"2024-04-16T01:05:45.141278Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4554daae9381f94 received MsgVoteResp from d4554daae9381f94 at term 2"}
	{"level":"info","ts":"2024-04-16T01:05:45.141305Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d4554daae9381f94 became leader at term 2"}
	{"level":"info","ts":"2024-04-16T01:05:45.141331Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d4554daae9381f94 elected leader d4554daae9381f94 at term 2"}
	{"level":"info","ts":"2024-04-16T01:05:45.142908Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"d4554daae9381f94","local-member-attributes":"{Name:default-k8s-diff-port-653942 ClientURLs:[https://192.168.50.216:2379]}","request-path":"/0/members/d4554daae9381f94/attributes","cluster-id":"a1cf388ad59b0b48","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-16T01:05:45.143137Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T01:05:45.143437Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-16T01:05:45.1439Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-16T01:05:45.145546Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-16T01:05:45.145625Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-16T01:05:45.156795Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-16T01:05:45.145659Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a1cf388ad59b0b48","local-member-id":"d4554daae9381f94","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T01:05:45.15704Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T01:05:45.157111Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-16T01:05:45.147133Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.216:2379"}
	{"level":"info","ts":"2024-04-16T01:15:45.176144Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":718}
	{"level":"info","ts":"2024-04-16T01:15:45.186008Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":718,"took":"9.463036ms","hash":3859155303,"current-db-size-bytes":2281472,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":2281472,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-04-16T01:15:45.186074Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3859155303,"revision":718,"compact-revision":-1}
	
	
	==> kernel <==
	 01:20:39 up 20 min,  0 users,  load average: 0.39, 0.26, 0.20
	Linux default-k8s-diff-port-653942 5.10.207 #1 SMP Mon Apr 15 15:01:07 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [8e135f634e26f7cb1cb96c36197c8120329ba60121e04fcf857d389b138d5879] <==
	I0416 01:13:47.545475       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 01:15:46.545219       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 01:15:46.545343       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0416 01:15:47.546281       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 01:15:47.546347       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0416 01:15:47.546360       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 01:15:47.546300       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 01:15:47.546443       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0416 01:15:47.547657       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 01:16:47.547115       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 01:16:47.547432       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0416 01:16:47.547469       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 01:16:47.548241       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 01:16:47.548341       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0416 01:16:47.549365       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 01:18:47.548891       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 01:18:47.548967       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0416 01:18:47.548976       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0416 01:18:47.549462       1 handler_proxy.go:93] no RequestInfo found in the context
	E0416 01:18:47.549550       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0416 01:18:47.550176       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [d4ccaef892bf194c7025d3249065ddab09c8dd78f7be86a4b7b0aa63921817f7] <==
	W0416 01:05:34.695812       1 logging.go:59] [core] [Channel #199 SubChannel #200] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:34.752678       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:34.769653       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:34.775305       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:34.793993       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:34.838074       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:34.900339       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:35.017275       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:35.070679       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:35.079458       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:35.106085       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:35.222459       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:35.230706       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:35.279821       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:35.345946       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:35.395875       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:35.589005       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:35.664244       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:35.666626       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:35.717582       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:35.804119       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:35.983496       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:36.125700       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:36.193766       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0416 01:05:38.329485       1 logging.go:59] [core] [Channel #199 SubChannel #200] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [73dd87507a5ddd51947b5702eca37ec8a3c1d395ce443b1ca82fbd1d955329e6] <==
	I0416 01:15:03.464341       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 01:15:32.965923       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:15:33.472974       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 01:16:02.973168       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:16:03.482189       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 01:16:32.978511       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:16:33.490872       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0416 01:17:01.919337       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="85.248µs"
	E0416 01:17:02.983816       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:17:03.498236       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0416 01:17:12.919627       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="136.666µs"
	E0416 01:17:32.989093       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:17:33.506323       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 01:18:02.995630       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:18:03.514305       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 01:18:33.001360       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:18:33.522498       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 01:19:03.013988       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:19:03.532455       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 01:19:33.018861       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:19:33.540542       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 01:20:03.026276       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:20:03.549176       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0416 01:20:33.036824       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0416 01:20:33.558792       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [7ffc152b91a92e5cdc34e606f40d065686234148d7797e82423044fa41c461cd] <==
	I0416 01:06:03.771090       1 server_others.go:72] "Using iptables proxy"
	I0416 01:06:03.797866       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.50.216"]
	I0416 01:06:04.044949       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0416 01:06:04.044995       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0416 01:06:04.045014       1 server_others.go:168] "Using iptables Proxier"
	I0416 01:06:04.052635       1 proxier.go:245] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0416 01:06:04.052974       1 server.go:865] "Version info" version="v1.29.3"
	I0416 01:06:04.053009       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0416 01:06:04.073548       1 config.go:188] "Starting service config controller"
	I0416 01:06:04.073610       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0416 01:06:04.073639       1 config.go:97] "Starting endpoint slice config controller"
	I0416 01:06:04.073643       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0416 01:06:04.079137       1 config.go:315] "Starting node config controller"
	I0416 01:06:04.079185       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0416 01:06:04.174853       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0416 01:06:04.174922       1 shared_informer.go:318] Caches are synced for service config
	I0416 01:06:04.180343       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [790f3485688cb7e8ce2b6eaa4acfac3f546e15f3b3d3011fb7d3babb7e28d508] <==
	W0416 01:05:46.569587       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0416 01:05:46.569618       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0416 01:05:47.434344       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0416 01:05:47.434456       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0416 01:05:47.471390       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0416 01:05:47.471452       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0416 01:05:47.537150       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0416 01:05:47.537203       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0416 01:05:47.543626       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0416 01:05:47.543650       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0416 01:05:47.672283       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0416 01:05:47.672348       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0416 01:05:47.691142       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0416 01:05:47.691193       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0416 01:05:47.708970       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0416 01:05:47.709067       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0416 01:05:47.821929       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0416 01:05:47.821979       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0416 01:05:47.838840       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0416 01:05:47.838905       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0416 01:05:47.839038       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0416 01:05:47.839103       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0416 01:05:47.892166       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0416 01:05:47.892258       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0416 01:05:50.756132       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
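The "forbidden" list/watch failures above are typical of kube-scheduler coming up before the API server has reconciled its bootstrap RBAC bindings; once the client-ca informer cache syncs (last line) no further errors are logged. A minimal way to double-check the scheduler's permissions on a live cluster, assuming kubectl access to the same context (resource and user names taken from the log above):

    kubectl auth can-i list replicasets.apps --as=system:kube-scheduler
    kubectl auth can-i watch csinodes.storage.k8s.io --as=system:kube-scheduler

Both commands should print "yes" once the default ClusterRoleBindings are in place.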
	
	
	==> kubelet <==
	Apr 16 01:17:49 default-k8s-diff-port-653942 kubelet[3919]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 01:17:50 default-k8s-diff-port-653942 kubelet[3919]: E0416 01:17:50.905146    3919 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6jn29" podUID="1eec2ffb-ce59-45cb-b6b4-cd010549510e"
	Apr 16 01:18:01 default-k8s-diff-port-653942 kubelet[3919]: E0416 01:18:01.902701    3919 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6jn29" podUID="1eec2ffb-ce59-45cb-b6b4-cd010549510e"
	Apr 16 01:18:16 default-k8s-diff-port-653942 kubelet[3919]: E0416 01:18:16.902819    3919 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6jn29" podUID="1eec2ffb-ce59-45cb-b6b4-cd010549510e"
	Apr 16 01:18:29 default-k8s-diff-port-653942 kubelet[3919]: E0416 01:18:29.904531    3919 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6jn29" podUID="1eec2ffb-ce59-45cb-b6b4-cd010549510e"
	Apr 16 01:18:41 default-k8s-diff-port-653942 kubelet[3919]: E0416 01:18:41.902385    3919 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6jn29" podUID="1eec2ffb-ce59-45cb-b6b4-cd010549510e"
	Apr 16 01:18:49 default-k8s-diff-port-653942 kubelet[3919]: E0416 01:18:49.917151    3919 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 01:18:49 default-k8s-diff-port-653942 kubelet[3919]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 01:18:49 default-k8s-diff-port-653942 kubelet[3919]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 01:18:49 default-k8s-diff-port-653942 kubelet[3919]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 01:18:49 default-k8s-diff-port-653942 kubelet[3919]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 01:18:52 default-k8s-diff-port-653942 kubelet[3919]: E0416 01:18:52.902460    3919 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6jn29" podUID="1eec2ffb-ce59-45cb-b6b4-cd010549510e"
	Apr 16 01:19:06 default-k8s-diff-port-653942 kubelet[3919]: E0416 01:19:06.902706    3919 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6jn29" podUID="1eec2ffb-ce59-45cb-b6b4-cd010549510e"
	Apr 16 01:19:17 default-k8s-diff-port-653942 kubelet[3919]: E0416 01:19:17.902602    3919 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6jn29" podUID="1eec2ffb-ce59-45cb-b6b4-cd010549510e"
	Apr 16 01:19:29 default-k8s-diff-port-653942 kubelet[3919]: E0416 01:19:29.903160    3919 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6jn29" podUID="1eec2ffb-ce59-45cb-b6b4-cd010549510e"
	Apr 16 01:19:43 default-k8s-diff-port-653942 kubelet[3919]: E0416 01:19:43.903091    3919 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6jn29" podUID="1eec2ffb-ce59-45cb-b6b4-cd010549510e"
	Apr 16 01:19:49 default-k8s-diff-port-653942 kubelet[3919]: E0416 01:19:49.918538    3919 iptables.go:575] "Could not set up iptables canary" err=<
	Apr 16 01:19:49 default-k8s-diff-port-653942 kubelet[3919]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 16 01:19:49 default-k8s-diff-port-653942 kubelet[3919]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 16 01:19:49 default-k8s-diff-port-653942 kubelet[3919]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 16 01:19:49 default-k8s-diff-port-653942 kubelet[3919]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 16 01:19:56 default-k8s-diff-port-653942 kubelet[3919]: E0416 01:19:56.903598    3919 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6jn29" podUID="1eec2ffb-ce59-45cb-b6b4-cd010549510e"
	Apr 16 01:20:09 default-k8s-diff-port-653942 kubelet[3919]: E0416 01:20:09.903098    3919 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6jn29" podUID="1eec2ffb-ce59-45cb-b6b4-cd010549510e"
	Apr 16 01:20:23 default-k8s-diff-port-653942 kubelet[3919]: E0416 01:20:23.902646    3919 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6jn29" podUID="1eec2ffb-ce59-45cb-b6b4-cd010549510e"
	Apr 16 01:20:36 default-k8s-diff-port-653942 kubelet[3919]: E0416 01:20:36.901586    3919 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-6jn29" podUID="1eec2ffb-ce59-45cb-b6b4-cd010549510e"
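The repeating ImagePullBackOff above comes from the metrics-server pod referencing fake.domain/registry.k8s.io/echoserver:1.4, which appears to be a deliberately unresolvable registry used by these tests, so the kubelet can never pull the image. A hedged way to confirm the failing image reference on the same cluster (the pod name is taken from the log, the metrics-server deployment name is inferred from it, and kubectl access to the default-k8s-diff-port-653942 context is assumed):

    kubectl -n kube-system describe pod metrics-server-57f55c9bc5-6jn29
    kubectl -n kube-system get deployment metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'

The Events section of the describe output should show the same "Back-off pulling image" message.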
	
	
	==> storage-provisioner [c4ddd594d7334d0604fd41c36bda70b56c09e95f393c080374978e5783c53f6d] <==
	I0416 01:06:05.701910       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0416 01:06:05.716027       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0416 01:06:05.716241       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0416 01:06:05.730425       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0416 01:06:05.730608       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f16da03d-fdc8-497a-a095-8aa7bb11d1c5", APIVersion:"v1", ResourceVersion:"459", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-653942_26b7dafc-dcf7-4430-9baa-acde71280843 became leader
	I0416 01:06:05.730983       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-653942_26b7dafc-dcf7-4430-9baa-acde71280843!
	I0416 01:06:05.831947       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-653942_26b7dafc-dcf7-4430-9baa-acde71280843!
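The storage-provisioner log above shows a normal start: it acquires the kube-system/k8s.io-minikube-hostpath lock via Endpoints-based leader election and then starts the hostpath provisioner controller. One way to inspect the current leader record, with the endpoint name taken from the log (a sketch, assuming kubectl access to the same context):

    kubectl -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml

The control-plane.alpha.kubernetes.io/leader annotation should name the default-k8s-diff-port-653942_26b7dafc-dcf7-4430-9baa-acde71280843 identity that became leader above.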
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-653942 -n default-k8s-diff-port-653942
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-653942 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-6jn29
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-653942 describe pod metrics-server-57f55c9bc5-6jn29
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-653942 describe pod metrics-server-57f55c9bc5-6jn29: exit status 1 (62.097197ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-6jn29" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-653942 describe pod metrics-server-57f55c9bc5-6jn29: exit status 1
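One likely reason the describe fails with NotFound even though the pod was just listed as non-running: the describe command above is issued without a namespace, so it looks in default, while the metrics-server pod lives in kube-system. Re-running it with the namespace should locate the pod if it still exists (a sketch, not part of the test output):

    kubectl --context default-k8s-diff-port-653942 -n kube-system describe pod metrics-server-57f55c9bc5-6jn29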
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (328.90s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (178.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
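The warnings that follow are the poll loop behind that wait: each attempt lists kubernetes-dashboard pods by label and gets connection refused from the API server endpoint 192.168.83.98:8443, i.e. nothing was accepting connections there during those attempts. A rough way to reproduce one probe by hand, with host and port taken from the warnings (the kubectl context name below is a placeholder, since the exact profile suffix is not shown in this excerpt):

    curl -k https://192.168.83.98:8443/version
    kubectl --context old-k8s-version-<profile> -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

A "connection refused" from curl confirms the endpoint is down; any HTTP response, even 401 or 403, would mean the apiserver is listening again.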
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
E0416 01:17:20.169645   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
E0416 01:18:58.680162   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/functional-596616/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.83.98:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.83.98:8443: connect: connection refused
[the identical "connection refused" warning from helpers_test.go:329 repeated 63 further times during the 9m0s wait; duplicate lines omitted]
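The warnings above come from the test helper repeatedly listing pods in the kubernetes-dashboard namespace by the k8s-app=kubernetes-dashboard label selector until they are healthy or the wait deadline expires; each failed list is logged and retried. The following is only a minimal illustrative sketch of that style of poll written against client-go, not the actual code in helpers_test.go (package name, function names, and the 5-second retry interval are assumptions):

package dashboardwait // illustrative name, not part of minikube

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForDashboard polls the kubernetes-dashboard namespace for pods matching
// k8s-app=kubernetes-dashboard until every pod is Running or ctx expires.
// When the apiserver is down, the List call fails much like the
// "connection refused" warnings above, and the loop simply retries.
func waitForDashboard(ctx context.Context, cs kubernetes.Interface) error {
	for {
		pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(ctx,
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err != nil {
			fmt.Println("WARNING: pod list returned:", err)
		} else if len(pods.Items) > 0 && allRunning(pods.Items) {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // surfaces as "context deadline exceeded" after the wait window
		case <-time.After(5 * time.Second): // assumed cadence; the real helper may differ
		}
	}
}

func allRunning(pods []corev1.Pod) bool {
	for _, p := range pods {
		if p.Status.Phase != corev1.PodRunning {
			return false
		}
	}
	return true
}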
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-800769 -n old-k8s-version-800769
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-800769 -n old-k8s-version-800769: exit status 2 (248.083507ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-800769" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-800769 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-800769 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.405µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-800769 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
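Before attempting any kubectl-based follow-up (describing the dashboard-metrics-scraper deployment, checking addon images), the test gates on the profile's apiserver state via "status --format={{.APIServer}}", treating exit status 2 as tolerable since minikube uses it to signal stopped components. A hedged sketch of that gate is below; the package and helper names are illustrative assumptions, not minikube's test code:

package statusgate // illustrative name, not part of minikube's test suite

import (
	"fmt"
	"os/exec"
	"strings"
)

// apiserverRunning mirrors the gate visible in the log: query the profile's
// apiserver state with minikube status and only proceed with kubectl commands
// when it reports "Running". A non-zero exit (status 2) is not treated as a
// hard error, matching the "may be ok" note in the output above.
func apiserverRunning(profile string) bool {
	out, _ := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.APIServer}}", "-p", profile, "-n", profile).CombinedOutput()
	state := strings.TrimSpace(string(out))
	if state != "Running" {
		fmt.Printf("%q apiserver is not running, skipping kubectl commands (state=%q)\n",
			profile, state)
		return false
	}
	return true
}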
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-800769 -n old-k8s-version-800769
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-800769 -n old-k8s-version-800769: exit status 2 (237.779288ms)

-- stdout --
	Running
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-800769 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-800769 logs -n 25: (1.489576909s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	| delete  | -p cert-expiration-359535                              | cert-expiration-359535       | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:52 UTC | 16 Apr 24 00:52 UTC |
	| start   | -p newest-cni-012509 --memory=2200 --alsologtostderr   | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:52 UTC | 16 Apr 24 00:53 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |                |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |                |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2                      |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p newest-cni-012509             | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:53 UTC | 16 Apr 24 00:53 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p newest-cni-012509                                   | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:53 UTC | 16 Apr 24 00:53 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p newest-cni-012509                  | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:53 UTC | 16 Apr 24 00:53 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p newest-cni-012509 --memory=2200 --alsologtostderr   | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:53 UTC | 16 Apr 24 00:54 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |                |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |                |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |                |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |                |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2                      |                              |         |                |                     |                     |
	| image   | newest-cni-012509 image list                           | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 00:54 UTC |
	|         | --format=json                                          |                              |         |                |                     |                     |
	| pause   | -p newest-cni-012509                                   | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 00:54 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |                |                     |                     |
	| unpause | -p newest-cni-012509                                   | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 00:54 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |                |                     |                     |
	| delete  | -p newest-cni-012509                                   | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 00:54 UTC |
	| delete  | -p newest-cni-012509                                   | newest-cni-012509            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 00:54 UTC |
	| delete  | -p                                                     | disable-driver-mounts-988802 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 00:54 UTC |
	|         | disable-driver-mounts-988802                           |                              |         |                |                     |                     |
	| start   | -p embed-certs-617092                                  | embed-certs-617092           | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 00:56 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-653942       | default-k8s-diff-port-653942 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p no-preload-572602                  | no-preload-572602            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-653942 | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 01:06 UTC |
	|         | default-k8s-diff-port-653942                           |                              |         |                |                     |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-800769        | old-k8s-version-800769       | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| start   | -p no-preload-572602                                   | no-preload-572602            | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:54 UTC | 16 Apr 24 01:05 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |                |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2                      |                              |         |                |                     |                     |
	| addons  | enable metrics-server -p embed-certs-617092            | embed-certs-617092           | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:56 UTC | 16 Apr 24 00:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |                |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |                |                     |                     |
	| stop    | -p embed-certs-617092                                  | embed-certs-617092           | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:56 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| stop    | -p old-k8s-version-800769                              | old-k8s-version-800769       | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:56 UTC | 16 Apr 24 00:56 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |                |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-800769             | old-k8s-version-800769       | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:56 UTC | 16 Apr 24 00:56 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p old-k8s-version-800769                              | old-k8s-version-800769       | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:56 UTC |                     |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --kvm-network=default                                  |                              |         |                |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |                |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |                |                     |                     |
	|         | --keep-context=false                                   |                              |         |                |                     |                     |
	|         | --driver=kvm2                                          |                              |         |                |                     |                     |
	|         | --container-runtime=crio                               |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |                |                     |                     |
	| addons  | enable dashboard -p embed-certs-617092                 | embed-certs-617092           | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:58 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |                |                     |                     |
	| start   | -p embed-certs-617092                                  | embed-certs-617092           | jenkins | v1.33.0-beta.0 | 16 Apr 24 00:58 UTC | 16 Apr 24 01:05 UTC |
	|         | --memory=2200                                          |                              |         |                |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |                |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |                |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                           |                              |         |                |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/16 00:58:42
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0416 00:58:42.797832   62747 out.go:291] Setting OutFile to fd 1 ...
	I0416 00:58:42.797983   62747 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:58:42.797994   62747 out.go:304] Setting ErrFile to fd 2...
	I0416 00:58:42.797998   62747 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:58:42.798182   62747 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-7542/.minikube/bin
	I0416 00:58:42.798686   62747 out.go:298] Setting JSON to false
	I0416 00:58:42.799629   62747 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":6067,"bootTime":1713223056,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0416 00:58:42.799687   62747 start.go:139] virtualization: kvm guest
	I0416 00:58:42.801878   62747 out.go:177] * [embed-certs-617092] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0416 00:58:42.803202   62747 out.go:177]   - MINIKUBE_LOCATION=18647
	I0416 00:58:42.804389   62747 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 00:58:42.803288   62747 notify.go:220] Checking for updates...
	I0416 00:58:42.805742   62747 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0416 00:58:42.807023   62747 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18647-7542/.minikube
	I0416 00:58:42.808185   62747 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0416 00:58:42.809402   62747 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 00:58:42.811188   62747 config.go:182] Loaded profile config "embed-certs-617092": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 00:58:42.811772   62747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:58:42.811833   62747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:58:42.826377   62747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44973
	I0416 00:58:42.826730   62747 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:58:42.827217   62747 main.go:141] libmachine: Using API Version  1
	I0416 00:58:42.827233   62747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:58:42.827541   62747 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:58:42.827737   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 00:58:42.827964   62747 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 00:58:42.828239   62747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:58:42.828274   62747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:58:42.842499   62747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34791
	I0416 00:58:42.842872   62747 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:58:42.843283   62747 main.go:141] libmachine: Using API Version  1
	I0416 00:58:42.843300   62747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:58:42.843636   62747 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:58:42.843830   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 00:58:42.874583   62747 out.go:177] * Using the kvm2 driver based on existing profile
	I0416 00:58:42.875910   62747 start.go:297] selected driver: kvm2
	I0416 00:58:42.875933   62747 start.go:901] validating driver "kvm2" against &{Name:embed-certs-617092 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-617092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.225 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 00:58:42.876072   62747 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 00:58:42.876741   62747 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 00:58:42.876826   62747 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18647-7542/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0416 00:58:42.890834   62747 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0416 00:58:42.891212   62747 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 00:58:42.891270   62747 cni.go:84] Creating CNI manager for ""
	I0416 00:58:42.891283   62747 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 00:58:42.891314   62747 start.go:340] cluster config:
	{Name:embed-certs-617092 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-617092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.225 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 00:58:42.891412   62747 iso.go:125] acquiring lock: {Name:mk848ef90fbc2a1876645fc8fc16af382c3bcaa9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0416 00:58:42.893179   62747 out.go:177] * Starting "embed-certs-617092" primary control-plane node in "embed-certs-617092" cluster
	I0416 00:58:42.894232   62747 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 00:58:42.894260   62747 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0416 00:58:42.894267   62747 cache.go:56] Caching tarball of preloaded images
	I0416 00:58:42.894353   62747 preload.go:173] Found /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0416 00:58:42.894365   62747 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on crio
	I0416 00:58:42.894458   62747 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092/config.json ...
	I0416 00:58:42.894628   62747 start.go:360] acquireMachinesLock for embed-certs-617092: {Name:mk92bff49461487f8cebf2747ccf61ccb9c772a2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 00:58:47.545405   61267 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.216:22: connect: no route to host
	I0416 00:58:50.617454   61267 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.216:22: connect: no route to host
	I0416 00:58:56.697459   61267 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.216:22: connect: no route to host
	I0416 00:58:59.769461   61267 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.216:22: connect: no route to host
	I0416 00:59:05.849462   61267 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.216:22: connect: no route to host
	I0416 00:59:08.921459   61267 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.216:22: connect: no route to host
	I0416 00:59:15.001430   61267 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.216:22: connect: no route to host
	I0416 00:59:21.078070   61500 start.go:364] duration metric: took 4m33.431027521s to acquireMachinesLock for "no-preload-572602"
	I0416 00:59:21.078134   61500 start.go:96] Skipping create...Using existing machine configuration
	I0416 00:59:21.078152   61500 fix.go:54] fixHost starting: 
	I0416 00:59:21.078760   61500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:59:21.078809   61500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:59:21.093476   61500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36767
	I0416 00:59:21.093934   61500 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:59:21.094422   61500 main.go:141] libmachine: Using API Version  1
	I0416 00:59:21.094448   61500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:59:21.094749   61500 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:59:21.094902   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 00:59:21.095048   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetState
	I0416 00:59:21.096678   61500 fix.go:112] recreateIfNeeded on no-preload-572602: state=Stopped err=<nil>
	I0416 00:59:21.096697   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	W0416 00:59:21.096846   61500 fix.go:138] unexpected machine state, will restart: <nil>
	I0416 00:59:21.098527   61500 out.go:177] * Restarting existing kvm2 VM for "no-preload-572602" ...
	I0416 00:59:18.073453   61267 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.216:22: connect: no route to host
	I0416 00:59:21.075633   61267 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 00:59:21.075671   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetMachineName
	I0416 00:59:21.075991   61267 buildroot.go:166] provisioning hostname "default-k8s-diff-port-653942"
	I0416 00:59:21.076014   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetMachineName
	I0416 00:59:21.076225   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 00:59:21.077923   61267 machine.go:97] duration metric: took 4m34.542024225s to provisionDockerMachine
	I0416 00:59:21.077967   61267 fix.go:56] duration metric: took 4m34.567596715s for fixHost
	I0416 00:59:21.077978   61267 start.go:83] releasing machines lock for "default-k8s-diff-port-653942", held for 4m34.567645643s
	W0416 00:59:21.078001   61267 start.go:713] error starting host: provision: host is not running
	W0416 00:59:21.078088   61267 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0416 00:59:21.078097   61267 start.go:728] Will try again in 5 seconds ...
	I0416 00:59:21.099788   61500 main.go:141] libmachine: (no-preload-572602) Calling .Start
	I0416 00:59:21.099966   61500 main.go:141] libmachine: (no-preload-572602) Ensuring networks are active...
	I0416 00:59:21.100656   61500 main.go:141] libmachine: (no-preload-572602) Ensuring network default is active
	I0416 00:59:21.100937   61500 main.go:141] libmachine: (no-preload-572602) Ensuring network mk-no-preload-572602 is active
	I0416 00:59:21.101282   61500 main.go:141] libmachine: (no-preload-572602) Getting domain xml...
	I0416 00:59:21.101905   61500 main.go:141] libmachine: (no-preload-572602) Creating domain...
	I0416 00:59:22.294019   61500 main.go:141] libmachine: (no-preload-572602) Waiting to get IP...
	I0416 00:59:22.294922   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:22.295294   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:22.295349   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:22.295262   62936 retry.go:31] will retry after 220.952312ms: waiting for machine to come up
	I0416 00:59:22.517753   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:22.518334   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:22.518358   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:22.518287   62936 retry.go:31] will retry after 377.547009ms: waiting for machine to come up
	I0416 00:59:26.081716   61267 start.go:360] acquireMachinesLock for default-k8s-diff-port-653942: {Name:mk92bff49461487f8cebf2747ccf61ccb9c772a2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0416 00:59:22.897924   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:22.898442   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:22.898465   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:22.898394   62936 retry.go:31] will retry after 450.415086ms: waiting for machine to come up
	I0416 00:59:23.349893   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:23.350383   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:23.350420   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:23.350333   62936 retry.go:31] will retry after 385.340718ms: waiting for machine to come up
	I0416 00:59:23.736854   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:23.737225   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:23.737262   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:23.737205   62936 retry.go:31] will retry after 696.175991ms: waiting for machine to come up
	I0416 00:59:24.435231   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:24.435587   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:24.435616   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:24.435557   62936 retry.go:31] will retry after 644.402152ms: waiting for machine to come up
	I0416 00:59:25.081355   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:25.081660   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:25.081697   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:25.081626   62936 retry.go:31] will retry after 809.585997ms: waiting for machine to come up
	I0416 00:59:25.892402   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:25.892767   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:25.892797   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:25.892722   62936 retry.go:31] will retry after 1.07477705s: waiting for machine to come up
	I0416 00:59:26.969227   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:26.969617   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:26.969646   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:26.969561   62936 retry.go:31] will retry after 1.243937595s: waiting for machine to come up
	I0416 00:59:28.214995   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:28.215412   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:28.215433   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:28.215364   62936 retry.go:31] will retry after 1.775188434s: waiting for machine to come up
	I0416 00:59:29.993420   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:29.993825   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:29.993853   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:29.993779   62936 retry.go:31] will retry after 2.73873778s: waiting for machine to come up
	I0416 00:59:32.735350   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:32.735758   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:32.735809   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:32.735721   62936 retry.go:31] will retry after 2.208871896s: waiting for machine to come up
	I0416 00:59:34.947005   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:34.947400   61500 main.go:141] libmachine: (no-preload-572602) DBG | unable to find current IP address of domain no-preload-572602 in network mk-no-preload-572602
	I0416 00:59:34.947431   61500 main.go:141] libmachine: (no-preload-572602) DBG | I0416 00:59:34.947358   62936 retry.go:31] will retry after 4.484880009s: waiting for machine to come up
	I0416 00:59:40.669954   62139 start.go:364] duration metric: took 3m18.466569456s to acquireMachinesLock for "old-k8s-version-800769"
	I0416 00:59:40.670015   62139 start.go:96] Skipping create...Using existing machine configuration
	I0416 00:59:40.670038   62139 fix.go:54] fixHost starting: 
	I0416 00:59:40.670411   62139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:59:40.670448   62139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:59:40.686269   62139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39043
	I0416 00:59:40.686633   62139 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:59:40.687125   62139 main.go:141] libmachine: Using API Version  1
	I0416 00:59:40.687162   62139 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:59:40.687481   62139 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:59:40.687672   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	I0416 00:59:40.687838   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetState
	I0416 00:59:40.689108   62139 fix.go:112] recreateIfNeeded on old-k8s-version-800769: state=Stopped err=<nil>
	I0416 00:59:40.689132   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	W0416 00:59:40.689286   62139 fix.go:138] unexpected machine state, will restart: <nil>
	I0416 00:59:40.691869   62139 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-800769" ...
	I0416 00:59:40.693292   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .Start
	I0416 00:59:40.693450   62139 main.go:141] libmachine: (old-k8s-version-800769) Ensuring networks are active...
	I0416 00:59:40.694152   62139 main.go:141] libmachine: (old-k8s-version-800769) Ensuring network default is active
	I0416 00:59:40.694457   62139 main.go:141] libmachine: (old-k8s-version-800769) Ensuring network mk-old-k8s-version-800769 is active
	I0416 00:59:40.694883   62139 main.go:141] libmachine: (old-k8s-version-800769) Getting domain xml...
	I0416 00:59:40.695720   62139 main.go:141] libmachine: (old-k8s-version-800769) Creating domain...
	I0416 00:59:41.913001   62139 main.go:141] libmachine: (old-k8s-version-800769) Waiting to get IP...
	I0416 00:59:41.913874   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:41.914260   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:41.914318   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:41.914237   63071 retry.go:31] will retry after 261.032707ms: waiting for machine to come up
	I0416 00:59:39.436244   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.436664   61500 main.go:141] libmachine: (no-preload-572602) Found IP for machine: 192.168.39.121
	I0416 00:59:39.436686   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has current primary IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.436694   61500 main.go:141] libmachine: (no-preload-572602) Reserving static IP address...
	I0416 00:59:39.437114   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "no-preload-572602", mac: "52:54:00:fb:a5:f3", ip: "192.168.39.121"} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:39.437151   61500 main.go:141] libmachine: (no-preload-572602) Reserved static IP address: 192.168.39.121
	I0416 00:59:39.437183   61500 main.go:141] libmachine: (no-preload-572602) DBG | skip adding static IP to network mk-no-preload-572602 - found existing host DHCP lease matching {name: "no-preload-572602", mac: "52:54:00:fb:a5:f3", ip: "192.168.39.121"}
	I0416 00:59:39.437197   61500 main.go:141] libmachine: (no-preload-572602) Waiting for SSH to be available...
	I0416 00:59:39.437215   61500 main.go:141] libmachine: (no-preload-572602) DBG | Getting to WaitForSSH function...
	I0416 00:59:39.439255   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.439613   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:39.439642   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.439723   61500 main.go:141] libmachine: (no-preload-572602) DBG | Using SSH client type: external
	I0416 00:59:39.439756   61500 main.go:141] libmachine: (no-preload-572602) DBG | Using SSH private key: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/no-preload-572602/id_rsa (-rw-------)
	I0416 00:59:39.439799   61500 main.go:141] libmachine: (no-preload-572602) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.121 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18647-7542/.minikube/machines/no-preload-572602/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0416 00:59:39.439822   61500 main.go:141] libmachine: (no-preload-572602) DBG | About to run SSH command:
	I0416 00:59:39.439835   61500 main.go:141] libmachine: (no-preload-572602) DBG | exit 0
	I0416 00:59:39.565190   61500 main.go:141] libmachine: (no-preload-572602) DBG | SSH cmd err, output: <nil>: 
	I0416 00:59:39.565584   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetConfigRaw
	I0416 00:59:39.566223   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetIP
	I0416 00:59:39.568572   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.568869   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:39.568906   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.569083   61500 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602/config.json ...
	I0416 00:59:39.569300   61500 machine.go:94] provisionDockerMachine start ...
	I0416 00:59:39.569318   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 00:59:39.569526   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:59:39.571536   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.571842   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:39.571868   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.572004   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 00:59:39.572189   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:39.572352   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:39.572505   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 00:59:39.572751   61500 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:39.572974   61500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I0416 00:59:39.572991   61500 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 00:59:39.681544   61500 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 00:59:39.681574   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetMachineName
	I0416 00:59:39.681845   61500 buildroot.go:166] provisioning hostname "no-preload-572602"
	I0416 00:59:39.681874   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetMachineName
	I0416 00:59:39.682088   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:59:39.684694   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.685029   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:39.685063   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.685259   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 00:59:39.685453   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:39.685608   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:39.685737   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 00:59:39.685887   61500 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:39.686066   61500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I0416 00:59:39.686090   61500 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-572602 && echo "no-preload-572602" | sudo tee /etc/hostname
	I0416 00:59:39.804124   61500 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-572602
	
	I0416 00:59:39.804149   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:59:39.807081   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.807447   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:39.807480   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.807651   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 00:59:39.807860   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:39.808048   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:39.808202   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 00:59:39.808393   61500 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:39.808618   61500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I0416 00:59:39.808644   61500 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-572602' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-572602/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-572602' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 00:59:39.921781   61500 main.go:141] libmachine: SSH cmd err, output: <nil>: 
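The hostname step above is issued through the external ssh client shown in the DBG lines. As a rough illustration only (not minikube's actual ssh_runner implementation), the same remote command could be sent from Go with golang.org/x/crypto/ssh, reusing the user, address and key path reported in the log:

// sketch: run the logged hostname command over SSH with key auth
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/18647-7542/.minikube/machines/no-preload-572602/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors -o StrictHostKeyChecking=no above
	}
	client, err := ssh.Dial("tcp", "192.168.39.121:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	// same idea as the command logged above: set the hostname and persist it
	out, err := sess.CombinedOutput(`sudo hostname no-preload-572602 && echo "no-preload-572602" | sudo tee /etc/hostname`)
	fmt.Printf("output: %q err: %v\n", out, err)
}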
	I0416 00:59:39.921824   61500 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18647-7542/.minikube CaCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18647-7542/.minikube}
	I0416 00:59:39.921847   61500 buildroot.go:174] setting up certificates
	I0416 00:59:39.921857   61500 provision.go:84] configureAuth start
	I0416 00:59:39.921872   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetMachineName
	I0416 00:59:39.922150   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetIP
	I0416 00:59:39.924726   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.925052   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:39.925081   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.925199   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:59:39.927315   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.927820   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:39.927869   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:39.927934   61500 provision.go:143] copyHostCerts
	I0416 00:59:39.928005   61500 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem, removing ...
	I0416 00:59:39.928031   61500 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem
	I0416 00:59:39.928122   61500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem (1082 bytes)
	I0416 00:59:39.928231   61500 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem, removing ...
	I0416 00:59:39.928241   61500 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem
	I0416 00:59:39.928284   61500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem (1123 bytes)
	I0416 00:59:39.928370   61500 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem, removing ...
	I0416 00:59:39.928379   61500 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem
	I0416 00:59:39.928428   61500 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem (1675 bytes)
	I0416 00:59:39.928498   61500 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem org=jenkins.no-preload-572602 san=[127.0.0.1 192.168.39.121 localhost minikube no-preload-572602]
	I0416 00:59:40.000129   61500 provision.go:177] copyRemoteCerts
	I0416 00:59:40.000200   61500 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 00:59:40.000236   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:59:40.002726   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.003028   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:40.003057   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.003168   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 00:59:40.003351   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:40.003471   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 00:59:40.003577   61500 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/no-preload-572602/id_rsa Username:docker}
	I0416 00:59:40.087468   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0416 00:59:40.115336   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0416 00:59:40.142695   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0416 00:59:40.169631   61500 provision.go:87] duration metric: took 247.759459ms to configureAuth
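The configureAuth step above regenerates a server certificate whose SANs cover the machine's IPs and DNS names (127.0.0.1, 192.168.39.121, localhost, minikube, no-preload-572602) before copying it to /etc/docker. A standard-library sketch of issuing such a certificate, with a throwaway in-memory CA standing in for the real ca.pem/ca-key.pem pair and PEM/file handling omitted:

// sketch: issue a server cert with the SANs listed in the log (errors mostly elided)
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// throwaway CA; minikube would load its persisted ca.pem / ca-key.pem instead
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// server certificate carrying the IP and DNS SANs from the provision.go line above
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-572602"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		DNSNames:     []string{"localhost", "minikube", "no-preload-572602"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.121")},
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Println("server cert DER bytes:", len(der), "err:", err)
}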
	I0416 00:59:40.169657   61500 buildroot.go:189] setting minikube options for container-runtime
	I0416 00:59:40.169824   61500 config.go:182] Loaded profile config "no-preload-572602": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0416 00:59:40.169906   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:59:40.172164   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.172503   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:40.172531   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.172689   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 00:59:40.172875   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:40.173033   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:40.173182   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 00:59:40.173311   61500 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:40.173465   61500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I0416 00:59:40.173480   61500 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0416 00:59:40.437143   61500 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0416 00:59:40.437182   61500 machine.go:97] duration metric: took 867.868152ms to provisionDockerMachine
	I0416 00:59:40.437194   61500 start.go:293] postStartSetup for "no-preload-572602" (driver="kvm2")
	I0416 00:59:40.437211   61500 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 00:59:40.437233   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 00:59:40.437536   61500 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 00:59:40.437564   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:59:40.440246   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.440596   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:40.440637   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.440759   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 00:59:40.440981   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:40.441186   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 00:59:40.441319   61500 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/no-preload-572602/id_rsa Username:docker}
	I0416 00:59:40.524157   61500 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 00:59:40.528556   61500 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 00:59:40.528580   61500 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/addons for local assets ...
	I0416 00:59:40.528647   61500 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/files for local assets ...
	I0416 00:59:40.528756   61500 filesync.go:149] local asset: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem -> 148972.pem in /etc/ssl/certs
	I0416 00:59:40.528877   61500 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 00:59:40.538275   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /etc/ssl/certs/148972.pem (1708 bytes)
	I0416 00:59:40.562693   61500 start.go:296] duration metric: took 125.48438ms for postStartSetup
	I0416 00:59:40.562728   61500 fix.go:56] duration metric: took 19.484586221s for fixHost
	I0416 00:59:40.562746   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:59:40.565410   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.565717   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:40.565756   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.565920   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 00:59:40.566103   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:40.566269   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:40.566438   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 00:59:40.566587   61500 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:40.566738   61500 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.39.121 22 <nil> <nil>}
	I0416 00:59:40.566749   61500 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 00:59:40.669778   61500 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713229180.641382554
	
	I0416 00:59:40.669802   61500 fix.go:216] guest clock: 1713229180.641382554
	I0416 00:59:40.669811   61500 fix.go:229] Guest: 2024-04-16 00:59:40.641382554 +0000 UTC Remote: 2024-04-16 00:59:40.56273146 +0000 UTC m=+293.069651959 (delta=78.651094ms)
	I0416 00:59:40.669839   61500 fix.go:200] guest clock delta is within tolerance: 78.651094ms
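The guest clock check above compares the timestamp returned by the VM (the garbled date +%!s(MISSING).%!N(MISSING) line appears to be the log formatter's rendering of date +%s.%N) against the controller's own clock and accepts the 78ms delta. A toy version of that tolerance test; the 2-second threshold here is an assumed placeholder, not minikube's actual limit:

// sketch: flag guest/host clock drift beyond an (assumed) tolerance
package main

import (
	"fmt"
	"time"
)

func withinTolerance(guest, host time.Time, tol time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tol
}

func main() {
	// value reported by the guest in the log: 1713229180.641382554
	guest := time.Unix(1713229180, 641382554)
	host := time.Now() // stand-in for the controller-side timestamp
	fmt.Println("within tolerance:", withinTolerance(guest, host, 2*time.Second))
}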
	I0416 00:59:40.669857   61500 start.go:83] releasing machines lock for "no-preload-572602", held for 19.591740017s
	I0416 00:59:40.669883   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 00:59:40.670163   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetIP
	I0416 00:59:40.672800   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.673187   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:40.673234   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.673386   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 00:59:40.673841   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 00:59:40.673993   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 00:59:40.674067   61500 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 00:59:40.674115   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:59:40.674155   61500 ssh_runner.go:195] Run: cat /version.json
	I0416 00:59:40.674174   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 00:59:40.676617   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.676776   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.677006   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:40.677030   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.677126   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 00:59:40.677277   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:40.677299   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:40.677336   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:40.677499   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 00:59:40.677511   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 00:59:40.677635   61500 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/no-preload-572602/id_rsa Username:docker}
	I0416 00:59:40.677768   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 00:59:40.678072   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 00:59:40.678224   61500 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/no-preload-572602/id_rsa Username:docker}
	I0416 00:59:40.787049   61500 ssh_runner.go:195] Run: systemctl --version
	I0416 00:59:40.793568   61500 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0416 00:59:40.941445   61500 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 00:59:40.949062   61500 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 00:59:40.949177   61500 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 00:59:40.966425   61500 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 00:59:40.966454   61500 start.go:494] detecting cgroup driver to use...
	I0416 00:59:40.966525   61500 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 00:59:40.985126   61500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 00:59:40.999931   61500 docker.go:217] disabling cri-docker service (if available) ...
	I0416 00:59:41.000004   61500 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 00:59:41.015597   61500 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 00:59:41.030610   61500 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 00:59:41.151240   61500 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 00:59:41.312384   61500 docker.go:233] disabling docker service ...
	I0416 00:59:41.312464   61500 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 00:59:41.329263   61500 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 00:59:41.345192   61500 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 00:59:41.463330   61500 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 00:59:41.595259   61500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0416 00:59:41.610495   61500 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 00:59:41.632527   61500 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0416 00:59:41.632580   61500 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:59:41.644625   61500 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 00:59:41.644723   61500 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:59:41.656056   61500 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:59:41.667069   61500 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:59:41.682783   61500 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 00:59:41.694760   61500 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:59:41.712505   61500 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:59:41.737338   61500 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 00:59:41.747518   61500 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 00:59:41.756586   61500 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0416 00:59:41.756656   61500 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0416 00:59:41.769230   61500 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 00:59:41.778424   61500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 00:59:41.894135   61500 ssh_runner.go:195] Run: sudo systemctl restart crio
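The run of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place: it pins pause_image to registry.k8s.io/pause:3.9, switches cgroup_manager to cgroupfs, drops any existing conmon_cgroup entry and re-adds conmon_cgroup = "pod", then restarts crio. The same rewrite expressed in memory with Go's regexp package, using an invented starting config purely for illustration:

// sketch: the logged sed edits re-expressed as string rewrites
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `pause_image = "registry.k8s.io/pause:3.7"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"`

	// 1. point cri-o at the desired pause image
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	// 2. switch the cgroup driver to cgroupfs
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// 3. drop any pre-existing conmon_cgroup line
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n?`).ReplaceAllString(conf, "")
	// 4. re-add conmon_cgroup directly after the cgroup_manager line
	conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")

	fmt.Println(conf)
}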
	I0416 00:59:42.039732   61500 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0416 00:59:42.039812   61500 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0416 00:59:42.044505   61500 start.go:562] Will wait 60s for crictl version
	I0416 00:59:42.044551   61500 ssh_runner.go:195] Run: which crictl
	I0416 00:59:42.049632   61500 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 00:59:42.106886   61500 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0416 00:59:42.106981   61500 ssh_runner.go:195] Run: crio --version
	I0416 00:59:42.137092   61500 ssh_runner.go:195] Run: crio --version
	I0416 00:59:42.170036   61500 out.go:177] * Preparing Kubernetes v1.30.0-rc.2 on CRI-O 1.29.1 ...
	I0416 00:59:42.171395   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetIP
	I0416 00:59:42.174790   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:42.175217   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 00:59:42.175250   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 00:59:42.175506   61500 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0416 00:59:42.180987   61500 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 00:59:42.198472   61500 kubeadm.go:877] updating cluster {Name:no-preload-572602 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.0-rc.2 ClusterName:no-preload-572602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m
0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 00:59:42.198595   61500 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime crio
	I0416 00:59:42.198639   61500 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 00:59:42.236057   61500 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.0-rc.2". assuming images are not preloaded.
	I0416 00:59:42.236084   61500 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.30.0-rc.2 registry.k8s.io/kube-controller-manager:v1.30.0-rc.2 registry.k8s.io/kube-scheduler:v1.30.0-rc.2 registry.k8s.io/kube-proxy:v1.30.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.12-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0416 00:59:42.236146   61500 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 00:59:42.236166   61500 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0416 00:59:42.236180   61500 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0-rc.2
	I0416 00:59:42.236182   61500 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0416 00:59:42.236212   61500 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0-rc.2
	I0416 00:59:42.236238   61500 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0416 00:59:42.236287   61500 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.2
	I0416 00:59:42.236164   61500 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0-rc.2
	I0416 00:59:42.237740   61500 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0416 00:59:42.237756   61500 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0416 00:59:42.237763   61500 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0-rc.2
	I0416 00:59:42.237779   61500 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0-rc.2
	I0416 00:59:42.237740   61500 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0416 00:59:42.237848   61500 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.2
	I0416 00:59:42.237847   61500 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 00:59:42.238087   61500 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0-rc.2
	I0416 00:59:42.410682   61500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.30.0-rc.2
	I0416 00:59:42.445824   61500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.12-0
	I0416 00:59:42.446874   61500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0416 00:59:42.448854   61500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.30.0-rc.2
	I0416 00:59:42.449450   61500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0416 00:59:42.452121   61500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.30.0-rc.2
	I0416 00:59:42.458966   61500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.30.0-rc.2
	I0416 00:59:42.480556   61500 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.30.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.30.0-rc.2" does not exist at hash "461015b94df4b9e0beae6963e44faa05142f2bddf16b1956a2c09ccefe0416a6" in container runtime
	I0416 00:59:42.480608   61500 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.30.0-rc.2
	I0416 00:59:42.480670   61500 ssh_runner.go:195] Run: which crictl
	I0416 00:59:42.176660   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:42.177053   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:42.177084   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:42.177031   63071 retry.go:31] will retry after 268.951362ms: waiting for machine to come up
	I0416 00:59:42.447724   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:42.448132   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:42.448159   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:42.448097   63071 retry.go:31] will retry after 293.793417ms: waiting for machine to come up
	I0416 00:59:42.743375   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:42.743845   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:42.743874   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:42.743801   63071 retry.go:31] will retry after 494.163372ms: waiting for machine to come up
	I0416 00:59:43.239314   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:43.239761   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:43.239790   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:43.239708   63071 retry.go:31] will retry after 698.851999ms: waiting for machine to come up
	I0416 00:59:43.939998   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:43.940577   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:43.940607   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:43.940535   63071 retry.go:31] will retry after 764.693004ms: waiting for machine to come up
	I0416 00:59:44.706335   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:44.706673   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:44.706724   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:44.706626   63071 retry.go:31] will retry after 874.082115ms: waiting for machine to come up
	I0416 00:59:45.581896   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:45.582331   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:45.582361   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:45.582280   63071 retry.go:31] will retry after 966.259345ms: waiting for machine to come up
	I0416 00:59:46.550671   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:46.551111   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:46.551140   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:46.551062   63071 retry.go:31] will retry after 1.191034468s: waiting for machine to come up
	I0416 00:59:42.583284   61500 cache_images.go:116] "registry.k8s.io/etcd:3.5.12-0" needs transfer: "registry.k8s.io/etcd:3.5.12-0" does not exist at hash "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899" in container runtime
	I0416 00:59:42.583332   61500 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.12-0
	I0416 00:59:42.583377   61500 ssh_runner.go:195] Run: which crictl
	I0416 00:59:42.724785   61500 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.30.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.30.0-rc.2" does not exist at hash "ae2ef7918948cfa32fb3b9cba56f6140b9e23022b7ed81960e2e83a14990532b" in container runtime
	I0416 00:59:42.724827   61500 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.30.0-rc.2
	I0416 00:59:42.724878   61500 ssh_runner.go:195] Run: which crictl
	I0416 00:59:42.724899   61500 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0416 00:59:42.724938   61500 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0416 00:59:42.724938   61500 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.30.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.30.0-rc.2" does not exist at hash "35c7fe5cdbee52250b1e2b15640a06c8ebff60dbc795a7701deb6309be51431e" in container runtime
	I0416 00:59:42.724964   61500 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.30.0-rc.2
	I0416 00:59:42.724979   61500 ssh_runner.go:195] Run: which crictl
	I0416 00:59:42.724993   61500 ssh_runner.go:195] Run: which crictl
	I0416 00:59:42.725019   61500 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.30.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.30.0-rc.2" does not exist at hash "65a750108e0b64670c3a9f33978b6a33b0d5cdaca85158e0c637c5a4e84539c1" in container runtime
	I0416 00:59:42.725051   61500 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.30.0-rc.2
	I0416 00:59:42.725063   61500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.30.0-rc.2
	I0416 00:59:42.725088   61500 ssh_runner.go:195] Run: which crictl
	I0416 00:59:42.725102   61500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.12-0
	I0416 00:59:42.739346   61500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0416 00:59:42.739764   61500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.30.0-rc.2
	I0416 00:59:42.787888   61500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.30.0-rc.2
	I0416 00:59:42.787977   61500 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.2
	I0416 00:59:42.788024   61500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.30.0-rc.2
	I0416 00:59:42.788084   61500 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.2
	I0416 00:59:42.815167   61500 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0416 00:59:42.815274   61500 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.12-0
	I0416 00:59:42.845627   61500 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0416 00:59:42.845741   61500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0416 00:59:42.848065   61500 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.2
	I0416 00:59:42.848134   61500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.2
	I0416 00:59:42.880543   61500 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.2
	I0416 00:59:42.880557   61500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.2 (exists)
	I0416 00:59:42.880575   61500 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.2
	I0416 00:59:42.880628   61500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.2
	I0416 00:59:42.880648   61500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.2
	I0416 00:59:42.907207   61500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.12-0 (exists)
	I0416 00:59:42.907245   61500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0416 00:59:42.907269   61500 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.2
	I0416 00:59:42.907295   61500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.2 (exists)
	I0416 00:59:42.907334   61500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.2 (exists)
	I0416 00:59:42.907350   61500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.0-rc.2
	I0416 00:59:43.138705   61500 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 00:59:44.951278   61500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.30.0-rc.2: (2.07061835s)
	I0416 00:59:44.951295   61500 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.30.0-rc.2: (2.04392036s)
	I0416 00:59:44.951348   61500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.30.0-rc.2 (exists)
	I0416 00:59:44.951309   61500 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.0-rc.2 from cache
	I0416 00:59:44.951364   61500 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.812619758s)
	I0416 00:59:44.951410   61500 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0416 00:59:44.951448   61500 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 00:59:44.951374   61500 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.12-0
	I0416 00:59:44.951506   61500 ssh_runner.go:195] Run: which crictl
	I0416 00:59:44.951508   61500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0
	I0416 00:59:47.744187   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:47.744683   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:47.744712   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:47.744637   63071 retry.go:31] will retry after 2.263605663s: waiting for machine to come up
	I0416 00:59:50.011136   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:50.011605   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:50.011632   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:50.011566   63071 retry.go:31] will retry after 2.648982849s: waiting for machine to come up
	I0416 00:59:48.656623   61500 ssh_runner.go:235] Completed: which crictl: (3.705085257s)
	I0416 00:59:48.656705   61500 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 00:59:48.656715   61500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.12-0: (3.705109475s)
	I0416 00:59:48.656743   61500 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 from cache
	I0416 00:59:48.656769   61500 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0416 00:59:48.656798   61500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0416 00:59:50.560030   61500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.903209359s)
	I0416 00:59:50.560071   61500 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0416 00:59:50.560085   61500 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.90335887s)
	I0416 00:59:50.560096   61500 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.2
	I0416 00:59:50.560148   61500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.2
	I0416 00:59:50.560151   61500 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0416 00:59:50.560309   61500 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0416 00:59:52.662443   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:52.662852   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:52.662883   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:52.662815   63071 retry.go:31] will retry after 2.183508059s: waiting for machine to come up
	I0416 00:59:54.849225   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:54.849701   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | unable to find current IP address of domain old-k8s-version-800769 in network mk-old-k8s-version-800769
	I0416 00:59:54.849734   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | I0416 00:59:54.849649   63071 retry.go:31] will retry after 3.201585234s: waiting for machine to come up
	I0416 00:59:52.739620   61500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.30.0-rc.2: (2.179436189s)
	I0416 00:59:52.739658   61500 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.0-rc.2 from cache
	I0416 00:59:52.739688   61500 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.2
	I0416 00:59:52.739697   61500 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.179365348s)
	I0416 00:59:52.739724   61500 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0416 00:59:52.739747   61500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.2
	I0416 00:59:55.098350   61500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.30.0-rc.2: (2.358579586s)
	I0416 00:59:55.098381   61500 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.0-rc.2 from cache
	I0416 00:59:55.098408   61500 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.30.0-rc.2
	I0416 00:59:55.098454   61500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.2
	I0416 00:59:57.166586   61500 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.30.0-rc.2: (2.068105529s)
	I0416 00:59:57.166615   61500 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.0-rc.2 from cache
	I0416 00:59:57.166644   61500 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0416 00:59:57.166697   61500 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
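
The block above is the image-cache path for the no-preload profile: each registry.k8s.io image is checked in the container runtime, any stale tag is removed with crictl rmi, and the cached tarball is loaded with podman load, while the stat calls skip tarballs that already exist on the VM. A minimal Go sketch of that check-and-reload pattern follows; the helper names are hypothetical, it assumes sudo, podman and crictl are on PATH, and it is not minikube's actual code.

// cacheload.go - illustrative sketch of the check-and-reload pattern in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imageID asks podman for the ID currently stored under the given tag.
// An error (or empty output) means the image is not present in the runtime.
func imageID(tag string) (string, error) {
	out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", tag).Output()
	return strings.TrimSpace(string(out)), err
}

// ensureImage reloads tag from the cached tarball unless it already exists at wantID.
func ensureImage(tag, wantID, tarball string) error {
	id, err := imageID(tag)
	if err == nil && id == wantID {
		return nil // already present at the expected hash, nothing to transfer
	}
	// Remove any stale tag first so the load does not leave a dangling reference.
	_ = exec.Command("sudo", "/usr/bin/crictl", "rmi", tag).Run()
	// Load the cached tarball into the runtime's storage.
	if err := exec.Command("sudo", "podman", "load", "-i", tarball).Run(); err != nil {
		return fmt.Errorf("podman load %s: %w", tarball, err)
	}
	return nil
}

func main() {
	err := ensureImage(
		"registry.k8s.io/etcd:3.5.12-0",
		"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
		"/var/lib/minikube/images/etcd_3.5.12-0",
	)
	fmt.Println("ensureImage:", err)
}
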
	I0416 00:59:59.394339   62747 start.go:364] duration metric: took 1m16.499681915s to acquireMachinesLock for "embed-certs-617092"
	I0416 00:59:59.394389   62747 start.go:96] Skipping create...Using existing machine configuration
	I0416 00:59:59.394412   62747 fix.go:54] fixHost starting: 
	I0416 00:59:59.394834   62747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:59:59.394896   62747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:59:59.414712   62747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38637
	I0416 00:59:59.415464   62747 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:59:59.416123   62747 main.go:141] libmachine: Using API Version  1
	I0416 00:59:59.416150   62747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:59:59.416436   62747 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:59:59.416623   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 00:59:59.416786   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetState
	I0416 00:59:59.418413   62747 fix.go:112] recreateIfNeeded on embed-certs-617092: state=Stopped err=<nil>
	I0416 00:59:59.418449   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	W0416 00:59:59.418609   62747 fix.go:138] unexpected machine state, will restart: <nil>
	I0416 00:59:59.420560   62747 out.go:177] * Restarting existing kvm2 VM for "embed-certs-617092" ...
	I0416 00:59:58.052613   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.053048   62139 main.go:141] libmachine: (old-k8s-version-800769) Found IP for machine: 192.168.83.98
	I0416 00:59:58.053073   62139 main.go:141] libmachine: (old-k8s-version-800769) Reserving static IP address...
	I0416 00:59:58.053089   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has current primary IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.053517   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "old-k8s-version-800769", mac: "52:54:00:a1:ad:da", ip: "192.168.83.98"} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.053549   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | skip adding static IP to network mk-old-k8s-version-800769 - found existing host DHCP lease matching {name: "old-k8s-version-800769", mac: "52:54:00:a1:ad:da", ip: "192.168.83.98"}
	I0416 00:59:58.053569   62139 main.go:141] libmachine: (old-k8s-version-800769) Reserved static IP address: 192.168.83.98
	I0416 00:59:58.053587   62139 main.go:141] libmachine: (old-k8s-version-800769) Waiting for SSH to be available...
	I0416 00:59:58.053602   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | Getting to WaitForSSH function...
	I0416 00:59:58.055598   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.055907   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.055941   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.056038   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | Using SSH client type: external
	I0416 00:59:58.056088   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | Using SSH private key: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769/id_rsa (-rw-------)
	I0416 00:59:58.056132   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.98 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0416 00:59:58.056149   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | About to run SSH command:
	I0416 00:59:58.056162   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | exit 0
	I0416 00:59:58.185675   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | SSH cmd err, output: <nil>: 
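
The DBG lines above come from libmachine waiting for the restarted VM: it keeps running `exit 0` over SSH with the machine's key until sshd answers, backing off between attempts. An illustrative sketch of such a wait loop, using the host and key from this log (the helper name is hypothetical and the backoff is simplified):

// sshwait.go - illustrative "waiting for SSH to be available" loop.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady returns true once `exit 0` succeeds over SSH, i.e. sshd is up and the key works.
func sshReady(host, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"docker@"+host, "exit 0")
	return cmd.Run() == nil
}

func main() {
	host := "192.168.83.98"
	key := "/home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769/id_rsa"
	// A growing delay stands in for the randomized backoff that retry.go logs above.
	for attempt, wait := 1, 2*time.Second; attempt <= 20; attempt, wait = attempt+1, wait+time.Second {
		if sshReady(host, key) {
			fmt.Println("SSH is available")
			return
		}
		fmt.Printf("attempt %d: not ready, retrying in %s\n", attempt, wait)
		time.Sleep(wait)
	}
	fmt.Println("gave up waiting for SSH")
}
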
	I0416 00:59:58.186055   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetConfigRaw
	I0416 00:59:58.186802   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetIP
	I0416 00:59:58.189772   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.190219   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.190257   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.190448   62139 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/config.json ...
	I0416 00:59:58.190666   62139 machine.go:94] provisionDockerMachine start ...
	I0416 00:59:58.190685   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	I0416 00:59:58.190902   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:58.193570   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.193954   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.193982   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.194139   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:58.194337   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.194492   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.194636   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:58.194786   62139 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:58.195041   62139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.83.98 22 <nil> <nil>}
	I0416 00:59:58.195056   62139 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 00:59:58.321824   62139 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 00:59:58.321857   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetMachineName
	I0416 00:59:58.322146   62139 buildroot.go:166] provisioning hostname "old-k8s-version-800769"
	I0416 00:59:58.322175   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetMachineName
	I0416 00:59:58.322381   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:58.324941   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.325288   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.325316   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.325423   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:58.325613   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.325776   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.325936   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:58.326109   62139 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:58.326322   62139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.83.98 22 <nil> <nil>}
	I0416 00:59:58.326339   62139 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-800769 && echo "old-k8s-version-800769" | sudo tee /etc/hostname
	I0416 00:59:58.455194   62139 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-800769
	
	I0416 00:59:58.455236   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:58.458021   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.458423   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.458458   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.458662   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:58.458848   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.459013   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.459162   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:58.459353   62139 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:58.459507   62139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.83.98 22 <nil> <nil>}
	I0416 00:59:58.459524   62139 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-800769' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-800769/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-800769' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 00:59:58.587318   62139 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 00:59:58.587351   62139 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18647-7542/.minikube CaCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18647-7542/.minikube}
	I0416 00:59:58.587391   62139 buildroot.go:174] setting up certificates
	I0416 00:59:58.587400   62139 provision.go:84] configureAuth start
	I0416 00:59:58.587413   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetMachineName
	I0416 00:59:58.587686   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetIP
	I0416 00:59:58.590415   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.590739   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.590778   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.590880   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:58.593282   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.593728   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.593759   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.593931   62139 provision.go:143] copyHostCerts
	I0416 00:59:58.593988   62139 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem, removing ...
	I0416 00:59:58.594007   62139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem
	I0416 00:59:58.594079   62139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem (1082 bytes)
	I0416 00:59:58.594213   62139 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem, removing ...
	I0416 00:59:58.594222   62139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem
	I0416 00:59:58.594263   62139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem (1123 bytes)
	I0416 00:59:58.594372   62139 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem, removing ...
	I0416 00:59:58.594383   62139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem
	I0416 00:59:58.594408   62139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem (1675 bytes)
	I0416 00:59:58.594470   62139 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-800769 san=[127.0.0.1 192.168.83.98 localhost minikube old-k8s-version-800769]
	I0416 00:59:58.692127   62139 provision.go:177] copyRemoteCerts
	I0416 00:59:58.692197   62139 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 00:59:58.692232   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:58.694858   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.695231   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.695278   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.695507   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:58.695693   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.695852   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:58.695994   62139 sshutil.go:53] new ssh client: &{IP:192.168.83.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769/id_rsa Username:docker}
	I0416 00:59:58.783458   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0416 00:59:58.811124   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0416 00:59:58.836495   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0416 00:59:58.862044   62139 provision.go:87] duration metric: took 274.632117ms to configureAuth
	I0416 00:59:58.862068   62139 buildroot.go:189] setting minikube options for container-runtime
	I0416 00:59:58.862278   62139 config.go:182] Loaded profile config "old-k8s-version-800769": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0416 00:59:58.862361   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:58.865352   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.865795   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:58.865829   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:58.866043   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:58.866228   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.866435   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:58.866625   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:58.866805   62139 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:58.867008   62139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.83.98 22 <nil> <nil>}
	I0416 00:59:58.867026   62139 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0416 00:59:59.143874   62139 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0416 00:59:59.143900   62139 machine.go:97] duration metric: took 953.218972ms to provisionDockerMachine
	I0416 00:59:59.143914   62139 start.go:293] postStartSetup for "old-k8s-version-800769" (driver="kvm2")
	I0416 00:59:59.143927   62139 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 00:59:59.143972   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	I0416 00:59:59.144277   62139 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 00:59:59.144302   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:59.147021   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.147355   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:59.147385   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.147649   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:59.147871   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:59.148036   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:59.148174   62139 sshutil.go:53] new ssh client: &{IP:192.168.83.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769/id_rsa Username:docker}
	I0416 00:59:59.236981   62139 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 00:59:59.241388   62139 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 00:59:59.241411   62139 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/addons for local assets ...
	I0416 00:59:59.241469   62139 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/files for local assets ...
	I0416 00:59:59.241534   62139 filesync.go:149] local asset: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem -> 148972.pem in /etc/ssl/certs
	I0416 00:59:59.241619   62139 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 00:59:59.251688   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /etc/ssl/certs/148972.pem (1708 bytes)
	I0416 00:59:59.275189   62139 start.go:296] duration metric: took 131.262042ms for postStartSetup
	I0416 00:59:59.275227   62139 fix.go:56] duration metric: took 18.605201288s for fixHost
	I0416 00:59:59.275250   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:59.277804   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.278153   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:59.278186   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.278341   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:59.278581   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:59.278741   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:59.278908   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:59.279068   62139 main.go:141] libmachine: Using SSH client type: native
	I0416 00:59:59.279233   62139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.83.98 22 <nil> <nil>}
	I0416 00:59:59.279243   62139 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0416 00:59:59.394108   62139 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713229199.360202150
	
	I0416 00:59:59.394141   62139 fix.go:216] guest clock: 1713229199.360202150
	I0416 00:59:59.394152   62139 fix.go:229] Guest: 2024-04-16 00:59:59.36020215 +0000 UTC Remote: 2024-04-16 00:59:59.27523174 +0000 UTC m=+217.222314955 (delta=84.97041ms)
	I0416 00:59:59.394211   62139 fix.go:200] guest clock delta is within tolerance: 84.97041ms
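
fix.go compares the guest clock, read over SSH with `date +%s.%N`, against the host clock and only resyncs when the delta exceeds a tolerance; here the 84.97041ms delta passes. A rough sketch of that comparison, with an illustrative 1s tolerance (the real threshold is not shown in this log) and a hypothetical helper:

// clockdelta.go - illustrative guest-clock delta check.
package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
	"time"
)

// guestClock reads the VM's clock over SSH as seconds.nanoseconds and parses it.
func guestClock(host, keyPath string) (time.Time, error) {
	out, err := exec.Command("ssh", "-i", keyPath, "docker@"+host, "date +%s.%N").Output()
	if err != nil {
		return time.Time{}, err
	}
	parts := strings.SplitN(strings.TrimSpace(string(out)), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	nsec := int64(0)
	if len(parts) == 2 {
		nsec, _ = strconv.ParseInt(parts[1], 10, 64)
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	const tolerance = time.Second // illustrative; not the value minikube uses
	guest, err := guestClock("192.168.83.98",
		"/home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769/id_rsa")
	if err != nil {
		fmt.Println("reading guest clock:", err)
		return
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta %s (tolerance %s, within=%v)\n", delta, tolerance, delta <= tolerance)
}
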
	I0416 00:59:59.394218   62139 start.go:83] releasing machines lock for "old-k8s-version-800769", held for 18.724230851s
	I0416 00:59:59.394252   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	I0416 00:59:59.394554   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetIP
	I0416 00:59:59.397241   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.397670   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:59.397703   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.397897   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	I0416 00:59:59.398460   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	I0416 00:59:59.398650   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .DriverName
	I0416 00:59:59.398740   62139 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 00:59:59.398782   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:59.399049   62139 ssh_runner.go:195] Run: cat /version.json
	I0416 00:59:59.399072   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHHostname
	I0416 00:59:59.401397   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.401656   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.401802   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:59.401825   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.401964   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 00:59:59.402017   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 00:59:59.402089   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:59.402173   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHPort
	I0416 00:59:59.402248   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:59.402320   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHKeyPath
	I0416 00:59:59.402376   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:59.402430   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetSSHUsername
	I0416 00:59:59.402577   62139 sshutil.go:53] new ssh client: &{IP:192.168.83.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769/id_rsa Username:docker}
	I0416 00:59:59.402638   62139 sshutil.go:53] new ssh client: &{IP:192.168.83.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/old-k8s-version-800769/id_rsa Username:docker}
	I0416 00:59:59.481834   62139 ssh_runner.go:195] Run: systemctl --version
	I0416 00:59:59.516372   62139 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0416 00:59:59.666722   62139 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 00:59:59.674165   62139 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 00:59:59.674226   62139 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 00:59:59.695545   62139 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
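
cni.go disables any pre-existing *bridge*/*podman* configuration in /etc/cni/net.d by renaming it to *.mk_disabled, so only minikube's own CNI config stays active; here that removed 87-podman-bridge.conflist. The find/mv above could equally be expressed as the following sketch (illustrative only, requires root):

// cnidisable.go - rename conflicting CNI configs out of the way, as the log's find/mv does.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
		matches, _ := filepath.Glob(pattern)
		for _, path := range matches {
			if strings.HasSuffix(path, ".mk_disabled") {
				continue // already disabled on a previous run
			}
			if err := os.Rename(path, path+".mk_disabled"); err != nil {
				fmt.Println("disable:", err)
				continue
			}
			fmt.Println("disabled", path)
		}
	}
}
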
	I0416 00:59:59.695573   62139 start.go:494] detecting cgroup driver to use...
	I0416 00:59:59.695646   62139 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 00:59:59.715091   62139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 00:59:59.732004   62139 docker.go:217] disabling cri-docker service (if available) ...
	I0416 00:59:59.732060   62139 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 00:59:59.753217   62139 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 00:59:59.768513   62139 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 00:59:59.898693   62139 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 01:00:00.066535   62139 docker.go:233] disabling docker service ...
	I0416 01:00:00.066607   62139 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 01:00:00.084512   62139 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 01:00:00.097714   62139 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 01:00:00.232901   62139 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 01:00:00.378379   62139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0416 01:00:00.395191   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 01:00:00.416631   62139 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0416 01:00:00.416695   62139 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:00.428712   62139 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 01:00:00.428774   62139 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:00.442687   62139 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:00.454631   62139 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
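
The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the registry.k8s.io/pause:3.2 pause image, the cgroupfs cgroup manager, and conmon_cgroup = "pod". An equivalent sketch in Go, doing the same three edits (not minikube's implementation; run as root):

// crioconf.go - illustrative equivalent of the sed edits to 02-crio.conf above.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		fmt.Println(err)
		return
	}
	// 1. point CRI-O at the desired pause image.
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.2"`))
	// 2. drop any existing conmon_cgroup line, as the '/conmon_cgroup = .*/d' sed does.
	out = regexp.MustCompile(`(?m)^\s*conmon_cgroup = .*\n`).ReplaceAll(out, nil)
	// 3. set the cgroup manager and re-add conmon_cgroup = "pod" right after it.
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte("cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""))
	if err := os.WriteFile(conf, out, 0o644); err != nil {
		fmt.Println(err)
	}
}
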
	I0416 01:00:00.466151   62139 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 01:00:00.478459   62139 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 01:00:00.489957   62139 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0416 01:00:00.490035   62139 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0416 01:00:00.506087   62139 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
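
The sysctl probe above fails because br_netfilter is not loaded yet, so the code falls back to modprobe and then enables IPv4 forwarding before restarting CRI-O. A compact sketch of that fallback (illustrative only, requires root):

// netfilter.go - load br_netfilter if its sysctl is missing, then enable ip_forward.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		// The bridge-nf sysctls only exist once the br_netfilter module is loaded.
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			fmt.Println("modprobe br_netfilter:", err)
		}
	}
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		fmt.Println("enable ip_forward:", err)
	}
}
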
	I0416 01:00:00.518100   62139 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 01:00:00.676317   62139 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0416 01:00:00.869766   62139 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0416 01:00:00.869855   62139 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0416 01:00:00.875363   62139 start.go:562] Will wait 60s for crictl version
	I0416 01:00:00.875424   62139 ssh_runner.go:195] Run: which crictl
	I0416 01:00:00.880947   62139 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 01:00:00.924780   62139 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0416 01:00:00.924852   62139 ssh_runner.go:195] Run: crio --version
	I0416 01:00:00.958390   62139 ssh_runner.go:195] Run: crio --version
	I0416 01:00:00.993114   62139 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0416 01:00:00.994513   62139 main.go:141] libmachine: (old-k8s-version-800769) Calling .GetIP
	I0416 01:00:00.997571   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 01:00:00.998032   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:ad:da", ip: ""} in network mk-old-k8s-version-800769: {Iface:virbr3 ExpiryTime:2024-04-16 01:59:52 +0000 UTC Type:0 Mac:52:54:00:a1:ad:da Iaid: IPaddr:192.168.83.98 Prefix:24 Hostname:old-k8s-version-800769 Clientid:01:52:54:00:a1:ad:da}
	I0416 01:00:00.998065   62139 main.go:141] libmachine: (old-k8s-version-800769) DBG | domain old-k8s-version-800769 has defined IP address 192.168.83.98 and MAC address 52:54:00:a1:ad:da in network mk-old-k8s-version-800769
	I0416 01:00:00.998273   62139 ssh_runner.go:195] Run: grep 192.168.83.1	host.minikube.internal$ /etc/hosts
	I0416 01:00:01.002750   62139 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.83.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 01:00:01.015709   62139 kubeadm.go:877] updating cluster {Name:old-k8s-version-800769 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-800769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.98 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 01:00:01.015810   62139 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0416 01:00:01.015853   62139 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 01:00:01.063257   62139 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0416 01:00:01.063331   62139 ssh_runner.go:195] Run: which lz4
	I0416 01:00:01.067973   62139 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0416 01:00:01.072369   62139 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0416 01:00:01.072400   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
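
The preload decision above lists the runtime's images as JSON and looks for the kube-apiserver tag of the target version; since it is missing, the ~473 MB preload tarball is copied to /preloaded.tar.lz4 and extracted. A sketch of that check (assumes crictl on PATH; the struct fields follow crictl's JSON output, and the flow is illustrative rather than minikube's code):

// preloadcheck.go - decide whether the preloaded images are already in the runtime.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		fmt.Println(err)
		return
	}
	const want = "registry.k8s.io/kube-apiserver:v1.20.0"
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				fmt.Println("preloaded images present")
				return
			}
		}
	}
	fmt.Println("not preloaded: transfer and extract the preload tarball")
}
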
	I0416 00:59:57.817013   61500 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0416 00:59:57.817060   61500 cache_images.go:123] Successfully loaded all cached images
	I0416 00:59:57.817073   61500 cache_images.go:92] duration metric: took 15.580967615s to LoadCachedImages
	I0416 00:59:57.817087   61500 kubeadm.go:928] updating node { 192.168.39.121 8443 v1.30.0-rc.2 crio true true} ...
	I0416 00:59:57.817241   61500 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-572602 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.121
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0-rc.2 ClusterName:no-preload-572602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 00:59:57.817324   61500 ssh_runner.go:195] Run: crio config
	I0416 00:59:57.866116   61500 cni.go:84] Creating CNI manager for ""
	I0416 00:59:57.866140   61500 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 00:59:57.866154   61500 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 00:59:57.866189   61500 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.121 APIServerPort:8443 KubernetesVersion:v1.30.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-572602 NodeName:no-preload-572602 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.121"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.121 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 00:59:57.866325   61500 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.121
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-572602"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.121
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.121"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0416 00:59:57.866390   61500 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0-rc.2
	I0416 00:59:57.876619   61500 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 00:59:57.876689   61500 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0416 00:59:57.886472   61500 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0416 00:59:57.903172   61500 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0416 00:59:57.919531   61500 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0416 00:59:57.936394   61500 ssh_runner.go:195] Run: grep 192.168.39.121	control-plane.minikube.internal$ /etc/hosts
	I0416 00:59:57.940161   61500 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.121	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
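The /etc/hosts rewrite above pins control-plane.minikube.internal to the node's own IP so the kubeconfigs generated later resolve locally. A hedged way to confirm the entry (profile name and IP from this run):

  minikube -p no-preload-572602 ssh -- grep control-plane.minikube.internal /etc/hosts
  # expected: 192.168.39.121	control-plane.minikube.internal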
	I0416 00:59:57.951997   61500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 00:59:58.089553   61500 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 00:59:58.117870   61500 certs.go:68] Setting up /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602 for IP: 192.168.39.121
	I0416 00:59:58.117926   61500 certs.go:194] generating shared ca certs ...
	I0416 00:59:58.117949   61500 certs.go:226] acquiring lock for ca certs: {Name:mkcfa1570e683d94647c63485e1bbb8cf0788316 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 00:59:58.118136   61500 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key
	I0416 00:59:58.118199   61500 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key
	I0416 00:59:58.118216   61500 certs.go:256] generating profile certs ...
	I0416 00:59:58.118351   61500 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602/client.key
	I0416 00:59:58.118446   61500 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602/apiserver.key.a3b1330f
	I0416 00:59:58.118505   61500 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602/proxy-client.key
	I0416 00:59:58.118664   61500 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem (1338 bytes)
	W0416 00:59:58.118708   61500 certs.go:480] ignoring /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897_empty.pem, impossibly tiny 0 bytes
	I0416 00:59:58.118721   61500 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem (1679 bytes)
	I0416 00:59:58.118756   61500 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem (1082 bytes)
	I0416 00:59:58.118786   61500 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem (1123 bytes)
	I0416 00:59:58.118814   61500 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem (1675 bytes)
	I0416 00:59:58.118874   61500 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem (1708 bytes)
	I0416 00:59:58.119738   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 00:59:58.150797   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 00:59:58.181693   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 00:59:58.231332   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0416 00:59:58.276528   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0416 00:59:58.301000   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0416 00:59:58.326090   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 00:59:58.350254   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0416 00:59:58.377597   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem --> /usr/share/ca-certificates/14897.pem (1338 bytes)
	I0416 00:59:58.401548   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /usr/share/ca-certificates/148972.pem (1708 bytes)
	I0416 00:59:58.425237   61500 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 00:59:58.449748   61500 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 00:59:58.468346   61500 ssh_runner.go:195] Run: openssl version
	I0416 00:59:58.474164   61500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14897.pem && ln -fs /usr/share/ca-certificates/14897.pem /etc/ssl/certs/14897.pem"
	I0416 00:59:58.485674   61500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14897.pem
	I0416 00:59:58.490136   61500 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 23:49 /usr/share/ca-certificates/14897.pem
	I0416 00:59:58.490203   61500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14897.pem
	I0416 00:59:58.495781   61500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14897.pem /etc/ssl/certs/51391683.0"
	I0416 00:59:58.507047   61500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148972.pem && ln -fs /usr/share/ca-certificates/148972.pem /etc/ssl/certs/148972.pem"
	I0416 00:59:58.518007   61500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148972.pem
	I0416 00:59:58.522317   61500 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 23:49 /usr/share/ca-certificates/148972.pem
	I0416 00:59:58.522364   61500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148972.pem
	I0416 00:59:58.527809   61500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148972.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 00:59:58.538579   61500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 00:59:58.549188   61500 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 00:59:58.553688   61500 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0416 00:59:58.553732   61500 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 00:59:58.559175   61500 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
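Each CA copied into /usr/share/ca-certificates is linked into /etc/ssl/certs under its OpenSSL subject-hash name, which is why the symlinks above are 51391683.0, 3ec20f2e.0 and b5213941.0. A minimal sketch of the same hashing step (paths from this run; run inside the node):

  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941 in this run
  ls -l /etc/ssl/certs/b5213941.0                                           # symlink back to the CA file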
	I0416 00:59:58.570142   61500 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 00:59:58.574657   61500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0416 00:59:58.580560   61500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0416 00:59:58.586319   61500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0416 00:59:58.593938   61500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0416 00:59:58.599808   61500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0416 00:59:58.605583   61500 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
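The -checkend 86400 calls above ask OpenSSL whether each certificate expires within the next 86400 seconds (24 h); exit status 0 means it does not, so the existing certs can be reused. The same check by hand (hypothetical; cert path from this run, run inside the node, e.g. via minikube ssh):

  sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
    && echo "will not expire within 24h"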
	I0416 00:59:58.611301   61500 kubeadm.go:391] StartCluster: {Name:no-preload-572602 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30
.0-rc.2 ClusterName:no-preload-572602 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.121 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 00:59:58.611385   61500 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0416 00:59:58.611439   61500 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 00:59:58.655244   61500 cri.go:89] found id: ""
	I0416 00:59:58.655315   61500 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0416 00:59:58.667067   61500 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0416 00:59:58.667082   61500 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0416 00:59:58.667088   61500 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0416 00:59:58.667128   61500 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0416 00:59:58.678615   61500 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0416 00:59:58.680097   61500 kubeconfig.go:125] found "no-preload-572602" server: "https://192.168.39.121:8443"
	I0416 00:59:58.683135   61500 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0416 00:59:58.695291   61500 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.121
	I0416 00:59:58.695323   61500 kubeadm.go:1154] stopping kube-system containers ...
	I0416 00:59:58.695337   61500 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0416 00:59:58.695380   61500 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 00:59:58.731743   61500 cri.go:89] found id: ""
	I0416 00:59:58.731832   61500 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0416 00:59:58.748125   61500 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 00:59:58.757845   61500 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 00:59:58.757865   61500 kubeadm.go:156] found existing configuration files:
	
	I0416 00:59:58.757918   61500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 00:59:58.766993   61500 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 00:59:58.767036   61500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 00:59:58.776831   61500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 00:59:58.786420   61500 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 00:59:58.786467   61500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 00:59:58.796067   61500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 00:59:58.805385   61500 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 00:59:58.805511   61500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 00:59:58.815313   61500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 00:59:58.826551   61500 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 00:59:58.826603   61500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
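The grep/rm cycle above keeps each kubeconfig under /etc/kubernetes only if it already points at https://control-plane.minikube.internal:8443, and removes it otherwise so kubeadm can regenerate it. A compact sketch of the same logic (hypothetical; run inside the node):

  for f in admin kubelet controller-manager scheduler; do
    sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f.conf" \
      || sudo rm -f "/etc/kubernetes/$f.conf"
  done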
	I0416 00:59:58.836652   61500 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 00:59:58.848671   61500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 00:59:58.967511   61500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:00.416009   61500 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.44846758s)
	I0416 01:00:00.416041   61500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:00.657784   61500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:00.741694   61500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
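Because the restart path is taken, individual kubeadm phases (certs, kubeconfig, kubelet-start, control-plane, etcd) are re-run against the staged config instead of a full kubeadm init. After the control-plane and etcd phases the static pod manifests should exist under the staticPodPath from the config above; a hedged spot-check:

  minikube -p no-preload-572602 ssh -- sudo ls /etc/kubernetes/manifests
  # typically: etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml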
	I0416 01:00:00.876550   61500 api_server.go:52] waiting for apiserver process to appear ...
	I0416 01:00:00.876630   61500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:01.377586   61500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:01.877647   61500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:01.950167   61500 api_server.go:72] duration metric: took 1.073614574s to wait for apiserver process to appear ...
	I0416 01:00:01.950201   61500 api_server.go:88] waiting for apiserver healthz status ...
	I0416 01:00:01.950224   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 01:00:01.950854   61500 api_server.go:269] stopped: https://192.168.39.121:8443/healthz: Get "https://192.168.39.121:8443/healthz": dial tcp 192.168.39.121:8443: connect: connection refused
	I0416 01:00:02.450437   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 00:59:59.421878   62747 main.go:141] libmachine: (embed-certs-617092) Calling .Start
	I0416 00:59:59.422036   62747 main.go:141] libmachine: (embed-certs-617092) Ensuring networks are active...
	I0416 00:59:59.422646   62747 main.go:141] libmachine: (embed-certs-617092) Ensuring network default is active
	I0416 00:59:59.422931   62747 main.go:141] libmachine: (embed-certs-617092) Ensuring network mk-embed-certs-617092 is active
	I0416 00:59:59.423360   62747 main.go:141] libmachine: (embed-certs-617092) Getting domain xml...
	I0416 00:59:59.424005   62747 main.go:141] libmachine: (embed-certs-617092) Creating domain...
	I0416 01:00:00.682582   62747 main.go:141] libmachine: (embed-certs-617092) Waiting to get IP...
	I0416 01:00:00.683684   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:00.684222   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:00.684277   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:00.684198   63257 retry.go:31] will retry after 196.582767ms: waiting for machine to come up
	I0416 01:00:00.882954   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:00.883544   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:00.883577   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:00.883482   63257 retry.go:31] will retry after 309.274692ms: waiting for machine to come up
	I0416 01:00:01.193848   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:01.194286   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:01.194325   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:01.194234   63257 retry.go:31] will retry after 379.332728ms: waiting for machine to come up
	I0416 01:00:01.574938   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:01.575371   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:01.575400   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:01.575318   63257 retry.go:31] will retry after 445.10423ms: waiting for machine to come up
	I0416 01:00:02.022081   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:02.022612   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:02.022636   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:02.022570   63257 retry.go:31] will retry after 692.025501ms: waiting for machine to come up
	I0416 01:00:02.716548   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:02.717032   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:02.717061   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:02.716992   63257 retry.go:31] will retry after 735.44304ms: waiting for machine to come up
	I0416 01:00:02.891638   62139 crio.go:462] duration metric: took 1.823700483s to copy over tarball
	I0416 01:00:02.891723   62139 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0416 01:00:06.137253   62139 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.245498092s)
	I0416 01:00:06.137283   62139 crio.go:469] duration metric: took 3.245614896s to extract the tarball
	I0416 01:00:06.137292   62139 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0416 01:00:06.181260   62139 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 01:00:06.224646   62139 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0416 01:00:06.224682   62139 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
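Since no preload tarball carried the v1.20.0 images, LoadCachedImages checks the CRI-O image store and transfers anything missing. A hedged way to spot-check a single image inside the node, using the same crictl the log already invokes:

  sudo crictl images registry.k8s.io/kube-apiserver:v1.20.0
  sudo crictl images --output json | grep -c kube-apiserver   # 0 when the image is absent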
	I0416 01:00:06.224762   62139 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 01:00:06.224815   62139 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0416 01:00:06.224851   62139 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0416 01:00:06.224821   62139 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0416 01:00:06.224768   62139 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0416 01:00:06.224797   62139 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0416 01:00:06.225121   62139 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0416 01:00:06.224797   62139 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0416 01:00:06.226485   62139 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0416 01:00:06.226505   62139 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0416 01:00:06.226516   62139 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0416 01:00:06.226580   62139 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0416 01:00:06.226729   62139 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0416 01:00:06.227296   62139 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 01:00:06.227311   62139 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0416 01:00:06.227315   62139 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0416 01:00:06.397101   62139 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0416 01:00:06.431142   62139 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0416 01:00:06.433152   62139 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0416 01:00:06.433876   62139 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0416 01:00:06.434844   62139 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0416 01:00:06.441478   62139 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0416 01:00:06.441524   62139 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0416 01:00:06.441558   62139 ssh_runner.go:195] Run: which crictl
	I0416 01:00:06.450391   62139 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0416 01:00:06.506375   62139 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0416 01:00:06.540080   62139 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0416 01:00:06.540250   62139 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0416 01:00:06.540121   62139 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0416 01:00:06.540299   62139 ssh_runner.go:195] Run: which crictl
	I0416 01:00:06.540305   62139 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0416 01:00:06.540343   62139 ssh_runner.go:195] Run: which crictl
	I0416 01:00:06.613287   62139 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0416 01:00:06.613305   62139 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0416 01:00:06.613334   62139 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0416 01:00:06.613339   62139 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0416 01:00:06.613381   62139 ssh_runner.go:195] Run: which crictl
	I0416 01:00:06.613381   62139 ssh_runner.go:195] Run: which crictl
	I0416 01:00:06.613490   62139 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0416 01:00:06.613522   62139 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0416 01:00:06.613569   62139 ssh_runner.go:195] Run: which crictl
	I0416 01:00:06.613384   62139 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0416 01:00:06.613620   62139 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0416 01:00:06.613657   62139 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0416 01:00:06.613716   62139 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0416 01:00:06.613722   62139 ssh_runner.go:195] Run: which crictl
	I0416 01:00:06.613665   62139 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0416 01:00:06.619153   62139 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0416 01:00:06.638065   62139 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0416 01:00:06.734018   62139 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0416 01:00:06.734134   62139 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0416 01:00:06.749273   62139 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0416 01:00:06.750536   62139 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0416 01:00:06.750576   62139 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0416 01:00:06.750655   62139 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0416 01:00:06.750594   62139 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0416 01:00:06.790321   62139 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0416 01:00:06.803564   62139 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0416 01:00:07.060494   62139 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 01:00:05.541219   61500 api_server.go:279] https://192.168.39.121:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0416 01:00:05.541261   61500 api_server.go:103] status: https://192.168.39.121:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0416 01:00:05.541279   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 01:00:05.585252   61500 api_server.go:279] https://192.168.39.121:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0416 01:00:05.585284   61500 api_server.go:103] status: https://192.168.39.121:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0416 01:00:05.950871   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 01:00:05.970682   61500 api_server.go:279] https://192.168.39.121:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0416 01:00:05.970725   61500 api_server.go:103] status: https://192.168.39.121:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0416 01:00:06.450780   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 01:00:06.457855   61500 api_server.go:279] https://192.168.39.121:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0416 01:00:06.457888   61500 api_server.go:103] status: https://192.168.39.121:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0416 01:00:06.950519   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 01:00:06.955476   61500 api_server.go:279] https://192.168.39.121:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0416 01:00:06.955505   61500 api_server.go:103] status: https://192.168.39.121:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0416 01:00:07.451155   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 01:00:07.463138   61500 api_server.go:279] https://192.168.39.121:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0416 01:00:07.463172   61500 api_server.go:103] status: https://192.168.39.121:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0416 01:00:03.453566   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:03.454098   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:03.454131   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:03.454033   63257 retry.go:31] will retry after 838.732671ms: waiting for machine to come up
	I0416 01:00:04.294692   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:04.295209   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:04.295237   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:04.295158   63257 retry.go:31] will retry after 1.302969512s: waiting for machine to come up
	I0416 01:00:05.599886   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:05.600406   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:05.600435   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:05.600378   63257 retry.go:31] will retry after 1.199501225s: waiting for machine to come up
	I0416 01:00:06.801741   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:06.802134   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:06.802153   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:06.802107   63257 retry.go:31] will retry after 1.631018672s: waiting for machine to come up
	I0416 01:00:07.951263   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 01:00:07.961911   61500 api_server.go:279] https://192.168.39.121:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0416 01:00:07.961946   61500 api_server.go:103] status: https://192.168.39.121:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0416 01:00:08.450413   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 01:00:08.458651   61500 api_server.go:279] https://192.168.39.121:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0416 01:00:08.458683   61500 api_server.go:103] status: https://192.168.39.121:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0416 01:00:08.950297   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 01:00:08.955847   61500 api_server.go:279] https://192.168.39.121:8443/healthz returned 200:
	ok
	I0416 01:00:08.964393   61500 api_server.go:141] control plane version: v1.30.0-rc.2
	I0416 01:00:08.964422   61500 api_server.go:131] duration metric: took 7.01421218s to wait for apiserver health ...
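The /healthz polling above begins as soon as the kube-apiserver process appears and tolerates the 403/500 responses until the post-start hooks (RBAC bootstrap, system priority classes, API service discovery) report ok. The verbose breakdown seen in the log can be reproduced by hand (hypothetical; run inside the node, and anonymous requests may still be rejected with 403 early on, as shown above):

  sudo curl -sk 'https://192.168.39.121:8443/healthz?verbose'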
	I0416 01:00:08.964432   61500 cni.go:84] Creating CNI manager for ""
	I0416 01:00:08.964445   61500 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 01:00:08.966249   61500 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0416 01:00:07.207951   62139 cache_images.go:92] duration metric: took 983.249797ms to LoadCachedImages
	W0416 01:00:07.286619   62139 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18647-7542/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0416 01:00:07.286654   62139 kubeadm.go:928] updating node { 192.168.83.98 8443 v1.20.0 crio true true} ...
	I0416 01:00:07.286815   62139 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-800769 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.83.98
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-800769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 01:00:07.286916   62139 ssh_runner.go:195] Run: crio config
	I0416 01:00:07.338016   62139 cni.go:84] Creating CNI manager for ""
	I0416 01:00:07.338038   62139 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 01:00:07.338049   62139 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 01:00:07.338072   62139 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.83.98 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-800769 NodeName:old-k8s-version-800769 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.83.98"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.83.98 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0416 01:00:07.338207   62139 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.83.98
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-800769"
	  kubeletExtraArgs:
	    node-ip: 192.168.83.98
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.83.98"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0416 01:00:07.338273   62139 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0416 01:00:07.349347   62139 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 01:00:07.349432   62139 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0416 01:00:07.361389   62139 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0416 01:00:07.379714   62139 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 01:00:07.397953   62139 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0416 01:00:07.416901   62139 ssh_runner.go:195] Run: grep 192.168.83.98	control-plane.minikube.internal$ /etc/hosts
	I0416 01:00:07.420904   62139 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.83.98	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 01:00:07.436685   62139 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 01:00:07.567945   62139 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 01:00:07.587829   62139 certs.go:68] Setting up /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769 for IP: 192.168.83.98
	I0416 01:00:07.587858   62139 certs.go:194] generating shared ca certs ...
	I0416 01:00:07.587880   62139 certs.go:226] acquiring lock for ca certs: {Name:mkcfa1570e683d94647c63485e1bbb8cf0788316 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:00:07.588087   62139 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key
	I0416 01:00:07.588155   62139 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key
	I0416 01:00:07.588171   62139 certs.go:256] generating profile certs ...
	I0416 01:00:07.606683   62139 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/client.key
	I0416 01:00:07.606823   62139 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/apiserver.key.efc35655
	I0416 01:00:07.606872   62139 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/proxy-client.key
	I0416 01:00:07.607040   62139 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem (1338 bytes)
	W0416 01:00:07.607087   62139 certs.go:480] ignoring /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897_empty.pem, impossibly tiny 0 bytes
	I0416 01:00:07.607114   62139 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem (1679 bytes)
	I0416 01:00:07.607172   62139 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem (1082 bytes)
	I0416 01:00:07.607204   62139 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem (1123 bytes)
	I0416 01:00:07.607234   62139 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem (1675 bytes)
	I0416 01:00:07.607283   62139 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem (1708 bytes)
	I0416 01:00:07.608127   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 01:00:07.658868   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 01:00:07.703378   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 01:00:07.743203   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0416 01:00:07.787335   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0416 01:00:07.823630   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0416 01:00:07.854198   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 01:00:07.881813   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0416 01:00:07.909698   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 01:00:07.935341   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem --> /usr/share/ca-certificates/14897.pem (1338 bytes)
	I0416 01:00:07.963102   62139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /usr/share/ca-certificates/148972.pem (1708 bytes)
	I0416 01:00:07.989657   62139 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 01:00:08.009203   62139 ssh_runner.go:195] Run: openssl version
	I0416 01:00:08.015677   62139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 01:00:08.027077   62139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:00:08.032096   62139 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:00:08.032179   62139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:00:08.038672   62139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 01:00:08.054256   62139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14897.pem && ln -fs /usr/share/ca-certificates/14897.pem /etc/ssl/certs/14897.pem"
	I0416 01:00:08.065287   62139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14897.pem
	I0416 01:00:08.069846   62139 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 23:49 /usr/share/ca-certificates/14897.pem
	I0416 01:00:08.069907   62139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14897.pem
	I0416 01:00:08.075899   62139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14897.pem /etc/ssl/certs/51391683.0"
	I0416 01:00:08.087272   62139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148972.pem && ln -fs /usr/share/ca-certificates/148972.pem /etc/ssl/certs/148972.pem"
	I0416 01:00:08.098494   62139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148972.pem
	I0416 01:00:08.103168   62139 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 23:49 /usr/share/ca-certificates/148972.pem
	I0416 01:00:08.103246   62139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148972.pem
	I0416 01:00:08.109202   62139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148972.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 01:00:08.120143   62139 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 01:00:08.125027   62139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0416 01:00:08.131716   62139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0416 01:00:08.138024   62139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0416 01:00:08.144291   62139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0416 01:00:08.150741   62139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0416 01:00:08.156931   62139 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0416 01:00:08.163147   62139 kubeadm.go:391] StartCluster: {Name:old-k8s-version-800769 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-800769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.83.98 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 01:00:08.163254   62139 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0416 01:00:08.163298   62139 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 01:00:08.201923   62139 cri.go:89] found id: ""
	I0416 01:00:08.202000   62139 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0416 01:00:08.212441   62139 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0416 01:00:08.212462   62139 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0416 01:00:08.212467   62139 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0416 01:00:08.212514   62139 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0416 01:00:08.222702   62139 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0416 01:00:08.223670   62139 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-800769" does not appear in /home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0416 01:00:08.224332   62139 kubeconfig.go:62] /home/jenkins/minikube-integration/18647-7542/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-800769" cluster setting kubeconfig missing "old-k8s-version-800769" context setting]
	I0416 01:00:08.225340   62139 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/kubeconfig: {Name:mkbb3b028de7d57df8335e83f6dfa1b0eacb2fb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:00:08.343775   62139 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0416 01:00:08.355942   62139 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.83.98
	I0416 01:00:08.355986   62139 kubeadm.go:1154] stopping kube-system containers ...
	I0416 01:00:08.356007   62139 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0416 01:00:08.356081   62139 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 01:00:08.398894   62139 cri.go:89] found id: ""
	I0416 01:00:08.398976   62139 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0416 01:00:08.416343   62139 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 01:00:08.426901   62139 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 01:00:08.426926   62139 kubeadm.go:156] found existing configuration files:
	
	I0416 01:00:08.426981   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 01:00:08.437870   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 01:00:08.437942   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 01:00:08.452256   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 01:00:08.466375   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 01:00:08.466447   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 01:00:08.477246   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 01:00:08.487547   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 01:00:08.487615   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 01:00:08.504171   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 01:00:08.515265   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 01:00:08.515332   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 01:00:08.525186   62139 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 01:00:08.535381   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:08.657456   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:09.504421   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:09.781478   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:09.950913   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:10.044772   62139 api_server.go:52] waiting for apiserver process to appear ...
	I0416 01:00:10.044871   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:10.545002   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:11.045664   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:11.545083   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:12.045593   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:08.967643   61500 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0416 01:00:08.986743   61500 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0416 01:00:09.011229   61500 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 01:00:09.022810   61500 system_pods.go:59] 8 kube-system pods found
	I0416 01:00:09.022858   61500 system_pods.go:61] "coredns-7db6d8ff4d-xxlkb" [b1ec79ef-e16c-4feb-94ec-5dc85645867f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 01:00:09.022869   61500 system_pods.go:61] "etcd-no-preload-572602" [f29f3efe-bee4-4d8c-9d49-68008ad50a9d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0416 01:00:09.022881   61500 system_pods.go:61] "kube-apiserver-no-preload-572602" [dd740f94-bfd5-4043-9522-5b8a932690cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0416 01:00:09.022893   61500 system_pods.go:61] "kube-controller-manager-no-preload-572602" [2778e1a7-a7e3-4ad6-a265-552e78b6b195] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0416 01:00:09.022901   61500 system_pods.go:61] "kube-proxy-v9fmp" [70ab6236-c758-48eb-85a7-8f7721730a20] Running
	I0416 01:00:09.022908   61500 system_pods.go:61] "kube-scheduler-no-preload-572602" [bb8650bb-657e-49f1-9cee-4437879be44d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0416 01:00:09.022919   61500 system_pods.go:61] "metrics-server-569cc877fc-llsfr" [ad421803-6236-44df-a15d-c890a3a10dff] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 01:00:09.022925   61500 system_pods.go:61] "storage-provisioner" [ec2dd6e2-33db-4888-8945-9879821c92fc] Running
	I0416 01:00:09.022934   61500 system_pods.go:74] duration metric: took 11.661356ms to wait for pod list to return data ...
	I0416 01:00:09.022950   61500 node_conditions.go:102] verifying NodePressure condition ...
	I0416 01:00:09.027411   61500 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 01:00:09.027445   61500 node_conditions.go:123] node cpu capacity is 2
	I0416 01:00:09.027459   61500 node_conditions.go:105] duration metric: took 4.503043ms to run NodePressure ...
	I0416 01:00:09.027480   61500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:09.307796   61500 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0416 01:00:09.313534   61500 kubeadm.go:733] kubelet initialised
	I0416 01:00:09.313567   61500 kubeadm.go:734] duration metric: took 5.734401ms waiting for restarted kubelet to initialise ...
	I0416 01:00:09.313580   61500 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:00:09.320900   61500 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-xxlkb" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:09.327569   61500 pod_ready.go:97] node "no-preload-572602" hosting pod "coredns-7db6d8ff4d-xxlkb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-572602" has status "Ready":"False"
	I0416 01:00:09.327606   61500 pod_ready.go:81] duration metric: took 6.67541ms for pod "coredns-7db6d8ff4d-xxlkb" in "kube-system" namespace to be "Ready" ...
	E0416 01:00:09.327621   61500 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-572602" hosting pod "coredns-7db6d8ff4d-xxlkb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-572602" has status "Ready":"False"
	I0416 01:00:09.327633   61500 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:09.333714   61500 pod_ready.go:97] node "no-preload-572602" hosting pod "etcd-no-preload-572602" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-572602" has status "Ready":"False"
	I0416 01:00:09.333746   61500 pod_ready.go:81] duration metric: took 6.094825ms for pod "etcd-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	E0416 01:00:09.333759   61500 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-572602" hosting pod "etcd-no-preload-572602" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-572602" has status "Ready":"False"
	I0416 01:00:09.333768   61500 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:09.338980   61500 pod_ready.go:97] node "no-preload-572602" hosting pod "kube-apiserver-no-preload-572602" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-572602" has status "Ready":"False"
	I0416 01:00:09.339006   61500 pod_ready.go:81] duration metric: took 5.230122ms for pod "kube-apiserver-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	E0416 01:00:09.339017   61500 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-572602" hosting pod "kube-apiserver-no-preload-572602" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-572602" has status "Ready":"False"
	I0416 01:00:09.339033   61500 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:09.415418   61500 pod_ready.go:97] node "no-preload-572602" hosting pod "kube-controller-manager-no-preload-572602" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-572602" has status "Ready":"False"
	I0416 01:00:09.415450   61500 pod_ready.go:81] duration metric: took 76.40508ms for pod "kube-controller-manager-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	E0416 01:00:09.415462   61500 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-572602" hosting pod "kube-controller-manager-no-preload-572602" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-572602" has status "Ready":"False"
	I0416 01:00:09.415470   61500 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-v9fmp" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:09.815907   61500 pod_ready.go:92] pod "kube-proxy-v9fmp" in "kube-system" namespace has status "Ready":"True"
	I0416 01:00:09.815945   61500 pod_ready.go:81] duration metric: took 400.462786ms for pod "kube-proxy-v9fmp" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:09.815959   61500 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:11.824269   61500 pod_ready.go:102] pod "kube-scheduler-no-preload-572602" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:08.434523   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:08.435039   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:08.435067   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:08.434988   63257 retry.go:31] will retry after 2.819136125s: waiting for machine to come up
	I0416 01:00:11.256238   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:11.256704   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:11.256722   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:11.256664   63257 retry.go:31] will retry after 3.074881299s: waiting for machine to come up
	I0416 01:00:12.545696   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:13.045935   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:13.545810   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:14.045682   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:14.545524   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:15.045110   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:15.545792   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:16.045843   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:16.545684   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:17.045401   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:14.322436   61500 pod_ready.go:102] pod "kube-scheduler-no-preload-572602" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:16.821648   61500 pod_ready.go:102] pod "kube-scheduler-no-preload-572602" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:14.335004   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:14.335391   62747 main.go:141] libmachine: (embed-certs-617092) DBG | unable to find current IP address of domain embed-certs-617092 in network mk-embed-certs-617092
	I0416 01:00:14.335437   62747 main.go:141] libmachine: (embed-certs-617092) DBG | I0416 01:00:14.335343   63257 retry.go:31] will retry after 4.248377683s: waiting for machine to come up
	I0416 01:00:20.014452   61267 start.go:364] duration metric: took 53.932663013s to acquireMachinesLock for "default-k8s-diff-port-653942"
	I0416 01:00:20.014507   61267 start.go:96] Skipping create...Using existing machine configuration
	I0416 01:00:20.014515   61267 fix.go:54] fixHost starting: 
	I0416 01:00:20.014929   61267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:00:20.014964   61267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:00:20.033099   61267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42949
	I0416 01:00:20.033554   61267 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:00:20.034077   61267 main.go:141] libmachine: Using API Version  1
	I0416 01:00:20.034104   61267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:00:20.034458   61267 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:00:20.034665   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 01:00:20.034812   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetState
	I0416 01:00:20.036559   61267 fix.go:112] recreateIfNeeded on default-k8s-diff-port-653942: state=Stopped err=<nil>
	I0416 01:00:20.036588   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	W0416 01:00:20.036751   61267 fix.go:138] unexpected machine state, will restart: <nil>
	I0416 01:00:20.038774   61267 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-653942" ...
	I0416 01:00:18.588875   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.589320   62747 main.go:141] libmachine: (embed-certs-617092) Found IP for machine: 192.168.61.225
	I0416 01:00:18.589347   62747 main.go:141] libmachine: (embed-certs-617092) Reserving static IP address...
	I0416 01:00:18.589362   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has current primary IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.589699   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "embed-certs-617092", mac: "52:54:00:86:1b:62", ip: "192.168.61.225"} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:18.589728   62747 main.go:141] libmachine: (embed-certs-617092) Reserved static IP address: 192.168.61.225
	I0416 01:00:18.589752   62747 main.go:141] libmachine: (embed-certs-617092) DBG | skip adding static IP to network mk-embed-certs-617092 - found existing host DHCP lease matching {name: "embed-certs-617092", mac: "52:54:00:86:1b:62", ip: "192.168.61.225"}
	I0416 01:00:18.589771   62747 main.go:141] libmachine: (embed-certs-617092) DBG | Getting to WaitForSSH function...
	I0416 01:00:18.589808   62747 main.go:141] libmachine: (embed-certs-617092) Waiting for SSH to be available...
	I0416 01:00:18.591590   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.591858   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:18.591885   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.591995   62747 main.go:141] libmachine: (embed-certs-617092) DBG | Using SSH client type: external
	I0416 01:00:18.592027   62747 main.go:141] libmachine: (embed-certs-617092) DBG | Using SSH private key: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/embed-certs-617092/id_rsa (-rw-------)
	I0416 01:00:18.592058   62747 main.go:141] libmachine: (embed-certs-617092) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.225 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18647-7542/.minikube/machines/embed-certs-617092/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0416 01:00:18.592072   62747 main.go:141] libmachine: (embed-certs-617092) DBG | About to run SSH command:
	I0416 01:00:18.592084   62747 main.go:141] libmachine: (embed-certs-617092) DBG | exit 0
	I0416 01:00:18.717336   62747 main.go:141] libmachine: (embed-certs-617092) DBG | SSH cmd err, output: <nil>: 
	I0416 01:00:18.717759   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetConfigRaw
	I0416 01:00:18.718347   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetIP
	I0416 01:00:18.720640   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.721040   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:18.721086   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.721300   62747 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092/config.json ...
	I0416 01:00:18.721481   62747 machine.go:94] provisionDockerMachine start ...
	I0416 01:00:18.721501   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 01:00:18.721700   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:00:18.723610   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.723924   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:18.723946   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.724126   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:00:18.724345   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:18.724512   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:18.724616   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:00:18.724737   62747 main.go:141] libmachine: Using SSH client type: native
	I0416 01:00:18.725049   62747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.225 22 <nil> <nil>}
	I0416 01:00:18.725199   62747 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 01:00:18.834014   62747 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 01:00:18.834041   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetMachineName
	I0416 01:00:18.834257   62747 buildroot.go:166] provisioning hostname "embed-certs-617092"
	I0416 01:00:18.834280   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetMachineName
	I0416 01:00:18.834495   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:00:18.836959   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.837282   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:18.837333   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.837417   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:00:18.837588   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:18.837755   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:18.837962   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:00:18.838152   62747 main.go:141] libmachine: Using SSH client type: native
	I0416 01:00:18.838324   62747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.225 22 <nil> <nil>}
	I0416 01:00:18.838342   62747 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-617092 && echo "embed-certs-617092" | sudo tee /etc/hostname
	I0416 01:00:18.959828   62747 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-617092
	
	I0416 01:00:18.959865   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:00:18.962661   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.962997   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:18.963029   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:18.963174   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:00:18.963351   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:18.963488   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:18.963609   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:00:18.963747   62747 main.go:141] libmachine: Using SSH client type: native
	I0416 01:00:18.963949   62747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.225 22 <nil> <nil>}
	I0416 01:00:18.963967   62747 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-617092' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-617092/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-617092' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 01:00:19.079309   62747 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 01:00:19.079341   62747 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18647-7542/.minikube CaCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18647-7542/.minikube}
	I0416 01:00:19.079400   62747 buildroot.go:174] setting up certificates
	I0416 01:00:19.079409   62747 provision.go:84] configureAuth start
	I0416 01:00:19.079423   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetMachineName
	I0416 01:00:19.079723   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetIP
	I0416 01:00:19.082430   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.082809   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:19.082838   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.082994   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:00:19.085476   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.085802   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:19.085825   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.085952   62747 provision.go:143] copyHostCerts
	I0416 01:00:19.086006   62747 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem, removing ...
	I0416 01:00:19.086022   62747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem
	I0416 01:00:19.086077   62747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem (1123 bytes)
	I0416 01:00:19.086165   62747 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem, removing ...
	I0416 01:00:19.086174   62747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem
	I0416 01:00:19.086193   62747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem (1675 bytes)
	I0416 01:00:19.086244   62747 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem, removing ...
	I0416 01:00:19.086251   62747 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem
	I0416 01:00:19.086270   62747 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem (1082 bytes)
	I0416 01:00:19.086336   62747 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem org=jenkins.embed-certs-617092 san=[127.0.0.1 192.168.61.225 embed-certs-617092 localhost minikube]
	I0416 01:00:19.330622   62747 provision.go:177] copyRemoteCerts
	I0416 01:00:19.330687   62747 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 01:00:19.330712   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:00:19.333264   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.333618   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:19.333645   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.333798   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:00:19.333979   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:19.334122   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:00:19.334235   62747 sshutil.go:53] new ssh client: &{IP:192.168.61.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/embed-certs-617092/id_rsa Username:docker}
	I0416 01:00:19.415820   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0416 01:00:19.442985   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0416 01:00:19.468427   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0416 01:00:19.496640   62747 provision.go:87] duration metric: took 417.215523ms to configureAuth
	I0416 01:00:19.496676   62747 buildroot.go:189] setting minikube options for container-runtime
	I0416 01:00:19.496857   62747 config.go:182] Loaded profile config "embed-certs-617092": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 01:00:19.496929   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:00:19.499561   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.499933   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:19.499981   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.500132   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:00:19.500352   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:19.500529   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:19.500671   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:00:19.500823   62747 main.go:141] libmachine: Using SSH client type: native
	I0416 01:00:19.501026   62747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.225 22 <nil> <nil>}
	I0416 01:00:19.501046   62747 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0416 01:00:19.775400   62747 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0416 01:00:19.775434   62747 machine.go:97] duration metric: took 1.053938445s to provisionDockerMachine
	I0416 01:00:19.775448   62747 start.go:293] postStartSetup for "embed-certs-617092" (driver="kvm2")
	I0416 01:00:19.775462   62747 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 01:00:19.775484   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 01:00:19.775853   62747 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 01:00:19.775886   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:00:19.778961   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.779327   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:19.779356   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.779510   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:00:19.779723   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:19.779883   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:00:19.780008   62747 sshutil.go:53] new ssh client: &{IP:192.168.61.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/embed-certs-617092/id_rsa Username:docker}
	I0416 01:00:19.865236   62747 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 01:00:19.869769   62747 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 01:00:19.869800   62747 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/addons for local assets ...
	I0416 01:00:19.869865   62747 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/files for local assets ...
	I0416 01:00:19.870010   62747 filesync.go:149] local asset: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem -> 148972.pem in /etc/ssl/certs
	I0416 01:00:19.870111   62747 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 01:00:19.880477   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /etc/ssl/certs/148972.pem (1708 bytes)
	I0416 01:00:19.905555   62747 start.go:296] duration metric: took 130.091868ms for postStartSetup
	I0416 01:00:19.905603   62747 fix.go:56] duration metric: took 20.511199999s for fixHost
	I0416 01:00:19.905629   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:00:19.908252   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.908593   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:19.908631   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:19.908770   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:00:19.908972   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:19.909129   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:19.909284   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:00:19.909448   62747 main.go:141] libmachine: Using SSH client type: native
	I0416 01:00:19.909607   62747 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.61.225 22 <nil> <nil>}
	I0416 01:00:19.909622   62747 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0416 01:00:20.014222   62747 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713229219.981820926
	
	I0416 01:00:20.014251   62747 fix.go:216] guest clock: 1713229219.981820926
	I0416 01:00:20.014262   62747 fix.go:229] Guest: 2024-04-16 01:00:19.981820926 +0000 UTC Remote: 2024-04-16 01:00:19.90560817 +0000 UTC m=+97.152894999 (delta=76.212756ms)
	I0416 01:00:20.014331   62747 fix.go:200] guest clock delta is within tolerance: 76.212756ms
	I0416 01:00:20.014339   62747 start.go:83] releasing machines lock for "embed-certs-617092", held for 20.619971021s
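
	The fix.go lines above read the guest clock over SSH with "date +%s.%N", compare it to the host's wall clock, and keep the existing host when the delta is inside tolerance. A minimal Go sketch of that comparison, using the guest timestamp from this log; the one-second tolerance and the helper name are placeholders, not minikube's actual values:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock converts the output of `date +%s.%N`
	// (e.g. "1713229219.981820926") into a time.Time. It assumes the
	// nanosecond field is the full 9-digit value, as in this log.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		// Guest value taken from the log above.
		guest, err := parseGuestClock("1713229219.981820926")
		if err != nil {
			panic(err)
		}
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = time.Second // hypothetical threshold for illustration
		fmt.Printf("guest clock delta %v (tolerance %v, within=%v)\n", delta, tolerance, delta <= tolerance)
	}
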
	I0416 01:00:20.014377   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 01:00:20.014676   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetIP
	I0416 01:00:20.017771   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:20.018204   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:20.018236   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:20.018446   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 01:00:20.018991   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 01:00:20.019172   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 01:00:20.019260   62747 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 01:00:20.019299   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:00:20.019439   62747 ssh_runner.go:195] Run: cat /version.json
	I0416 01:00:20.019466   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:00:20.022283   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:20.022554   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:20.022664   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:20.022688   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:20.022897   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:00:20.023088   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:20.023150   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:20.023177   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:20.023281   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:00:20.023431   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:00:20.023431   62747 sshutil.go:53] new ssh client: &{IP:192.168.61.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/embed-certs-617092/id_rsa Username:docker}
	I0416 01:00:20.023791   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:00:20.023942   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:00:20.024084   62747 sshutil.go:53] new ssh client: &{IP:192.168.61.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/embed-certs-617092/id_rsa Username:docker}
	I0416 01:00:20.138251   62747 ssh_runner.go:195] Run: systemctl --version
	I0416 01:00:20.145100   62747 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0416 01:00:20.299049   62747 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 01:00:20.307080   62747 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 01:00:20.307177   62747 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 01:00:20.326056   62747 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 01:00:20.326085   62747 start.go:494] detecting cgroup driver to use...
	I0416 01:00:20.326166   62747 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 01:00:20.343297   62747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 01:00:20.358136   62747 docker.go:217] disabling cri-docker service (if available) ...
	I0416 01:00:20.358201   62747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 01:00:20.372936   62747 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 01:00:20.387473   62747 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 01:00:20.515721   62747 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 01:00:20.680319   62747 docker.go:233] disabling docker service ...
	I0416 01:00:20.680413   62747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 01:00:20.700816   62747 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 01:00:20.724097   62747 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 01:00:20.885812   62747 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 01:00:21.037890   62747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
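
	Before CRI-O is configured, the competing runtimes are taken out of the way: cri-docker and docker have their sockets and services stopped, the sockets disabled, and the services masked so systemd cannot pull them back in. A rough local sketch of that disable-and-mask pass (plain os/exec here instead of the test's SSH runner; failures are tolerated because a unit may simply not be active on this image):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		for _, unit := range []string{"cri-docker", "docker"} {
			for _, cmd := range [][]string{
				{"systemctl", "stop", "-f", unit + ".socket"},
				{"systemctl", "stop", "-f", unit + ".service"},
				{"systemctl", "disable", unit + ".socket"},
				{"systemctl", "mask", unit + ".service"},
			} {
				// Mirror the systemctl sequence from the log; report but
				// do not abort on errors for units that are not present.
				if out, err := exec.Command("sudo", cmd...).CombinedOutput(); err != nil {
					fmt.Printf("%v: %v (%s)\n", cmd, err, out)
				}
			}
		}
	}
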
	I0416 01:00:21.055670   62747 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 01:00:21.078466   62747 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0416 01:00:21.078533   62747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:21.090135   62747 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 01:00:21.090200   62747 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:21.106122   62747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:21.123844   62747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:21.134923   62747 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 01:00:21.153565   62747 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:21.164751   62747 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:21.184880   62747 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:21.197711   62747 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 01:00:21.208615   62747 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0416 01:00:21.208669   62747 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0416 01:00:21.223906   62747 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 01:00:21.234873   62747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 01:00:21.405921   62747 ssh_runner.go:195] Run: sudo systemctl restart crio
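
	The block above probes net.bridge.bridge-nf-call-iptables, treats the missing sysctl as a cue to load br_netfilter, enables IPv4 forwarding, and then reloads systemd and restarts CRI-O. A sketch of that probe-then-fallback flow under the same assumptions (local exec standing in for the SSH runner):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(name string, args ...string) error {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("%s %v: %w: %s", name, args, err, out)
		}
		return nil
	}

	func main() {
		// Probe the bridge netfilter sysctl; if it is absent the module is
		// not loaded yet, which the log notes "might be okay", so fall back
		// to modprobe exactly as above.
		if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
			fmt.Println("sysctl probe failed, loading br_netfilter:", err)
			if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
				panic(err)
			}
		}
		// Enable IPv4 forwarding, reload units, restart CRI-O.
		for _, cmd := range [][]string{
			{"sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"},
			{"sudo", "systemctl", "daemon-reload"},
			{"sudo", "systemctl", "restart", "crio"},
		} {
			if err := run(cmd[0], cmd[1:]...); err != nil {
				panic(err)
			}
		}
	}
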
	I0416 01:00:21.564833   62747 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0416 01:00:21.564918   62747 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0416 01:00:21.570592   62747 start.go:562] Will wait 60s for crictl version
	I0416 01:00:21.570660   62747 ssh_runner.go:195] Run: which crictl
	I0416 01:00:21.575339   62747 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 01:00:21.617252   62747 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0416 01:00:21.617348   62747 ssh_runner.go:195] Run: crio --version
	I0416 01:00:21.648662   62747 ssh_runner.go:195] Run: crio --version
	I0416 01:00:21.683775   62747 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0416 01:00:17.544937   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:18.045282   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:18.545707   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:19.045821   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:19.545868   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:20.045069   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:20.545134   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:21.045607   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:21.545366   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:22.044998   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
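
	The 62139 stream above is a restart loop for another profile, polling pgrep -xnf kube-apiserver.*minikube.* roughly every 500ms until the apiserver process exists. A bare-bones version of that wait; the 60-second deadline is an assumption for illustration, not a value shown for this particular loop:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServerProcess polls pgrep until a kube-apiserver process
	// started for this minikube profile shows up, or the deadline passes.
	func waitForAPIServerProcess(timeout time.Duration) (time.Duration, error) {
		start := time.Now()
		for time.Since(start) < timeout {
			// Same pattern as the log: -x exact match, -n newest, -f full cmdline.
			if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				return time.Since(start), nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return 0, fmt.Errorf("kube-apiserver process did not appear within %v", timeout)
	}

	func main() {
		took, err := waitForAPIServerProcess(60 * time.Second)
		if err != nil {
			panic(err)
		}
		fmt.Printf("took %v to wait for apiserver process to appear\n", took)
	}
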
	I0416 01:00:20.040137   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .Start
	I0416 01:00:20.040355   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Ensuring networks are active...
	I0416 01:00:20.041103   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Ensuring network default is active
	I0416 01:00:20.041469   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Ensuring network mk-default-k8s-diff-port-653942 is active
	I0416 01:00:20.041869   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Getting domain xml...
	I0416 01:00:20.042474   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Creating domain...
	I0416 01:00:21.359375   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting to get IP...
	I0416 01:00:21.360333   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:21.360736   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:21.360807   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:21.360726   63461 retry.go:31] will retry after 290.970715ms: waiting for machine to come up
	I0416 01:00:21.653420   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:21.653883   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:21.653916   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:21.653841   63461 retry.go:31] will retry after 361.304618ms: waiting for machine to come up
	I0416 01:00:22.016540   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:22.017038   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:22.017071   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:22.016976   63461 retry.go:31] will retry after 411.249327ms: waiting for machine to come up
	I0416 01:00:18.322778   61500 pod_ready.go:92] pod "kube-scheduler-no-preload-572602" in "kube-system" namespace has status "Ready":"True"
	I0416 01:00:18.322799   61500 pod_ready.go:81] duration metric: took 8.506833323s for pod "kube-scheduler-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:18.322808   61500 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:20.328344   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:22.331157   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:21.685033   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetIP
	I0416 01:00:21.688407   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:21.688774   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:00:21.688809   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:00:21.689010   62747 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0416 01:00:21.693612   62747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 01:00:21.707524   62747 kubeadm.go:877] updating cluster {Name:embed-certs-617092 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.3 ClusterName:embed-certs-617092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.225 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 01:00:21.707657   62747 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 01:00:21.707699   62747 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 01:00:21.748697   62747 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0416 01:00:21.748785   62747 ssh_runner.go:195] Run: which lz4
	I0416 01:00:21.753521   62747 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0416 01:00:21.758125   62747 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0416 01:00:21.758158   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0416 01:00:22.545403   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:23.045303   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:23.544984   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:24.045882   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:24.545194   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:25.045010   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:25.545278   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:26.045702   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:26.545233   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:27.045814   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:22.429595   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:22.430124   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:22.430159   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:22.430087   63461 retry.go:31] will retry after 495.681984ms: waiting for machine to come up
	I0416 01:00:22.927476   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:22.927932   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:22.927959   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:22.927875   63461 retry.go:31] will retry after 506.264557ms: waiting for machine to come up
	I0416 01:00:23.435290   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:23.435742   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:23.435773   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:23.435689   63461 retry.go:31] will retry after 826.359716ms: waiting for machine to come up
	I0416 01:00:24.263672   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:24.264151   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:24.264183   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:24.264107   63461 retry.go:31] will retry after 873.35176ms: waiting for machine to come up
	I0416 01:00:25.138864   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:25.139318   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:25.139340   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:25.139308   63461 retry.go:31] will retry after 1.129546887s: waiting for machine to come up
	I0416 01:00:26.270364   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:26.270968   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:26.271000   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:26.270902   63461 retry.go:31] will retry after 1.441466368s: waiting for machine to come up
	I0416 01:00:24.830562   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:26.832057   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:23.353811   62747 crio.go:462] duration metric: took 1.600325005s to copy over tarball
	I0416 01:00:23.353885   62747 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0416 01:00:25.815443   62747 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.46152973s)
	I0416 01:00:25.815479   62747 crio.go:469] duration metric: took 2.461639439s to extract the tarball
	I0416 01:00:25.815489   62747 ssh_runner.go:146] rm: /preloaded.tar.lz4
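
	The preload path is: stat /preloaded.tar.lz4, copy the cached cri-o preload over only when it is missing, extract it into /var with security.capability xattrs preserved, then remove the tarball. A compressed sketch of that sequence, using the cache path from this log and a plain sudo cp in place of the scp step:

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		const tarball = "/preloaded.tar.lz4"
		const cached = "/home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4"

		// Only copy the ~400MB preload if it is not already on the node.
		if _, err := os.Stat(tarball); os.IsNotExist(err) {
			if err := exec.Command("sudo", "cp", cached, tarball).Run(); err != nil {
				panic(err)
			}
		}
		// Extract into /var, keeping security.capability xattrs, as in the log.
		if err := exec.Command("sudo", "tar", "--xattrs",
			"--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", tarball).Run(); err != nil {
			panic(err)
		}
		// Remove the tarball afterwards, mirroring the rm step above.
		_ = exec.Command("sudo", "rm", "-f", tarball).Run()
	}
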
	I0416 01:00:25.862653   62747 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 01:00:25.914416   62747 crio.go:514] all images are preloaded for cri-o runtime.
	I0416 01:00:25.914444   62747 cache_images.go:84] Images are preloaded, skipping loading
	I0416 01:00:25.914454   62747 kubeadm.go:928] updating node { 192.168.61.225 8443 v1.29.3 crio true true} ...
	I0416 01:00:25.914586   62747 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-617092 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.225
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:embed-certs-617092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 01:00:25.914680   62747 ssh_runner.go:195] Run: crio config
	I0416 01:00:25.970736   62747 cni.go:84] Creating CNI manager for ""
	I0416 01:00:25.970760   62747 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 01:00:25.970773   62747 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 01:00:25.970796   62747 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.225 APIServerPort:8443 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-617092 NodeName:embed-certs-617092 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.225"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.225 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 01:00:25.970949   62747 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.225
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-617092"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.225
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.225"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0416 01:00:25.971022   62747 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0416 01:00:25.985111   62747 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 01:00:25.985198   62747 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0416 01:00:25.996306   62747 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0416 01:00:26.013401   62747 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 01:00:26.030094   62747 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0416 01:00:26.048252   62747 ssh_runner.go:195] Run: grep 192.168.61.225	control-plane.minikube.internal$ /etc/hosts
	I0416 01:00:26.052717   62747 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.225	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 01:00:26.069538   62747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 01:00:26.205867   62747 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 01:00:26.224210   62747 certs.go:68] Setting up /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092 for IP: 192.168.61.225
	I0416 01:00:26.224237   62747 certs.go:194] generating shared ca certs ...
	I0416 01:00:26.224259   62747 certs.go:226] acquiring lock for ca certs: {Name:mkcfa1570e683d94647c63485e1bbb8cf0788316 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:00:26.224459   62747 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key
	I0416 01:00:26.224520   62747 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key
	I0416 01:00:26.224532   62747 certs.go:256] generating profile certs ...
	I0416 01:00:26.224646   62747 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092/client.key
	I0416 01:00:26.224723   62747 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092/apiserver.key.383097d4
	I0416 01:00:26.224773   62747 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092/proxy-client.key
	I0416 01:00:26.224932   62747 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem (1338 bytes)
	W0416 01:00:26.224973   62747 certs.go:480] ignoring /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897_empty.pem, impossibly tiny 0 bytes
	I0416 01:00:26.224982   62747 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem (1679 bytes)
	I0416 01:00:26.225014   62747 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem (1082 bytes)
	I0416 01:00:26.225050   62747 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem (1123 bytes)
	I0416 01:00:26.225085   62747 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem (1675 bytes)
	I0416 01:00:26.225126   62747 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem (1708 bytes)
	I0416 01:00:26.225872   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 01:00:26.282272   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 01:00:26.329827   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 01:00:26.366744   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0416 01:00:26.405845   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0416 01:00:26.440535   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0416 01:00:26.465371   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 01:00:26.491633   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/embed-certs-617092/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0416 01:00:26.518682   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 01:00:26.543992   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem --> /usr/share/ca-certificates/14897.pem (1338 bytes)
	I0416 01:00:26.573728   62747 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /usr/share/ca-certificates/148972.pem (1708 bytes)
	I0416 01:00:26.602308   62747 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 01:00:26.622491   62747 ssh_runner.go:195] Run: openssl version
	I0416 01:00:26.628805   62747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 01:00:26.643163   62747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:00:26.648292   62747 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:00:26.648351   62747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:00:26.654890   62747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 01:00:26.668501   62747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14897.pem && ln -fs /usr/share/ca-certificates/14897.pem /etc/ssl/certs/14897.pem"
	I0416 01:00:26.682038   62747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14897.pem
	I0416 01:00:26.687327   62747 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 23:49 /usr/share/ca-certificates/14897.pem
	I0416 01:00:26.687388   62747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14897.pem
	I0416 01:00:26.693557   62747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14897.pem /etc/ssl/certs/51391683.0"
	I0416 01:00:26.706161   62747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148972.pem && ln -fs /usr/share/ca-certificates/148972.pem /etc/ssl/certs/148972.pem"
	I0416 01:00:26.718432   62747 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148972.pem
	I0416 01:00:26.722989   62747 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 23:49 /usr/share/ca-certificates/148972.pem
	I0416 01:00:26.723050   62747 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148972.pem
	I0416 01:00:26.729311   62747 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148972.pem /etc/ssl/certs/3ec20f2e.0"
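
	Each CA bundle is copied into /usr/share/ca-certificates and then linked into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0, 51391683.0 and 3ec20f2e.0 above). A small sketch of deriving that link name with openssl x509 -hash, assuming the certificate is already on disk:

	package main

	import (
		"fmt"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// subjectHashLink returns the /etc/ssl/certs/<hash>.0 path a certificate
	// should be linked to, computing the subject-name hash with openssl
	// exactly as the commands in the log do.
	func subjectHashLink(certPath string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return "", err
		}
		hash := strings.TrimSpace(string(out))
		return filepath.Join("/etc/ssl/certs", hash+".0"), nil
	}

	func main() {
		link, err := subjectHashLink("/usr/share/ca-certificates/minikubeCA.pem")
		if err != nil {
			panic(err)
		}
		// The log creates the link with: sudo ln -fs <cert> <link>
		fmt.Printf("ln -fs /usr/share/ca-certificates/minikubeCA.pem %s\n", link)
	}
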
	I0416 01:00:26.744138   62747 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 01:00:26.749490   62747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0416 01:00:26.756478   62747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0416 01:00:26.763326   62747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0416 01:00:26.770194   62747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0416 01:00:26.776641   62747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0416 01:00:26.783022   62747 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0416 01:00:26.789543   62747 kubeadm.go:391] StartCluster: {Name:embed-certs-617092 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29
.3 ClusterName:embed-certs-617092 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.225 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 01:00:26.789654   62747 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0416 01:00:26.789717   62747 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 01:00:26.831148   62747 cri.go:89] found id: ""
	I0416 01:00:26.831219   62747 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0416 01:00:26.844372   62747 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0416 01:00:26.844398   62747 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0416 01:00:26.844403   62747 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0416 01:00:26.844454   62747 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0416 01:00:26.858173   62747 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0416 01:00:26.859210   62747 kubeconfig.go:125] found "embed-certs-617092" server: "https://192.168.61.225:8443"
	I0416 01:00:26.861233   62747 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0416 01:00:26.874068   62747 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.225
	I0416 01:00:26.874105   62747 kubeadm.go:1154] stopping kube-system containers ...
	I0416 01:00:26.874119   62747 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0416 01:00:26.874177   62747 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 01:00:26.926456   62747 cri.go:89] found id: ""
	I0416 01:00:26.926537   62747 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0416 01:00:26.945874   62747 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 01:00:26.960207   62747 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 01:00:26.960229   62747 kubeadm.go:156] found existing configuration files:
	
	I0416 01:00:26.960282   62747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 01:00:26.971895   62747 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 01:00:26.971958   62747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 01:00:26.982956   62747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 01:00:26.993935   62747 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 01:00:26.994000   62747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 01:00:27.005216   62747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 01:00:27.015624   62747 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 01:00:27.015680   62747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 01:00:27.026513   62747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 01:00:27.037062   62747 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 01:00:27.037118   62747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 01:00:27.048173   62747 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 01:00:27.061987   62747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:27.190243   62747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:27.545025   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:28.045752   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:28.545833   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:29.045264   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:29.545316   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:30.045594   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:30.545046   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:31.045139   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:31.545251   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:32.045710   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:27.714372   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:27.714822   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:27.714854   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:27.714767   63461 retry.go:31] will retry after 1.810511131s: waiting for machine to come up
	I0416 01:00:29.527497   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:29.528041   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:29.528072   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:29.527983   63461 retry.go:31] will retry after 2.163921338s: waiting for machine to come up
	I0416 01:00:31.694203   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:31.694741   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:31.694769   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:31.694714   63461 retry.go:31] will retry after 2.245150923s: waiting for machine to come up
	I0416 01:00:29.332159   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:31.332218   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:28.252295   62747 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.062013928s)
	I0416 01:00:28.252331   62747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:28.468110   62747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:28.553370   62747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
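
	Restarting the primary control plane re-runs the individual kubeadm init phases against the staged /var/tmp/minikube/kubeadm.yaml: certs, kubeconfig, kubelet-start, control-plane, and etcd. A sketch of that phase sequence, assuming the pinned kubeadm under /var/lib/minikube/binaries/v1.29.3 as in the log:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		phases := [][]string{
			{"certs", "all"},
			{"kubeconfig", "all"},
			{"kubelet-start"},
			{"control-plane", "all"},
			{"etcd", "local"},
		}
		for _, phase := range phases {
			// Prepend the pinned binaries dir to PATH, as the log's
			// `sudo env PATH=... kubeadm init phase ...` invocations do.
			args := append([]string{"env",
				"PATH=/var/lib/minikube/binaries/v1.29.3:" + os.Getenv("PATH"),
				"kubeadm", "init", "phase"}, phase...)
			args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
			cmd := exec.Command("sudo", args...)
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			fmt.Println("running kubeadm init phase", phase)
			if err := cmd.Run(); err != nil {
				panic(err)
			}
		}
	}
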
	I0416 01:00:28.676185   62747 api_server.go:52] waiting for apiserver process to appear ...
	I0416 01:00:28.676273   62747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:29.176826   62747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:29.676498   62747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:29.702138   62747 api_server.go:72] duration metric: took 1.025950998s to wait for apiserver process to appear ...
	I0416 01:00:29.702170   62747 api_server.go:88] waiting for apiserver healthz status ...
	I0416 01:00:29.702192   62747 api_server.go:253] Checking apiserver healthz at https://192.168.61.225:8443/healthz ...
	I0416 01:00:29.702822   62747 api_server.go:269] stopped: https://192.168.61.225:8443/healthz: Get "https://192.168.61.225:8443/healthz": dial tcp 192.168.61.225:8443: connect: connection refused
	I0416 01:00:30.203298   62747 api_server.go:253] Checking apiserver healthz at https://192.168.61.225:8443/healthz ...
	I0416 01:00:32.951714   62747 api_server.go:279] https://192.168.61.225:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0416 01:00:32.951754   62747 api_server.go:103] status: https://192.168.61.225:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0416 01:00:32.951779   62747 api_server.go:253] Checking apiserver healthz at https://192.168.61.225:8443/healthz ...
	I0416 01:00:33.003631   62747 api_server.go:279] https://192.168.61.225:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0416 01:00:33.003672   62747 api_server.go:103] status: https://192.168.61.225:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0416 01:00:33.202825   62747 api_server.go:253] Checking apiserver healthz at https://192.168.61.225:8443/healthz ...
	I0416 01:00:33.208168   62747 api_server.go:279] https://192.168.61.225:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0416 01:00:33.208201   62747 api_server.go:103] status: https://192.168.61.225:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0416 01:00:33.702532   62747 api_server.go:253] Checking apiserver healthz at https://192.168.61.225:8443/healthz ...
	I0416 01:00:33.712501   62747 api_server.go:279] https://192.168.61.225:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0416 01:00:33.712542   62747 api_server.go:103] status: https://192.168.61.225:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0416 01:00:34.203157   62747 api_server.go:253] Checking apiserver healthz at https://192.168.61.225:8443/healthz ...
	I0416 01:00:34.210567   62747 api_server.go:279] https://192.168.61.225:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0416 01:00:34.210597   62747 api_server.go:103] status: https://192.168.61.225:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0416 01:00:34.702568   62747 api_server.go:253] Checking apiserver healthz at https://192.168.61.225:8443/healthz ...
	I0416 01:00:34.711690   62747 api_server.go:279] https://192.168.61.225:8443/healthz returned 200:
	ok
	I0416 01:00:34.723252   62747 api_server.go:141] control plane version: v1.29.3
	I0416 01:00:34.723279   62747 api_server.go:131] duration metric: took 5.021102658s to wait for apiserver health ...
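	The 403 responses above are a transient, expected state rather than a failure: the healthz probe is made anonymously, and anonymous access to /healthz is only granted once the rbac/bootstrap-roles post-start hook (still reported as failed in the 500 responses) has created the bootstrap roles. To repeat the probe by hand from inside the guest, something along these lines should work (the certificate names assume the usual kubeadm-style layout under /var/lib/minikube/certs and are not taken from this log):

	$ curl --cacert /var/lib/minikube/certs/ca.crt \
	       --cert /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	       --key /var/lib/minikube/certs/apiserver-kubelet-client.key \
	       https://192.168.61.225:8443/healthz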
	I0416 01:00:34.723287   62747 cni.go:84] Creating CNI manager for ""
	I0416 01:00:34.723293   62747 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 01:00:34.724989   62747 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0416 01:00:32.545963   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:33.045020   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:33.545657   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:34.045706   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:34.544972   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:35.045252   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:35.545087   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:36.045080   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:36.545787   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:37.045046   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:33.942412   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:33.942923   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | unable to find current IP address of domain default-k8s-diff-port-653942 in network mk-default-k8s-diff-port-653942
	I0416 01:00:33.942952   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | I0416 01:00:33.942870   63461 retry.go:31] will retry after 3.750613392s: waiting for machine to come up
	I0416 01:00:33.829307   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:35.830613   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:34.726400   62747 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0416 01:00:34.746294   62747 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
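	The 496-byte file copied above is the bridge CNI config minikube generates after recommending the bridge CNI for the kvm2 + crio combination. Its exact contents are not captured in the log; a typical bridge conflist has roughly this shape (values illustrative, not read from the file):

	$ cat /etc/cni/net.d/1-k8s.conflist
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}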
	I0416 01:00:34.767028   62747 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 01:00:34.778610   62747 system_pods.go:59] 8 kube-system pods found
	I0416 01:00:34.778653   62747 system_pods.go:61] "coredns-76f75df574-dxzhk" [a71b29ec-8602-47d6-825c-a1a54a1758d0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 01:00:34.778664   62747 system_pods.go:61] "etcd-embed-certs-617092" [8966501b-6a06-4e0b-acb6-77df5f53cd3d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0416 01:00:34.778674   62747 system_pods.go:61] "kube-apiserver-embed-certs-617092" [7ad29687-3964-4a5b-8939-bcf3dc71d578] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0416 01:00:34.778685   62747 system_pods.go:61] "kube-controller-manager-embed-certs-617092" [78b21361-f302-43f3-8356-ea15fad4edb3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0416 01:00:34.778695   62747 system_pods.go:61] "kube-proxy-xtdf4" [4e8fe1da-9a02-428e-94f1-595f2e9170e0] Running
	I0416 01:00:34.778703   62747 system_pods.go:61] "kube-scheduler-embed-certs-617092" [c03d87b4-26d3-4bff-8f53-8844260f1ed8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0416 01:00:34.778720   62747 system_pods.go:61] "metrics-server-57f55c9bc5-knnvn" [4607d12d-25db-4637-be17-e2665970c0a4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 01:00:34.778729   62747 system_pods.go:61] "storage-provisioner" [41362b6c-fde7-45fa-b6cf-1d7acef3d4ce] Running
	I0416 01:00:34.778741   62747 system_pods.go:74] duration metric: took 11.690083ms to wait for pod list to return data ...
	I0416 01:00:34.778755   62747 node_conditions.go:102] verifying NodePressure condition ...
	I0416 01:00:34.782283   62747 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 01:00:34.782319   62747 node_conditions.go:123] node cpu capacity is 2
	I0416 01:00:34.782329   62747 node_conditions.go:105] duration metric: took 3.566074ms to run NodePressure ...
	I0416 01:00:34.782344   62747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:35.056194   62747 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0416 01:00:35.068546   62747 kubeadm.go:733] kubelet initialised
	I0416 01:00:35.068571   62747 kubeadm.go:734] duration metric: took 12.345347ms waiting for restarted kubelet to initialise ...
	I0416 01:00:35.068581   62747 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:00:35.075013   62747 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-dxzhk" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:37.081976   62747 pod_ready.go:102] pod "coredns-76f75df574-dxzhk" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:37.697323   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:37.697830   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has current primary IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:37.697857   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Found IP for machine: 192.168.50.216
	I0416 01:00:37.697873   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Reserving static IP address...
	I0416 01:00:37.698323   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Reserved static IP address: 192.168.50.216
	I0416 01:00:37.698345   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Waiting for SSH to be available...
	I0416 01:00:37.698372   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-653942", mac: "52:54:00:4b:a2:47", ip: "192.168.50.216"} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:37.698418   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | skip adding static IP to network mk-default-k8s-diff-port-653942 - found existing host DHCP lease matching {name: "default-k8s-diff-port-653942", mac: "52:54:00:4b:a2:47", ip: "192.168.50.216"}
	I0416 01:00:37.698450   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | Getting to WaitForSSH function...
	I0416 01:00:37.700942   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:37.701312   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:37.701346   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:37.701520   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | Using SSH client type: external
	I0416 01:00:37.701567   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | Using SSH private key: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/default-k8s-diff-port-653942/id_rsa (-rw-------)
	I0416 01:00:37.701621   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.216 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18647-7542/.minikube/machines/default-k8s-diff-port-653942/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0416 01:00:37.701676   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | About to run SSH command:
	I0416 01:00:37.701712   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | exit 0
	I0416 01:00:37.829860   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | SSH cmd err, output: <nil>: 
	I0416 01:00:37.830254   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetConfigRaw
	I0416 01:00:37.830931   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetIP
	I0416 01:00:37.833361   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:37.833755   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:37.833788   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:37.834026   61267 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/config.json ...
	I0416 01:00:37.834198   61267 machine.go:94] provisionDockerMachine start ...
	I0416 01:00:37.834214   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 01:00:37.834426   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:00:37.836809   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:37.837221   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:37.837251   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:37.837377   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:00:37.837588   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:37.837737   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:37.837869   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:00:37.838023   61267 main.go:141] libmachine: Using SSH client type: native
	I0416 01:00:37.838208   61267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.216 22 <nil> <nil>}
	I0416 01:00:37.838219   61267 main.go:141] libmachine: About to run SSH command:
	hostname
	I0416 01:00:37.950999   61267 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0416 01:00:37.951031   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetMachineName
	I0416 01:00:37.951271   61267 buildroot.go:166] provisioning hostname "default-k8s-diff-port-653942"
	I0416 01:00:37.951303   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetMachineName
	I0416 01:00:37.951483   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:00:37.954395   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:37.954730   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:37.954755   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:37.954949   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:00:37.955165   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:37.955344   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:37.955549   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:00:37.955756   61267 main.go:141] libmachine: Using SSH client type: native
	I0416 01:00:37.955980   61267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.216 22 <nil> <nil>}
	I0416 01:00:37.956001   61267 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-653942 && echo "default-k8s-diff-port-653942" | sudo tee /etc/hostname
	I0416 01:00:38.085650   61267 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-653942
	
	I0416 01:00:38.085682   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:00:38.088689   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.089031   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:38.089060   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.089297   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:00:38.089474   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:38.089623   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:38.089780   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:00:38.089948   61267 main.go:141] libmachine: Using SSH client type: native
	I0416 01:00:38.090127   61267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.216 22 <nil> <nil>}
	I0416 01:00:38.090146   61267 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-653942' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-653942/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-653942' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0416 01:00:38.214653   61267 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0416 01:00:38.214734   61267 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18647-7542/.minikube CaCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18647-7542/.minikube}
	I0416 01:00:38.214760   61267 buildroot.go:174] setting up certificates
	I0416 01:00:38.214773   61267 provision.go:84] configureAuth start
	I0416 01:00:38.214785   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetMachineName
	I0416 01:00:38.215043   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetIP
	I0416 01:00:38.217744   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.218145   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:38.218174   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.218336   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:00:38.220861   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.221187   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:38.221216   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.221343   61267 provision.go:143] copyHostCerts
	I0416 01:00:38.221405   61267 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem, removing ...
	I0416 01:00:38.221426   61267 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem
	I0416 01:00:38.221492   61267 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/ca.pem (1082 bytes)
	I0416 01:00:38.221638   61267 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem, removing ...
	I0416 01:00:38.221649   61267 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem
	I0416 01:00:38.221685   61267 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/cert.pem (1123 bytes)
	I0416 01:00:38.221777   61267 exec_runner.go:144] found /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem, removing ...
	I0416 01:00:38.221787   61267 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem
	I0416 01:00:38.221815   61267 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18647-7542/.minikube/key.pem (1675 bytes)
	I0416 01:00:38.221887   61267 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-653942 san=[127.0.0.1 192.168.50.216 default-k8s-diff-port-653942 localhost minikube]
	I0416 01:00:38.266327   61267 provision.go:177] copyRemoteCerts
	I0416 01:00:38.266390   61267 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0416 01:00:38.266422   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:00:38.269080   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.269546   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:38.269583   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.269901   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:00:38.270115   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:38.270259   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:00:38.270444   61267 sshutil.go:53] new ssh client: &{IP:192.168.50.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/default-k8s-diff-port-653942/id_rsa Username:docker}
	I0416 01:00:38.352861   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0416 01:00:38.380995   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0416 01:00:38.405746   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
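	Should TLS to the provisioned machine misbehave later, the server certificate pushed above can be inspected on the guest to confirm that the SANs requested during provisioning (127.0.0.1, 192.168.50.216, default-k8s-diff-port-653942, localhost, minikube) actually made it in; a plain openssl invocation is enough:

	$ openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'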
	I0416 01:00:38.431467   61267 provision.go:87] duration metric: took 216.680985ms to configureAuth
	I0416 01:00:38.431502   61267 buildroot.go:189] setting minikube options for container-runtime
	I0416 01:00:38.431674   61267 config.go:182] Loaded profile config "default-k8s-diff-port-653942": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 01:00:38.431740   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:00:38.434444   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.434867   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:38.434909   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.435032   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:00:38.435245   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:38.435380   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:38.435568   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:00:38.435744   61267 main.go:141] libmachine: Using SSH client type: native
	I0416 01:00:38.435948   61267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.216 22 <nil> <nil>}
	I0416 01:00:38.435974   61267 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0416 01:00:38.729392   61267 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0416 01:00:38.729421   61267 machine.go:97] duration metric: took 895.211347ms to provisionDockerMachine
	I0416 01:00:38.729432   61267 start.go:293] postStartSetup for "default-k8s-diff-port-653942" (driver="kvm2")
	I0416 01:00:38.729442   61267 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0416 01:00:38.729463   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 01:00:38.729802   61267 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0416 01:00:38.729826   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:00:38.732755   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.733135   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:38.733181   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.733326   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:00:38.733490   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:38.733649   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:00:38.733784   61267 sshutil.go:53] new ssh client: &{IP:192.168.50.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/default-k8s-diff-port-653942/id_rsa Username:docker}
	I0416 01:00:38.819006   61267 ssh_runner.go:195] Run: cat /etc/os-release
	I0416 01:00:38.823781   61267 info.go:137] Remote host: Buildroot 2023.02.9
	I0416 01:00:38.823804   61267 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/addons for local assets ...
	I0416 01:00:38.823870   61267 filesync.go:126] Scanning /home/jenkins/minikube-integration/18647-7542/.minikube/files for local assets ...
	I0416 01:00:38.823967   61267 filesync.go:149] local asset: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem -> 148972.pem in /etc/ssl/certs
	I0416 01:00:38.824077   61267 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0416 01:00:38.833958   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /etc/ssl/certs/148972.pem (1708 bytes)
	I0416 01:00:38.859934   61267 start.go:296] duration metric: took 130.488205ms for postStartSetup
	I0416 01:00:38.859973   61267 fix.go:56] duration metric: took 18.845458863s for fixHost
	I0416 01:00:38.859992   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:00:38.862557   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.862889   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:38.862927   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.863016   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:00:38.863236   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:38.863426   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:38.863609   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:00:38.863786   61267 main.go:141] libmachine: Using SSH client type: native
	I0416 01:00:38.863951   61267 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4e0] 0x830240 <nil>  [] 0s} 192.168.50.216 22 <nil> <nil>}
	I0416 01:00:38.863961   61267 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0416 01:00:38.970405   61267 main.go:141] libmachine: SSH cmd err, output: <nil>: 1713229238.936521840
	
	I0416 01:00:38.970431   61267 fix.go:216] guest clock: 1713229238.936521840
	I0416 01:00:38.970440   61267 fix.go:229] Guest: 2024-04-16 01:00:38.93652184 +0000 UTC Remote: 2024-04-16 01:00:38.859976379 +0000 UTC m=+356.490123424 (delta=76.545461ms)
	I0416 01:00:38.970489   61267 fix.go:200] guest clock delta is within tolerance: 76.545461ms
	I0416 01:00:38.970496   61267 start.go:83] releasing machines lock for "default-k8s-diff-port-653942", held for 18.956013216s
	I0416 01:00:38.970522   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 01:00:38.970806   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetIP
	I0416 01:00:38.973132   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.973440   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:38.973455   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.973646   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 01:00:38.974142   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 01:00:38.974332   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 01:00:38.974388   61267 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0416 01:00:38.974432   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:00:38.974532   61267 ssh_runner.go:195] Run: cat /version.json
	I0416 01:00:38.974556   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:00:38.977284   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.977459   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.977624   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:38.977653   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.977746   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:38.977774   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:38.977800   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:00:38.978002   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:00:38.978017   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:38.978163   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:00:38.978169   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:00:38.978296   61267 sshutil.go:53] new ssh client: &{IP:192.168.50.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/default-k8s-diff-port-653942/id_rsa Username:docker}
	I0416 01:00:38.978314   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:00:38.978440   61267 sshutil.go:53] new ssh client: &{IP:192.168.50.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/default-k8s-diff-port-653942/id_rsa Username:docker}
	I0416 01:00:39.090827   61267 ssh_runner.go:195] Run: systemctl --version
	I0416 01:00:39.097716   61267 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0416 01:00:39.249324   61267 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0416 01:00:39.256333   61267 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0416 01:00:39.256402   61267 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0416 01:00:39.272367   61267 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0416 01:00:39.272395   61267 start.go:494] detecting cgroup driver to use...
	I0416 01:00:39.272446   61267 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0416 01:00:39.291713   61267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0416 01:00:39.305645   61267 docker.go:217] disabling cri-docker service (if available) ...
	I0416 01:00:39.305708   61267 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0416 01:00:39.320731   61267 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0416 01:00:39.336917   61267 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0416 01:00:39.450840   61267 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0416 01:00:39.596905   61267 docker.go:233] disabling docker service ...
	I0416 01:00:39.596972   61267 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0416 01:00:39.612926   61267 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0416 01:00:39.627583   61267 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0416 01:00:39.778135   61267 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0416 01:00:39.900216   61267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0416 01:00:39.914697   61267 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0416 01:00:39.935875   61267 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0416 01:00:39.935930   61267 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:39.946510   61267 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0416 01:00:39.946569   61267 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:39.956794   61267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:39.966968   61267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:39.977207   61267 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0416 01:00:39.988817   61267 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:40.001088   61267 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:40.018950   61267 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0416 01:00:40.030395   61267 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0416 01:00:40.039956   61267 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0416 01:00:40.040013   61267 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0416 01:00:40.053877   61267 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0416 01:00:40.065292   61267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 01:00:40.221527   61267 ssh_runner.go:195] Run: sudo systemctl restart crio
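	Taken together, the sed edits above leave the CRI-O drop-in looking approximately like the sketch below; this is reconstructed from the commands in the log (section headers follow the standard CRI-O config layout), not copied from the actual file:

	$ cat /etc/crio/crio.conf.d/02-crio.conf
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]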
	I0416 01:00:40.382800   61267 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0416 01:00:40.382880   61267 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0416 01:00:40.387842   61267 start.go:562] Will wait 60s for crictl version
	I0416 01:00:40.387897   61267 ssh_runner.go:195] Run: which crictl
	I0416 01:00:40.393774   61267 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0416 01:00:40.435784   61267 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0416 01:00:40.435864   61267 ssh_runner.go:195] Run: crio --version
	I0416 01:00:40.468702   61267 ssh_runner.go:195] Run: crio --version
	I0416 01:00:40.501355   61267 out.go:177] * Preparing Kubernetes v1.29.3 on CRI-O 1.29.1 ...
	I0416 01:00:37.545192   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:38.045346   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:38.545599   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:39.045109   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:39.545360   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:40.045058   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:40.545745   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:41.045943   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:41.545900   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:42.045807   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:40.502716   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetIP
	I0416 01:00:40.505958   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:40.506353   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:00:40.506384   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:00:40.506597   61267 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0416 01:00:40.511238   61267 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 01:00:40.525378   61267 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-653942 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-653942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.216 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0416 01:00:40.525519   61267 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0416 01:00:40.525586   61267 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 01:00:40.570378   61267 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.3". assuming images are not preloaded.
	I0416 01:00:40.570451   61267 ssh_runner.go:195] Run: which lz4
	I0416 01:00:40.575413   61267 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0416 01:00:40.580583   61267 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0416 01:00:40.580640   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (402967820 bytes)
	I0416 01:00:42.194745   61267 crio.go:462] duration metric: took 1.619375861s to copy over tarball
	I0416 01:00:42.194821   61267 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
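The preload steps above are: a stat existence check for /preloaded.tar.lz4, an scp of the cached tarball when it is missing, and an lz4-compressed tar extraction into /var. Below is an illustrative Go sketch of the check-then-extract part only, not minikube source; the scp step is omitted and the tarball path is taken from the log.

// Illustrative sketch (not minikube source): check for the preload tarball
// and, if present, extract it into /var with lz4 decompression, as the
// "stat" and "tar --xattrs ... -I lz4" steps above do on the guest.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func extractPreload(tarball string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("preload tarball missing, it would need to be copied first: %w", err)
	}
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Println(err)
	}
}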
	I0416 01:00:37.830710   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:39.831822   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:42.330821   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:39.086761   62747 pod_ready.go:102] pod "coredns-76f75df574-dxzhk" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:40.082847   62747 pod_ready.go:92] pod "coredns-76f75df574-dxzhk" in "kube-system" namespace has status "Ready":"True"
	I0416 01:00:40.082868   62747 pod_ready.go:81] duration metric: took 5.007825454s for pod "coredns-76f75df574-dxzhk" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:40.082877   62747 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:42.092402   62747 pod_ready.go:92] pod "etcd-embed-certs-617092" in "kube-system" namespace has status "Ready":"True"
	I0416 01:00:42.092425   62747 pod_ready.go:81] duration metric: took 2.009541778s for pod "etcd-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:42.092438   62747 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:42.545278   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:43.045894   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:43.545886   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:44.044964   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:44.544997   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:45.045340   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:45.545257   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:46.045108   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:46.544994   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:47.045987   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:44.671272   61267 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.476407392s)
	I0416 01:00:44.671304   61267 crio.go:469] duration metric: took 2.476532286s to extract the tarball
	I0416 01:00:44.671315   61267 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0416 01:00:44.709451   61267 ssh_runner.go:195] Run: sudo crictl images --output json
	I0416 01:00:44.754382   61267 crio.go:514] all images are preloaded for cri-o runtime.
	I0416 01:00:44.754412   61267 cache_images.go:84] Images are preloaded, skipping loading
	I0416 01:00:44.754424   61267 kubeadm.go:928] updating node { 192.168.50.216 8444 v1.29.3 crio true true} ...
	I0416 01:00:44.754543   61267 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-653942 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.216
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-653942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0416 01:00:44.754613   61267 ssh_runner.go:195] Run: crio config
	I0416 01:00:44.806896   61267 cni.go:84] Creating CNI manager for ""
	I0416 01:00:44.806918   61267 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 01:00:44.806926   61267 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0416 01:00:44.806957   61267 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.216 APIServerPort:8444 KubernetesVersion:v1.29.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-653942 NodeName:default-k8s-diff-port-653942 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.216"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.216 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0416 01:00:44.807089   61267 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.216
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-653942"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.216
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.216"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0416 01:00:44.807144   61267 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.3
	I0416 01:00:44.821347   61267 binaries.go:44] Found k8s binaries, skipping transfer
	I0416 01:00:44.821425   61267 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0416 01:00:44.835415   61267 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0416 01:00:44.855797   61267 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0416 01:00:44.873694   61267 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0416 01:00:44.892535   61267 ssh_runner.go:195] Run: grep 192.168.50.216	control-plane.minikube.internal$ /etc/hosts
	I0416 01:00:44.896538   61267 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.216	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0416 01:00:44.909516   61267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 01:00:45.024588   61267 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 01:00:45.055414   61267 certs.go:68] Setting up /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942 for IP: 192.168.50.216
	I0416 01:00:45.055440   61267 certs.go:194] generating shared ca certs ...
	I0416 01:00:45.055460   61267 certs.go:226] acquiring lock for ca certs: {Name:mkcfa1570e683d94647c63485e1bbb8cf0788316 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:00:45.055622   61267 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key
	I0416 01:00:45.055680   61267 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key
	I0416 01:00:45.055695   61267 certs.go:256] generating profile certs ...
	I0416 01:00:45.055815   61267 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/client.key
	I0416 01:00:45.055905   61267 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/apiserver.key.6620f6bf
	I0416 01:00:45.055975   61267 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/proxy-client.key
	I0416 01:00:45.056139   61267 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem (1338 bytes)
	W0416 01:00:45.056185   61267 certs.go:480] ignoring /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897_empty.pem, impossibly tiny 0 bytes
	I0416 01:00:45.056195   61267 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca-key.pem (1679 bytes)
	I0416 01:00:45.056234   61267 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/ca.pem (1082 bytes)
	I0416 01:00:45.056268   61267 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/cert.pem (1123 bytes)
	I0416 01:00:45.056295   61267 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/certs/key.pem (1675 bytes)
	I0416 01:00:45.056355   61267 certs.go:484] found cert: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem (1708 bytes)
	I0416 01:00:45.057033   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0416 01:00:45.091704   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0416 01:00:45.154257   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0416 01:00:45.181077   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0416 01:00:45.222401   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0416 01:00:45.248568   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0416 01:00:45.277927   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0416 01:00:45.310417   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0416 01:00:45.341109   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0416 01:00:45.367056   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/certs/14897.pem --> /usr/share/ca-certificates/14897.pem (1338 bytes)
	I0416 01:00:45.395117   61267 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/ssl/certs/148972.pem --> /usr/share/ca-certificates/148972.pem (1708 bytes)
	I0416 01:00:45.421921   61267 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0416 01:00:45.440978   61267 ssh_runner.go:195] Run: openssl version
	I0416 01:00:45.447132   61267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/148972.pem && ln -fs /usr/share/ca-certificates/148972.pem /etc/ssl/certs/148972.pem"
	I0416 01:00:45.460008   61267 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/148972.pem
	I0416 01:00:45.464820   61267 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 15 23:49 /usr/share/ca-certificates/148972.pem
	I0416 01:00:45.464884   61267 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/148972.pem
	I0416 01:00:45.471232   61267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/148972.pem /etc/ssl/certs/3ec20f2e.0"
	I0416 01:00:45.482567   61267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0416 01:00:45.493541   61267 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:00:45.498792   61267 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 15 23:39 /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:00:45.498849   61267 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0416 01:00:45.505511   61267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0416 01:00:45.517533   61267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14897.pem && ln -fs /usr/share/ca-certificates/14897.pem /etc/ssl/certs/14897.pem"
	I0416 01:00:45.529908   61267 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14897.pem
	I0416 01:00:45.535120   61267 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 15 23:49 /usr/share/ca-certificates/14897.pem
	I0416 01:00:45.535181   61267 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14897.pem
	I0416 01:00:45.541232   61267 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/14897.pem /etc/ssl/certs/51391683.0"
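The ls/openssl/ln sequence above installs each CA certificate under /etc/ssl/certs/<subject-hash>.0 so OpenSSL-based clients can locate it by hash. Here is a minimal Go sketch of that hash-and-symlink pattern, assuming the certificate path from the log; it is illustrative only, not minikube source.

// Illustrative sketch (not minikube source): link a CA certificate under
// /etc/ssl/certs/<openssl-hash>.0, the same pattern the "openssl x509 -hash"
// and "ln -fs" steps above follow.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCertByHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Replace any stale link so the hash always resolves to this certificate.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}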
	I0416 01:00:45.552946   61267 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0416 01:00:45.559947   61267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0416 01:00:45.567567   61267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0416 01:00:45.575204   61267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0416 01:00:45.582057   61267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0416 01:00:45.588418   61267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0416 01:00:45.595517   61267 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0416 01:00:45.602108   61267 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-653942 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.29.3 ClusterName:default-k8s-diff-port-653942 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.216 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0416 01:00:45.602213   61267 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0416 01:00:45.602256   61267 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 01:00:45.639538   61267 cri.go:89] found id: ""
	I0416 01:00:45.639621   61267 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0416 01:00:45.651216   61267 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0416 01:00:45.651245   61267 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0416 01:00:45.651252   61267 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0416 01:00:45.651307   61267 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0416 01:00:45.662522   61267 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0416 01:00:45.663697   61267 kubeconfig.go:125] found "default-k8s-diff-port-653942" server: "https://192.168.50.216:8444"
	I0416 01:00:45.666034   61267 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0416 01:00:45.675864   61267 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.216
	I0416 01:00:45.675900   61267 kubeadm.go:1154] stopping kube-system containers ...
	I0416 01:00:45.675927   61267 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0416 01:00:45.675992   61267 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0416 01:00:45.718679   61267 cri.go:89] found id: ""
	I0416 01:00:45.718744   61267 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0416 01:00:45.737326   61267 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 01:00:45.748122   61267 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 01:00:45.748146   61267 kubeadm.go:156] found existing configuration files:
	
	I0416 01:00:45.748200   61267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0416 01:00:45.758556   61267 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 01:00:45.758618   61267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 01:00:45.769601   61267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0416 01:00:45.779361   61267 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 01:00:45.779424   61267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 01:00:45.789283   61267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0416 01:00:45.798712   61267 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 01:00:45.798805   61267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 01:00:45.808489   61267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0416 01:00:45.817400   61267 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 01:00:45.817469   61267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 01:00:45.827902   61267 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 01:00:45.838031   61267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:45.962948   61267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:46.862340   61267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:47.092144   61267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:47.170078   61267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:47.284634   61267 api_server.go:52] waiting for apiserver process to appear ...
	I0416 01:00:47.284719   61267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:44.830534   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:47.474148   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:44.100441   62747 pod_ready.go:102] pod "kube-apiserver-embed-certs-617092" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:47.472666   62747 pod_ready.go:102] pod "kube-apiserver-embed-certs-617092" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:47.599694   62747 pod_ready.go:92] pod "kube-apiserver-embed-certs-617092" in "kube-system" namespace has status "Ready":"True"
	I0416 01:00:47.599722   62747 pod_ready.go:81] duration metric: took 5.507276982s for pod "kube-apiserver-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:47.599734   62747 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:47.604479   62747 pod_ready.go:92] pod "kube-controller-manager-embed-certs-617092" in "kube-system" namespace has status "Ready":"True"
	I0416 01:00:47.604496   62747 pod_ready.go:81] duration metric: took 4.755735ms for pod "kube-controller-manager-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:47.604504   62747 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-xtdf4" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:47.608936   62747 pod_ready.go:92] pod "kube-proxy-xtdf4" in "kube-system" namespace has status "Ready":"True"
	I0416 01:00:47.608951   62747 pod_ready.go:81] duration metric: took 4.441482ms for pod "kube-proxy-xtdf4" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:47.608959   62747 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:47.613108   62747 pod_ready.go:92] pod "kube-scheduler-embed-certs-617092" in "kube-system" namespace has status "Ready":"True"
	I0416 01:00:47.613123   62747 pod_ready.go:81] duration metric: took 4.157722ms for pod "kube-scheduler-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:47.613130   62747 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:47.545567   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:48.045898   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:48.545631   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:49.045678   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:49.545274   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:50.045281   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:50.545926   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:51.045076   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:51.545303   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:52.045271   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:47.785698   61267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:48.284828   61267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:48.315894   61267 api_server.go:72] duration metric: took 1.031258915s to wait for apiserver process to appear ...
	I0416 01:00:48.315925   61267 api_server.go:88] waiting for apiserver healthz status ...
	I0416 01:00:48.315950   61267 api_server.go:253] Checking apiserver healthz at https://192.168.50.216:8444/healthz ...
	I0416 01:00:51.781922   61267 api_server.go:279] https://192.168.50.216:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0416 01:00:51.781957   61267 api_server.go:103] status: https://192.168.50.216:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0416 01:00:51.781976   61267 api_server.go:253] Checking apiserver healthz at https://192.168.50.216:8444/healthz ...
	I0416 01:00:51.830460   61267 api_server.go:279] https://192.168.50.216:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0416 01:00:51.830491   61267 api_server.go:103] status: https://192.168.50.216:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0416 01:00:51.830505   61267 api_server.go:253] Checking apiserver healthz at https://192.168.50.216:8444/healthz ...
	I0416 01:00:51.858205   61267 api_server.go:279] https://192.168.50.216:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0416 01:00:51.858240   61267 api_server.go:103] status: https://192.168.50.216:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0416 01:00:52.316376   61267 api_server.go:253] Checking apiserver healthz at https://192.168.50.216:8444/healthz ...
	I0416 01:00:52.332667   61267 api_server.go:279] https://192.168.50.216:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0416 01:00:52.332700   61267 api_server.go:103] status: https://192.168.50.216:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0416 01:00:49.829236   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:52.329805   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:49.620626   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:51.620730   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:52.816565   61267 api_server.go:253] Checking apiserver healthz at https://192.168.50.216:8444/healthz ...
	I0416 01:00:52.827158   61267 api_server.go:279] https://192.168.50.216:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0416 01:00:52.827191   61267 api_server.go:103] status: https://192.168.50.216:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0416 01:00:53.316864   61267 api_server.go:253] Checking apiserver healthz at https://192.168.50.216:8444/healthz ...
	I0416 01:00:53.321112   61267 api_server.go:279] https://192.168.50.216:8444/healthz returned 200:
	ok
	I0416 01:00:53.329289   61267 api_server.go:141] control plane version: v1.29.3
	I0416 01:00:53.329320   61267 api_server.go:131] duration metric: took 5.013387579s to wait for apiserver health ...
	I0416 01:00:53.329331   61267 cni.go:84] Creating CNI manager for ""
	I0416 01:00:53.329340   61267 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 01:00:53.331125   61267 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
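The /healthz probes above start out anonymous (403 Forbidden), then return transient 500s while apiserver post-start hooks finish, and finally return 200. Below is a minimal Go sketch of such a poll loop, assuming the endpoint from the log and skipping TLS verification because the probe is unauthenticated; it is illustrative only, not minikube source.

// Illustrative sketch (not minikube source): poll the apiserver /healthz
// endpoint until it returns 200, tolerating the anonymous 403 and the
// transient 500 "poststarthook ... failed" responses seen above.
package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(ctx context.Context, url string) error {
	client := &http.Client{
		// The probe is anonymous, so no client certs and no server verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	for {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond):
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	if err := waitForHealthz(ctx, "https://192.168.50.216:8444/healthz"); err != nil {
		fmt.Println(err)
	}
}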
	I0416 01:00:52.545407   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:53.044961   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:53.545290   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:54.044994   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:54.545292   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:55.045285   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:55.545909   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:56.045029   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:56.545343   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:57.044988   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:53.332626   61267 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0416 01:00:53.366364   61267 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0416 01:00:53.401881   61267 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 01:00:53.413478   61267 system_pods.go:59] 8 kube-system pods found
	I0416 01:00:53.413512   61267 system_pods.go:61] "coredns-76f75df574-cvlpq" [c200d470-26dd-40ea-a79b-29d9104122bb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 01:00:53.413527   61267 system_pods.go:61] "etcd-default-k8s-diff-port-653942" [24e85fc2-fb57-4ef6-9817-846207109e61] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0416 01:00:53.413537   61267 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-653942" [bd473e94-72a6-4391-b787-49e16e8a213f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0416 01:00:53.413547   61267 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-653942" [31ed7183-a12b-422c-9e67-bba91147347a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0416 01:00:53.413555   61267 system_pods.go:61] "kube-proxy-6q9k7" [ba6d9cf9-37a5-4e01-9489-ce7395fd2a38] Running
	I0416 01:00:53.413563   61267 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-653942" [4b481275-4ded-4251-963f-910954f10d15] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0416 01:00:53.413579   61267 system_pods.go:61] "metrics-server-57f55c9bc5-9cnv2" [24905ded-5bf8-4b34-8069-2e65c5ad8f8d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 01:00:53.413592   61267 system_pods.go:61] "storage-provisioner" [16ba28d0-2031-4c21-9c22-1b9289517449] Running
	I0416 01:00:53.413601   61267 system_pods.go:74] duration metric: took 11.695334ms to wait for pod list to return data ...
	I0416 01:00:53.413613   61267 node_conditions.go:102] verifying NodePressure condition ...
	I0416 01:00:53.417579   61267 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 01:00:53.417609   61267 node_conditions.go:123] node cpu capacity is 2
	I0416 01:00:53.417623   61267 node_conditions.go:105] duration metric: took 4.002735ms to run NodePressure ...
	I0416 01:00:53.417642   61267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0416 01:00:53.688389   61267 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0416 01:00:53.692755   61267 kubeadm.go:733] kubelet initialised
	I0416 01:00:53.692777   61267 kubeadm.go:734] duration metric: took 4.359298ms waiting for restarted kubelet to initialise ...
	I0416 01:00:53.692784   61267 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:00:53.698521   61267 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-cvlpq" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:53.704496   61267 pod_ready.go:97] node "default-k8s-diff-port-653942" hosting pod "coredns-76f75df574-cvlpq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653942" has status "Ready":"False"
	I0416 01:00:53.704532   61267 pod_ready.go:81] duration metric: took 5.98382ms for pod "coredns-76f75df574-cvlpq" in "kube-system" namespace to be "Ready" ...
	E0416 01:00:53.704543   61267 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-653942" hosting pod "coredns-76f75df574-cvlpq" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653942" has status "Ready":"False"
	I0416 01:00:53.704550   61267 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:53.713110   61267 pod_ready.go:97] node "default-k8s-diff-port-653942" hosting pod "etcd-default-k8s-diff-port-653942" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653942" has status "Ready":"False"
	I0416 01:00:53.713144   61267 pod_ready.go:81] duration metric: took 8.58568ms for pod "etcd-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	E0416 01:00:53.713188   61267 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-653942" hosting pod "etcd-default-k8s-diff-port-653942" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653942" has status "Ready":"False"
	I0416 01:00:53.713201   61267 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:53.718190   61267 pod_ready.go:97] node "default-k8s-diff-port-653942" hosting pod "kube-apiserver-default-k8s-diff-port-653942" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653942" has status "Ready":"False"
	I0416 01:00:53.718210   61267 pod_ready.go:81] duration metric: took 4.997527ms for pod "kube-apiserver-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	E0416 01:00:53.718219   61267 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-653942" hosting pod "kube-apiserver-default-k8s-diff-port-653942" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653942" has status "Ready":"False"
	I0416 01:00:53.718224   61267 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:53.805697   61267 pod_ready.go:97] node "default-k8s-diff-port-653942" hosting pod "kube-controller-manager-default-k8s-diff-port-653942" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653942" has status "Ready":"False"
	I0416 01:00:53.805727   61267 pod_ready.go:81] duration metric: took 87.493805ms for pod "kube-controller-manager-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	E0416 01:00:53.805738   61267 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-653942" hosting pod "kube-controller-manager-default-k8s-diff-port-653942" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-653942" has status "Ready":"False"
	I0416 01:00:53.805743   61267 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6q9k7" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:54.205884   61267 pod_ready.go:92] pod "kube-proxy-6q9k7" in "kube-system" namespace has status "Ready":"True"
	I0416 01:00:54.205911   61267 pod_ready.go:81] duration metric: took 400.161115ms for pod "kube-proxy-6q9k7" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:54.205921   61267 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:00:56.213276   61267 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:54.829391   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:57.330218   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:54.119995   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:56.121220   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:57.545333   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:58.045305   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:58.545871   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:59.045432   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:59.545000   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:00.045001   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:00.545855   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:01.045812   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:01.545477   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:02.045635   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:00:58.215064   61267 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:00.215192   61267 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:59.330599   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:01.831017   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:00:58.620594   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:01.120516   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:02.545690   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:03.045754   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:03.544965   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:04.045062   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:04.545196   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:05.045986   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:05.545246   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:06.045853   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:06.545863   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:07.045209   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:02.712971   61267 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:04.713437   61267 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:07.212886   61267 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:04.328673   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:06.329726   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:03.124343   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:05.619912   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:07.622044   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:07.544952   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:08.045290   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:08.545296   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:09.045795   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:09.545932   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:10.045124   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:10.045209   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:10.087200   62139 cri.go:89] found id: ""
	I0416 01:01:10.087229   62139 logs.go:276] 0 containers: []
	W0416 01:01:10.087237   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:10.087243   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:10.087300   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:10.126194   62139 cri.go:89] found id: ""
	I0416 01:01:10.126218   62139 logs.go:276] 0 containers: []
	W0416 01:01:10.126225   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:10.126230   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:10.126275   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:10.165238   62139 cri.go:89] found id: ""
	I0416 01:01:10.165271   62139 logs.go:276] 0 containers: []
	W0416 01:01:10.165282   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:10.165290   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:10.165357   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:10.202896   62139 cri.go:89] found id: ""
	I0416 01:01:10.202934   62139 logs.go:276] 0 containers: []
	W0416 01:01:10.202945   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:10.202952   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:10.203015   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:10.243576   62139 cri.go:89] found id: ""
	I0416 01:01:10.243605   62139 logs.go:276] 0 containers: []
	W0416 01:01:10.243613   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:10.243619   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:10.243667   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:10.278637   62139 cri.go:89] found id: ""
	I0416 01:01:10.278661   62139 logs.go:276] 0 containers: []
	W0416 01:01:10.278669   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:10.278674   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:10.278726   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:10.316811   62139 cri.go:89] found id: ""
	I0416 01:01:10.316844   62139 logs.go:276] 0 containers: []
	W0416 01:01:10.316852   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:10.316857   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:10.316914   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:10.359934   62139 cri.go:89] found id: ""
	I0416 01:01:10.359960   62139 logs.go:276] 0 containers: []
	W0416 01:01:10.359967   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:10.359975   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:10.359987   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:10.413082   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:10.413119   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:10.428605   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:10.428632   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:10.552536   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:10.552561   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:10.552578   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:10.615054   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:10.615091   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
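In parallel, process 62139 (the run using the v1.20.0 kubectl binary) cannot find a running kube-apiserver: the pgrep probe comes back empty roughly every 500ms, crictl finds no control-plane containers of any kind, and the waiter falls back to collecting diagnostics (kubelet and CRI-O journals, dmesg, "describe nodes", container status), with "describe nodes" failing because nothing is listening on localhost:8443. A rough Go sketch of that probe-then-collect fallback, built only from the commands visible in the log (function and variable names are illustrative assumptions):

```go
package apiprobe

import (
	"fmt"
	"os/exec"
	"strings"
)

// apiserverRunning mirrors the repeated probe in the log:
//   sudo pgrep -xnf kube-apiserver.*minikube.*
// pgrep exits non-zero when no process matches.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

// listContainers mirrors the crictl listing:
//   sudo crictl ps -a --quiet --name=<name>
// An empty result corresponds to the `found id: ""` / "0 containers" lines.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

// collectDiagnostics corresponds to the "Gathering logs for ..." block that
// runs when no kube-apiserver process or container is found.
func collectDiagnostics() {
	for _, c := range [][]string{
		{"journalctl", "-u", "kubelet", "-n", "400"},
		{"dmesg", "-PH", "-L=never", "--level", "warn,err,crit,alert,emerg"},
		{"journalctl", "-u", "crio", "-n", "400"},
		{"crictl", "ps", "-a"},
	} {
		out, err := exec.Command("sudo", c...).CombinedOutput()
		fmt.Printf("--- %s (err=%v)\n%s\n", strings.Join(c, " "), err, out)
	}
}
```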
	I0416 01:01:08.213557   61267 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"True"
	I0416 01:01:08.213584   61267 pod_ready.go:81] duration metric: took 14.007657025s for pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:01:08.213594   61267 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace to be "Ready" ...
	I0416 01:01:10.224984   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:08.831515   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:11.330529   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:10.122213   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:12.621939   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:13.160749   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:13.178449   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:13.178505   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:13.224192   62139 cri.go:89] found id: ""
	I0416 01:01:13.224215   62139 logs.go:276] 0 containers: []
	W0416 01:01:13.224222   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:13.224228   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:13.224287   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:13.261441   62139 cri.go:89] found id: ""
	I0416 01:01:13.261469   62139 logs.go:276] 0 containers: []
	W0416 01:01:13.261476   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:13.261481   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:13.261545   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:13.296602   62139 cri.go:89] found id: ""
	I0416 01:01:13.296636   62139 logs.go:276] 0 containers: []
	W0416 01:01:13.296647   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:13.296654   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:13.296720   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:13.333944   62139 cri.go:89] found id: ""
	I0416 01:01:13.333968   62139 logs.go:276] 0 containers: []
	W0416 01:01:13.333977   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:13.333984   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:13.334049   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:13.372919   62139 cri.go:89] found id: ""
	I0416 01:01:13.372944   62139 logs.go:276] 0 containers: []
	W0416 01:01:13.372957   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:13.372965   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:13.373022   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:13.413257   62139 cri.go:89] found id: ""
	I0416 01:01:13.413287   62139 logs.go:276] 0 containers: []
	W0416 01:01:13.413299   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:13.413306   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:13.413373   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:13.451705   62139 cri.go:89] found id: ""
	I0416 01:01:13.451737   62139 logs.go:276] 0 containers: []
	W0416 01:01:13.451748   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:13.451755   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:13.451836   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:13.492549   62139 cri.go:89] found id: ""
	I0416 01:01:13.492576   62139 logs.go:276] 0 containers: []
	W0416 01:01:13.492586   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:13.492597   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:13.492613   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:13.547267   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:13.547303   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:13.568975   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:13.569002   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:13.674444   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:13.674469   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:13.674482   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:13.745111   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:13.745145   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:16.286955   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:16.301151   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:16.301257   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:16.337516   62139 cri.go:89] found id: ""
	I0416 01:01:16.337544   62139 logs.go:276] 0 containers: []
	W0416 01:01:16.337554   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:16.337561   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:16.337623   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:16.372674   62139 cri.go:89] found id: ""
	I0416 01:01:16.372702   62139 logs.go:276] 0 containers: []
	W0416 01:01:16.372712   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:16.372720   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:16.372783   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:16.411181   62139 cri.go:89] found id: ""
	I0416 01:01:16.411208   62139 logs.go:276] 0 containers: []
	W0416 01:01:16.411224   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:16.411230   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:16.411283   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:16.449063   62139 cri.go:89] found id: ""
	I0416 01:01:16.449102   62139 logs.go:276] 0 containers: []
	W0416 01:01:16.449109   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:16.449114   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:16.449183   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:16.491877   62139 cri.go:89] found id: ""
	I0416 01:01:16.491909   62139 logs.go:276] 0 containers: []
	W0416 01:01:16.491918   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:16.491924   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:16.491981   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:16.532522   62139 cri.go:89] found id: ""
	I0416 01:01:16.532553   62139 logs.go:276] 0 containers: []
	W0416 01:01:16.532564   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:16.532572   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:16.532633   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:16.572194   62139 cri.go:89] found id: ""
	I0416 01:01:16.572222   62139 logs.go:276] 0 containers: []
	W0416 01:01:16.572233   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:16.572240   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:16.572302   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:16.614671   62139 cri.go:89] found id: ""
	I0416 01:01:16.614697   62139 logs.go:276] 0 containers: []
	W0416 01:01:16.614704   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:16.614712   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:16.614726   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:16.632146   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:16.632179   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:16.707597   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:16.707621   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:16.707633   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:16.783604   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:16.783640   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:16.828937   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:16.828977   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:12.721088   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:15.220256   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:17.222263   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:13.830983   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:16.329120   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:15.119386   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:17.120038   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:19.385008   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:19.400949   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:19.401035   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:19.463792   62139 cri.go:89] found id: ""
	I0416 01:01:19.463825   62139 logs.go:276] 0 containers: []
	W0416 01:01:19.463836   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:19.463843   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:19.463910   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:19.523289   62139 cri.go:89] found id: ""
	I0416 01:01:19.523322   62139 logs.go:276] 0 containers: []
	W0416 01:01:19.523332   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:19.523340   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:19.523392   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:19.558891   62139 cri.go:89] found id: ""
	I0416 01:01:19.558928   62139 logs.go:276] 0 containers: []
	W0416 01:01:19.558939   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:19.558946   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:19.559009   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:19.597876   62139 cri.go:89] found id: ""
	I0416 01:01:19.597905   62139 logs.go:276] 0 containers: []
	W0416 01:01:19.597917   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:19.597925   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:19.597980   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:19.637536   62139 cri.go:89] found id: ""
	I0416 01:01:19.637563   62139 logs.go:276] 0 containers: []
	W0416 01:01:19.637571   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:19.637576   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:19.637623   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:19.674414   62139 cri.go:89] found id: ""
	I0416 01:01:19.674447   62139 logs.go:276] 0 containers: []
	W0416 01:01:19.674458   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:19.674465   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:19.674525   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:19.709717   62139 cri.go:89] found id: ""
	I0416 01:01:19.709751   62139 logs.go:276] 0 containers: []
	W0416 01:01:19.709761   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:19.709769   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:19.709837   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:19.747458   62139 cri.go:89] found id: ""
	I0416 01:01:19.747482   62139 logs.go:276] 0 containers: []
	W0416 01:01:19.747489   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:19.747505   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:19.747523   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:19.834811   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:19.834846   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:19.876398   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:19.876428   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:19.931596   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:19.931632   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:19.947074   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:19.947103   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:20.023434   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:19.720883   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:21.721969   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:18.829276   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:20.829405   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:19.120254   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:21.120520   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:22.524036   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:22.539399   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:22.539488   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:22.574696   62139 cri.go:89] found id: ""
	I0416 01:01:22.574723   62139 logs.go:276] 0 containers: []
	W0416 01:01:22.574733   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:22.574741   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:22.574805   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:22.617474   62139 cri.go:89] found id: ""
	I0416 01:01:22.617503   62139 logs.go:276] 0 containers: []
	W0416 01:01:22.617514   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:22.617521   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:22.617579   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:22.657744   62139 cri.go:89] found id: ""
	I0416 01:01:22.657773   62139 logs.go:276] 0 containers: []
	W0416 01:01:22.657781   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:22.657786   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:22.657842   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:22.695513   62139 cri.go:89] found id: ""
	I0416 01:01:22.695544   62139 logs.go:276] 0 containers: []
	W0416 01:01:22.695552   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:22.695557   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:22.695606   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:22.732943   62139 cri.go:89] found id: ""
	I0416 01:01:22.732973   62139 logs.go:276] 0 containers: []
	W0416 01:01:22.732983   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:22.732990   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:22.733051   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:22.768735   62139 cri.go:89] found id: ""
	I0416 01:01:22.768767   62139 logs.go:276] 0 containers: []
	W0416 01:01:22.768775   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:22.768782   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:22.768842   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:22.804330   62139 cri.go:89] found id: ""
	I0416 01:01:22.804352   62139 logs.go:276] 0 containers: []
	W0416 01:01:22.804361   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:22.804367   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:22.804425   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:22.842165   62139 cri.go:89] found id: ""
	I0416 01:01:22.842192   62139 logs.go:276] 0 containers: []
	W0416 01:01:22.842199   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:22.842207   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:22.842219   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:22.921859   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:22.921880   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:22.921893   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:23.003432   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:23.003468   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:23.045446   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:23.045476   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:23.097327   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:23.097358   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:25.612297   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:25.627489   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:25.627565   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:25.664040   62139 cri.go:89] found id: ""
	I0416 01:01:25.664072   62139 logs.go:276] 0 containers: []
	W0416 01:01:25.664083   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:25.664091   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:25.664149   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:25.701004   62139 cri.go:89] found id: ""
	I0416 01:01:25.701029   62139 logs.go:276] 0 containers: []
	W0416 01:01:25.701036   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:25.701042   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:25.701087   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:25.740108   62139 cri.go:89] found id: ""
	I0416 01:01:25.740136   62139 logs.go:276] 0 containers: []
	W0416 01:01:25.740144   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:25.740150   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:25.740194   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:25.778413   62139 cri.go:89] found id: ""
	I0416 01:01:25.778447   62139 logs.go:276] 0 containers: []
	W0416 01:01:25.778458   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:25.778465   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:25.778530   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:25.815188   62139 cri.go:89] found id: ""
	I0416 01:01:25.815215   62139 logs.go:276] 0 containers: []
	W0416 01:01:25.815223   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:25.815230   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:25.815277   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:25.856370   62139 cri.go:89] found id: ""
	I0416 01:01:25.856402   62139 logs.go:276] 0 containers: []
	W0416 01:01:25.856410   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:25.856416   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:25.856476   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:25.895363   62139 cri.go:89] found id: ""
	I0416 01:01:25.895388   62139 logs.go:276] 0 containers: []
	W0416 01:01:25.895396   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:25.895402   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:25.895455   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:25.931854   62139 cri.go:89] found id: ""
	I0416 01:01:25.931881   62139 logs.go:276] 0 containers: []
	W0416 01:01:25.931889   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:25.931897   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:25.931923   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:26.008395   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:26.008419   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:26.008436   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:26.087946   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:26.087983   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:26.134693   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:26.134725   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:26.189618   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:26.189652   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:24.220798   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:26.221193   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:22.833917   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:25.331147   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:27.331702   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:23.620819   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:25.621119   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:28.705010   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:28.719575   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:28.719644   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:28.759011   62139 cri.go:89] found id: ""
	I0416 01:01:28.759037   62139 logs.go:276] 0 containers: []
	W0416 01:01:28.759044   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:28.759050   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:28.759112   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:28.794640   62139 cri.go:89] found id: ""
	I0416 01:01:28.794675   62139 logs.go:276] 0 containers: []
	W0416 01:01:28.794687   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:28.794695   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:28.794807   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:28.835634   62139 cri.go:89] found id: ""
	I0416 01:01:28.835663   62139 logs.go:276] 0 containers: []
	W0416 01:01:28.835674   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:28.835681   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:28.835747   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:28.875384   62139 cri.go:89] found id: ""
	I0416 01:01:28.875408   62139 logs.go:276] 0 containers: []
	W0416 01:01:28.875426   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:28.875433   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:28.875484   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:28.921202   62139 cri.go:89] found id: ""
	I0416 01:01:28.921234   62139 logs.go:276] 0 containers: []
	W0416 01:01:28.921244   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:28.921252   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:28.921314   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:28.958791   62139 cri.go:89] found id: ""
	I0416 01:01:28.958820   62139 logs.go:276] 0 containers: []
	W0416 01:01:28.958828   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:28.958834   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:28.958923   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:28.996136   62139 cri.go:89] found id: ""
	I0416 01:01:28.996168   62139 logs.go:276] 0 containers: []
	W0416 01:01:28.996179   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:28.996185   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:28.996259   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:29.033912   62139 cri.go:89] found id: ""
	I0416 01:01:29.033939   62139 logs.go:276] 0 containers: []
	W0416 01:01:29.033946   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:29.033954   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:29.033969   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:29.114162   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:29.114209   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:29.153934   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:29.153965   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:29.207548   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:29.207584   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:29.222158   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:29.222184   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:29.297414   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:31.798026   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:31.812740   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:31.812815   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:31.855058   62139 cri.go:89] found id: ""
	I0416 01:01:31.855087   62139 logs.go:276] 0 containers: []
	W0416 01:01:31.855098   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:31.855105   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:31.855172   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:31.897128   62139 cri.go:89] found id: ""
	I0416 01:01:31.897170   62139 logs.go:276] 0 containers: []
	W0416 01:01:31.897192   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:31.897200   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:31.897259   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:31.934497   62139 cri.go:89] found id: ""
	I0416 01:01:31.934520   62139 logs.go:276] 0 containers: []
	W0416 01:01:31.934532   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:31.934541   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:31.934588   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:31.974020   62139 cri.go:89] found id: ""
	I0416 01:01:31.974051   62139 logs.go:276] 0 containers: []
	W0416 01:01:31.974062   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:31.974093   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:31.974163   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:32.015433   62139 cri.go:89] found id: ""
	I0416 01:01:32.015460   62139 logs.go:276] 0 containers: []
	W0416 01:01:32.015471   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:32.015477   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:32.015540   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:32.058286   62139 cri.go:89] found id: ""
	I0416 01:01:32.058336   62139 logs.go:276] 0 containers: []
	W0416 01:01:32.058345   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:32.058351   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:32.058408   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:28.720596   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:30.720732   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:29.828996   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:31.830765   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:28.121038   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:30.619604   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:32.620210   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:32.100331   62139 cri.go:89] found id: ""
	I0416 01:01:32.102041   62139 logs.go:276] 0 containers: []
	W0416 01:01:32.102054   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:32.102061   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:32.102115   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:32.141420   62139 cri.go:89] found id: ""
	I0416 01:01:32.141446   62139 logs.go:276] 0 containers: []
	W0416 01:01:32.141454   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:32.141462   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:32.141473   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:32.195323   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:32.195364   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:32.210180   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:32.210206   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:32.282548   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:32.282570   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:32.282585   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:32.360627   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:32.360663   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:34.901239   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:34.917097   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:34.917205   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:34.959297   62139 cri.go:89] found id: ""
	I0416 01:01:34.959327   62139 logs.go:276] 0 containers: []
	W0416 01:01:34.959337   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:34.959344   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:34.959422   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:35.000927   62139 cri.go:89] found id: ""
	I0416 01:01:35.000974   62139 logs.go:276] 0 containers: []
	W0416 01:01:35.000984   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:35.001000   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:35.001064   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:35.038049   62139 cri.go:89] found id: ""
	I0416 01:01:35.038073   62139 logs.go:276] 0 containers: []
	W0416 01:01:35.038082   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:35.038090   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:35.038143   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:35.075396   62139 cri.go:89] found id: ""
	I0416 01:01:35.075467   62139 logs.go:276] 0 containers: []
	W0416 01:01:35.075481   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:35.075490   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:35.075591   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:35.114297   62139 cri.go:89] found id: ""
	I0416 01:01:35.114325   62139 logs.go:276] 0 containers: []
	W0416 01:01:35.114335   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:35.114343   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:35.114405   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:35.152075   62139 cri.go:89] found id: ""
	I0416 01:01:35.152099   62139 logs.go:276] 0 containers: []
	W0416 01:01:35.152106   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:35.152112   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:35.152161   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:35.187945   62139 cri.go:89] found id: ""
	I0416 01:01:35.187974   62139 logs.go:276] 0 containers: []
	W0416 01:01:35.187984   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:35.187991   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:35.188057   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:35.225225   62139 cri.go:89] found id: ""
	I0416 01:01:35.225253   62139 logs.go:276] 0 containers: []
	W0416 01:01:35.225262   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:35.225272   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:35.225287   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:35.279584   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:35.279628   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:35.293416   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:35.293456   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:35.370122   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:35.370147   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:35.370159   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:35.451482   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:35.451517   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:32.723226   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:35.221390   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:34.329009   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:36.329761   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:34.620492   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:36.620527   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:37.994358   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:38.008209   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:38.008277   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:38.047905   62139 cri.go:89] found id: ""
	I0416 01:01:38.047943   62139 logs.go:276] 0 containers: []
	W0416 01:01:38.047955   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:38.047962   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:38.048016   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:38.085749   62139 cri.go:89] found id: ""
	I0416 01:01:38.085780   62139 logs.go:276] 0 containers: []
	W0416 01:01:38.085790   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:38.085797   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:38.085864   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:38.122396   62139 cri.go:89] found id: ""
	I0416 01:01:38.122419   62139 logs.go:276] 0 containers: []
	W0416 01:01:38.122427   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:38.122432   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:38.122479   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:38.159284   62139 cri.go:89] found id: ""
	I0416 01:01:38.159313   62139 logs.go:276] 0 containers: []
	W0416 01:01:38.159322   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:38.159329   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:38.159390   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:38.193245   62139 cri.go:89] found id: ""
	I0416 01:01:38.193280   62139 logs.go:276] 0 containers: []
	W0416 01:01:38.193291   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:38.193298   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:38.193362   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:38.229147   62139 cri.go:89] found id: ""
	I0416 01:01:38.229179   62139 logs.go:276] 0 containers: []
	W0416 01:01:38.229188   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:38.229194   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:38.229251   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:38.267285   62139 cri.go:89] found id: ""
	I0416 01:01:38.267309   62139 logs.go:276] 0 containers: []
	W0416 01:01:38.267317   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:38.267321   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:38.267389   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:38.305181   62139 cri.go:89] found id: ""
	I0416 01:01:38.305207   62139 logs.go:276] 0 containers: []
	W0416 01:01:38.305215   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:38.305222   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:38.305237   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:38.321714   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:38.321742   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:38.398352   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:38.398372   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:38.398382   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:38.474095   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:38.474129   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:38.520540   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:38.520581   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:41.072083   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:41.086767   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:41.086860   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:41.125119   62139 cri.go:89] found id: ""
	I0416 01:01:41.125149   62139 logs.go:276] 0 containers: []
	W0416 01:01:41.125175   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:41.125182   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:41.125253   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:41.159885   62139 cri.go:89] found id: ""
	I0416 01:01:41.159915   62139 logs.go:276] 0 containers: []
	W0416 01:01:41.159925   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:41.159931   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:41.160012   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:41.196334   62139 cri.go:89] found id: ""
	I0416 01:01:41.196366   62139 logs.go:276] 0 containers: []
	W0416 01:01:41.196377   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:41.196385   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:41.196447   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:41.234254   62139 cri.go:89] found id: ""
	I0416 01:01:41.234282   62139 logs.go:276] 0 containers: []
	W0416 01:01:41.234300   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:41.234319   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:41.234413   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:41.271499   62139 cri.go:89] found id: ""
	I0416 01:01:41.271523   62139 logs.go:276] 0 containers: []
	W0416 01:01:41.271531   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:41.271536   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:41.271604   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:41.311064   62139 cri.go:89] found id: ""
	I0416 01:01:41.311096   62139 logs.go:276] 0 containers: []
	W0416 01:01:41.311107   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:41.311114   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:41.311179   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:41.349012   62139 cri.go:89] found id: ""
	I0416 01:01:41.349043   62139 logs.go:276] 0 containers: []
	W0416 01:01:41.349053   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:41.349060   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:41.349117   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:41.385258   62139 cri.go:89] found id: ""
	I0416 01:01:41.385298   62139 logs.go:276] 0 containers: []
	W0416 01:01:41.385305   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:41.385315   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:41.385330   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:41.470086   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:41.470130   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:41.513835   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:41.513870   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:41.565980   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:41.566013   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:41.582647   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:41.582678   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:41.658928   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:37.724628   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:40.222025   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:38.329899   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:40.330143   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:39.120850   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:41.121383   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:44.159107   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:44.173015   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:44.173088   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:44.214310   62139 cri.go:89] found id: ""
	I0416 01:01:44.214345   62139 logs.go:276] 0 containers: []
	W0416 01:01:44.214363   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:44.214374   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:44.214462   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:44.256476   62139 cri.go:89] found id: ""
	I0416 01:01:44.256503   62139 logs.go:276] 0 containers: []
	W0416 01:01:44.256511   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:44.256516   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:44.256577   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:44.298047   62139 cri.go:89] found id: ""
	I0416 01:01:44.298079   62139 logs.go:276] 0 containers: []
	W0416 01:01:44.298089   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:44.298097   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:44.298158   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:44.339165   62139 cri.go:89] found id: ""
	I0416 01:01:44.339196   62139 logs.go:276] 0 containers: []
	W0416 01:01:44.339206   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:44.339213   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:44.339280   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:44.378078   62139 cri.go:89] found id: ""
	I0416 01:01:44.378108   62139 logs.go:276] 0 containers: []
	W0416 01:01:44.378116   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:44.378122   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:44.378170   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:44.421494   62139 cri.go:89] found id: ""
	I0416 01:01:44.421525   62139 logs.go:276] 0 containers: []
	W0416 01:01:44.421536   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:44.421543   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:44.421609   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:44.459919   62139 cri.go:89] found id: ""
	I0416 01:01:44.459948   62139 logs.go:276] 0 containers: []
	W0416 01:01:44.459958   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:44.459965   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:44.460025   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:44.499448   62139 cri.go:89] found id: ""
	I0416 01:01:44.499479   62139 logs.go:276] 0 containers: []
	W0416 01:01:44.499489   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:44.499500   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:44.499516   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:44.555122   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:44.555159   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:44.572048   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:44.572075   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:44.646252   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:44.646283   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:44.646299   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:44.730593   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:44.730620   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
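	For reference, the block above is one complete pass of minikube's log-gathering loop while the apiserver on localhost:8443 is refusing connections: pgrep for a kube-apiserver process, crictl listings for each control-plane component (all empty), then the kubelet and CRI-O journals, dmesg, and a failing "kubectl describe nodes". A minimal sketch of replaying the same checks by hand follows; every command is copied from the log lines above, and the only assumption is that they are run in a root-capable shell on the minikube node.
	
	# Sketch only: manual re-run of the diagnostics shown in the log above.
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'   # any apiserver process at all?
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
	  echo "== $c =="
	  sudo crictl ps -a --quiet --name="$c"        # empty output = no container found
	done
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig    # fails while :8443 is refused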
	I0416 01:01:42.720855   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:44.723141   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:46.723452   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:42.831045   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:45.329039   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:47.331355   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:43.619897   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:45.620068   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:47.620162   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:47.276658   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:47.291354   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:47.291431   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:47.334998   62139 cri.go:89] found id: ""
	I0416 01:01:47.335036   62139 logs.go:276] 0 containers: []
	W0416 01:01:47.335055   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:47.335062   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:47.335121   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:47.376546   62139 cri.go:89] found id: ""
	I0416 01:01:47.376575   62139 logs.go:276] 0 containers: []
	W0416 01:01:47.376582   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:47.376587   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:47.376647   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:47.418609   62139 cri.go:89] found id: ""
	I0416 01:01:47.418642   62139 logs.go:276] 0 containers: []
	W0416 01:01:47.418654   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:47.418661   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:47.418721   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:47.459432   62139 cri.go:89] found id: ""
	I0416 01:01:47.459458   62139 logs.go:276] 0 containers: []
	W0416 01:01:47.459465   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:47.459470   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:47.459518   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:47.497776   62139 cri.go:89] found id: ""
	I0416 01:01:47.497800   62139 logs.go:276] 0 containers: []
	W0416 01:01:47.497808   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:47.497813   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:47.497866   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:47.536803   62139 cri.go:89] found id: ""
	I0416 01:01:47.536835   62139 logs.go:276] 0 containers: []
	W0416 01:01:47.536842   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:47.536849   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:47.536916   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:47.575883   62139 cri.go:89] found id: ""
	I0416 01:01:47.575916   62139 logs.go:276] 0 containers: []
	W0416 01:01:47.575923   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:47.575931   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:47.575976   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:47.627676   62139 cri.go:89] found id: ""
	I0416 01:01:47.627697   62139 logs.go:276] 0 containers: []
	W0416 01:01:47.627703   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:47.627711   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:47.627725   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:47.669714   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:47.669745   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:47.721349   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:47.721389   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:47.735833   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:47.735859   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:47.806890   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:47.806913   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:47.806925   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:50.386960   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:50.400832   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:50.400901   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:50.443042   62139 cri.go:89] found id: ""
	I0416 01:01:50.443076   62139 logs.go:276] 0 containers: []
	W0416 01:01:50.443086   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:50.443094   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:50.443157   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:50.480495   62139 cri.go:89] found id: ""
	I0416 01:01:50.480526   62139 logs.go:276] 0 containers: []
	W0416 01:01:50.480536   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:50.480544   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:50.480602   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:50.516578   62139 cri.go:89] found id: ""
	I0416 01:01:50.516605   62139 logs.go:276] 0 containers: []
	W0416 01:01:50.516613   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:50.516618   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:50.516676   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:50.555302   62139 cri.go:89] found id: ""
	I0416 01:01:50.555330   62139 logs.go:276] 0 containers: []
	W0416 01:01:50.555337   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:50.555344   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:50.555388   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:50.594647   62139 cri.go:89] found id: ""
	I0416 01:01:50.594674   62139 logs.go:276] 0 containers: []
	W0416 01:01:50.594682   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:50.594688   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:50.594737   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:50.633401   62139 cri.go:89] found id: ""
	I0416 01:01:50.633428   62139 logs.go:276] 0 containers: []
	W0416 01:01:50.633436   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:50.633442   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:50.633501   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:50.673714   62139 cri.go:89] found id: ""
	I0416 01:01:50.673744   62139 logs.go:276] 0 containers: []
	W0416 01:01:50.673755   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:50.673763   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:50.673811   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:50.710103   62139 cri.go:89] found id: ""
	I0416 01:01:50.710127   62139 logs.go:276] 0 containers: []
	W0416 01:01:50.710134   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:50.710142   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:50.710153   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:50.765121   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:50.765168   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:50.780407   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:50.780436   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:50.855602   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:50.855635   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:50.855663   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:50.937249   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:50.937283   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:49.220483   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:51.724129   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:49.829742   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:52.330579   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:49.621383   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:52.120841   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:53.481261   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:53.495872   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:53.495931   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:53.532710   62139 cri.go:89] found id: ""
	I0416 01:01:53.532738   62139 logs.go:276] 0 containers: []
	W0416 01:01:53.532748   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:53.532756   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:53.532815   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:53.568734   62139 cri.go:89] found id: ""
	I0416 01:01:53.568763   62139 logs.go:276] 0 containers: []
	W0416 01:01:53.568770   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:53.568776   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:53.568841   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:53.608937   62139 cri.go:89] found id: ""
	I0416 01:01:53.608965   62139 logs.go:276] 0 containers: []
	W0416 01:01:53.608976   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:53.608984   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:53.609042   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:53.646538   62139 cri.go:89] found id: ""
	I0416 01:01:53.646573   62139 logs.go:276] 0 containers: []
	W0416 01:01:53.646585   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:53.646592   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:53.646657   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:53.687761   62139 cri.go:89] found id: ""
	I0416 01:01:53.687792   62139 logs.go:276] 0 containers: []
	W0416 01:01:53.687801   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:53.687809   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:53.687872   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:53.726126   62139 cri.go:89] found id: ""
	I0416 01:01:53.726161   62139 logs.go:276] 0 containers: []
	W0416 01:01:53.726169   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:53.726174   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:53.726224   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:53.762583   62139 cri.go:89] found id: ""
	I0416 01:01:53.762609   62139 logs.go:276] 0 containers: []
	W0416 01:01:53.762618   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:53.762625   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:53.762695   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:53.803685   62139 cri.go:89] found id: ""
	I0416 01:01:53.803715   62139 logs.go:276] 0 containers: []
	W0416 01:01:53.803726   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:53.803737   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:53.803751   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:53.862215   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:53.862255   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:53.877713   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:53.877743   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:53.953394   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:53.953422   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:53.953438   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:54.044657   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:54.044698   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:56.602100   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:56.616548   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:56.616632   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:56.653765   62139 cri.go:89] found id: ""
	I0416 01:01:56.653794   62139 logs.go:276] 0 containers: []
	W0416 01:01:56.653810   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:56.653817   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:56.653879   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:56.691394   62139 cri.go:89] found id: ""
	I0416 01:01:56.691416   62139 logs.go:276] 0 containers: []
	W0416 01:01:56.691422   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:56.691428   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:56.691475   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:56.728995   62139 cri.go:89] found id: ""
	I0416 01:01:56.729017   62139 logs.go:276] 0 containers: []
	W0416 01:01:56.729024   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:56.729029   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:56.729078   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:56.769119   62139 cri.go:89] found id: ""
	I0416 01:01:56.769184   62139 logs.go:276] 0 containers: []
	W0416 01:01:56.769196   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:56.769204   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:56.769270   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:56.810562   62139 cri.go:89] found id: ""
	I0416 01:01:56.810589   62139 logs.go:276] 0 containers: []
	W0416 01:01:56.810597   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:56.810608   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:56.810669   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:56.849367   62139 cri.go:89] found id: ""
	I0416 01:01:56.849392   62139 logs.go:276] 0 containers: []
	W0416 01:01:56.849399   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:56.849405   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:56.849464   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:01:56.887330   62139 cri.go:89] found id: ""
	I0416 01:01:56.887359   62139 logs.go:276] 0 containers: []
	W0416 01:01:56.887370   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:01:56.887378   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:01:56.887461   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:01:56.926636   62139 cri.go:89] found id: ""
	I0416 01:01:56.926664   62139 logs.go:276] 0 containers: []
	W0416 01:01:56.926672   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:01:56.926682   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:01:56.926697   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:01:56.981836   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:01:56.981875   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:01:56.996385   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:01:56.996411   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:01:57.071026   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:57.071054   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:01:57.071070   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:01:54.219668   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:56.221212   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:54.829549   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:56.831452   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:54.619864   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:56.620968   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:57.155430   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:01:57.155466   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:01:59.701547   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:01:59.714465   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:01:59.714526   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:01:59.759791   62139 cri.go:89] found id: ""
	I0416 01:01:59.759830   62139 logs.go:276] 0 containers: []
	W0416 01:01:59.759841   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:01:59.759849   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:01:59.759914   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:01:59.813303   62139 cri.go:89] found id: ""
	I0416 01:01:59.813334   62139 logs.go:276] 0 containers: []
	W0416 01:01:59.813343   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:01:59.813353   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:01:59.813406   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:01:59.872291   62139 cri.go:89] found id: ""
	I0416 01:01:59.872328   62139 logs.go:276] 0 containers: []
	W0416 01:01:59.872338   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:01:59.872347   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:01:59.872423   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:01:59.910397   62139 cri.go:89] found id: ""
	I0416 01:01:59.910425   62139 logs.go:276] 0 containers: []
	W0416 01:01:59.910437   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:01:59.910444   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:01:59.910512   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:01:59.953656   62139 cri.go:89] found id: ""
	I0416 01:01:59.953685   62139 logs.go:276] 0 containers: []
	W0416 01:01:59.953695   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:01:59.953703   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:01:59.953779   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:01:59.993193   62139 cri.go:89] found id: ""
	I0416 01:01:59.993220   62139 logs.go:276] 0 containers: []
	W0416 01:01:59.993229   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:01:59.993239   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:01:59.993298   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:00.030205   62139 cri.go:89] found id: ""
	I0416 01:02:00.030229   62139 logs.go:276] 0 containers: []
	W0416 01:02:00.030237   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:00.030242   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:00.030302   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:00.068160   62139 cri.go:89] found id: ""
	I0416 01:02:00.068189   62139 logs.go:276] 0 containers: []
	W0416 01:02:00.068199   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:00.068211   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:00.068226   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:00.149383   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:00.149416   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:00.188000   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:00.188025   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:00.240522   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:00.240550   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:00.254189   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:00.254215   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:00.331483   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:01:58.721272   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:01.220698   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:59.329440   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:01.830408   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:01:59.122269   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:01.619839   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
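	The interleaved pod_ready.go lines above come from the other parallel clusters in this run, each waiting for its metrics-server pod in kube-system to report Ready (it never does). A sketch of the equivalent one-off check follows; the pod name is taken from the log, while the kubectl context is a placeholder for whichever profile is being inspected.
	
	# Sketch only: one-off version of the readiness poll in the pod_ready.go lines above.
	# <profile> is a placeholder for the cluster's kubectl context name.
	kubectl --context <profile> -n kube-system get pod metrics-server-57f55c9bc5-9cnv2 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'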
	I0416 01:02:02.832656   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:02.846826   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:02.846907   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:02.883397   62139 cri.go:89] found id: ""
	I0416 01:02:02.883428   62139 logs.go:276] 0 containers: []
	W0416 01:02:02.883439   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:02.883446   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:02.883499   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:02.923686   62139 cri.go:89] found id: ""
	I0416 01:02:02.923708   62139 logs.go:276] 0 containers: []
	W0416 01:02:02.923715   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:02.923719   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:02.923770   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:02.964155   62139 cri.go:89] found id: ""
	I0416 01:02:02.964180   62139 logs.go:276] 0 containers: []
	W0416 01:02:02.964188   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:02.964193   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:02.964247   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:03.005357   62139 cri.go:89] found id: ""
	I0416 01:02:03.005386   62139 logs.go:276] 0 containers: []
	W0416 01:02:03.005396   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:03.005403   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:03.005464   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:03.047221   62139 cri.go:89] found id: ""
	I0416 01:02:03.047246   62139 logs.go:276] 0 containers: []
	W0416 01:02:03.047257   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:03.047264   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:03.047326   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:03.088737   62139 cri.go:89] found id: ""
	I0416 01:02:03.088767   62139 logs.go:276] 0 containers: []
	W0416 01:02:03.088776   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:03.088784   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:03.088846   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:03.129756   62139 cri.go:89] found id: ""
	I0416 01:02:03.129778   62139 logs.go:276] 0 containers: []
	W0416 01:02:03.129785   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:03.129790   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:03.129837   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:03.169422   62139 cri.go:89] found id: ""
	I0416 01:02:03.169447   62139 logs.go:276] 0 containers: []
	W0416 01:02:03.169459   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:03.169468   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:03.169478   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:03.246485   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:03.246503   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:03.246514   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:03.326498   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:03.326533   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:03.372788   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:03.372817   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:03.428561   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:03.428603   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:05.944274   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:05.957744   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:05.957813   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:05.993348   62139 cri.go:89] found id: ""
	I0416 01:02:05.993400   62139 logs.go:276] 0 containers: []
	W0416 01:02:05.993411   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:05.993430   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:05.993497   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:06.034811   62139 cri.go:89] found id: ""
	I0416 01:02:06.034848   62139 logs.go:276] 0 containers: []
	W0416 01:02:06.034859   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:06.034866   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:06.034953   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:06.079047   62139 cri.go:89] found id: ""
	I0416 01:02:06.079070   62139 logs.go:276] 0 containers: []
	W0416 01:02:06.079078   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:06.079082   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:06.079127   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:06.122494   62139 cri.go:89] found id: ""
	I0416 01:02:06.122513   62139 logs.go:276] 0 containers: []
	W0416 01:02:06.122520   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:06.122525   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:06.122589   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:06.163436   62139 cri.go:89] found id: ""
	I0416 01:02:06.163461   62139 logs.go:276] 0 containers: []
	W0416 01:02:06.163468   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:06.163473   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:06.163534   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:06.205036   62139 cri.go:89] found id: ""
	I0416 01:02:06.205064   62139 logs.go:276] 0 containers: []
	W0416 01:02:06.205072   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:06.205077   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:06.205134   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:06.242056   62139 cri.go:89] found id: ""
	I0416 01:02:06.242084   62139 logs.go:276] 0 containers: []
	W0416 01:02:06.242094   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:06.242107   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:06.242166   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:06.278604   62139 cri.go:89] found id: ""
	I0416 01:02:06.278636   62139 logs.go:276] 0 containers: []
	W0416 01:02:06.278646   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:06.278656   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:06.278671   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:06.334631   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:06.334658   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:06.348199   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:06.348227   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:06.424774   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:06.424793   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:06.424804   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:06.503509   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:06.503542   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:03.221238   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:05.721006   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:04.329267   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:06.329476   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:03.620957   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:06.121348   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:09.046665   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:09.061072   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:09.061173   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:09.097482   62139 cri.go:89] found id: ""
	I0416 01:02:09.097514   62139 logs.go:276] 0 containers: []
	W0416 01:02:09.097524   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:09.097543   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:09.097613   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:09.135124   62139 cri.go:89] found id: ""
	I0416 01:02:09.135157   62139 logs.go:276] 0 containers: []
	W0416 01:02:09.135168   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:09.135175   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:09.135236   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:09.173887   62139 cri.go:89] found id: ""
	I0416 01:02:09.173912   62139 logs.go:276] 0 containers: []
	W0416 01:02:09.173920   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:09.173925   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:09.173983   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:09.209658   62139 cri.go:89] found id: ""
	I0416 01:02:09.209683   62139 logs.go:276] 0 containers: []
	W0416 01:02:09.209691   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:09.209702   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:09.209763   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:09.249149   62139 cri.go:89] found id: ""
	I0416 01:02:09.249200   62139 logs.go:276] 0 containers: []
	W0416 01:02:09.249209   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:09.249214   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:09.249292   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:09.291447   62139 cri.go:89] found id: ""
	I0416 01:02:09.291477   62139 logs.go:276] 0 containers: []
	W0416 01:02:09.291487   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:09.291494   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:09.291553   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:09.329248   62139 cri.go:89] found id: ""
	I0416 01:02:09.329271   62139 logs.go:276] 0 containers: []
	W0416 01:02:09.329281   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:09.329288   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:09.329345   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:09.365585   62139 cri.go:89] found id: ""
	I0416 01:02:09.365613   62139 logs.go:276] 0 containers: []
	W0416 01:02:09.365622   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:09.365632   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:09.365645   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:09.418998   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:09.419031   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:09.433531   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:09.433558   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:09.508543   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:09.508573   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:09.508588   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:09.593889   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:09.593930   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:08.220704   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:10.221232   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:12.224680   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:08.330281   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:10.828856   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:08.619632   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:10.619780   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:12.621319   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:12.139020   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:12.154268   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:12.154349   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:12.192717   62139 cri.go:89] found id: ""
	I0416 01:02:12.192746   62139 logs.go:276] 0 containers: []
	W0416 01:02:12.192758   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:12.192765   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:12.192832   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:12.230633   62139 cri.go:89] found id: ""
	I0416 01:02:12.230662   62139 logs.go:276] 0 containers: []
	W0416 01:02:12.230674   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:12.230681   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:12.230729   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:12.271108   62139 cri.go:89] found id: ""
	I0416 01:02:12.271150   62139 logs.go:276] 0 containers: []
	W0416 01:02:12.271161   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:12.271168   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:12.271233   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:12.310161   62139 cri.go:89] found id: ""
	I0416 01:02:12.310186   62139 logs.go:276] 0 containers: []
	W0416 01:02:12.310194   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:12.310201   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:12.310272   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:12.349638   62139 cri.go:89] found id: ""
	I0416 01:02:12.349668   62139 logs.go:276] 0 containers: []
	W0416 01:02:12.349678   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:12.349686   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:12.349766   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:12.391565   62139 cri.go:89] found id: ""
	I0416 01:02:12.391597   62139 logs.go:276] 0 containers: []
	W0416 01:02:12.391607   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:12.391620   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:12.391681   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:12.429142   62139 cri.go:89] found id: ""
	I0416 01:02:12.429186   62139 logs.go:276] 0 containers: []
	W0416 01:02:12.429195   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:12.429200   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:12.429249   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:12.466209   62139 cri.go:89] found id: ""
	I0416 01:02:12.466238   62139 logs.go:276] 0 containers: []
	W0416 01:02:12.466249   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:12.466260   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:12.466277   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:12.551333   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:12.551355   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:12.551367   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:12.634465   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:12.634496   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:12.675198   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:12.675231   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:12.728933   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:12.728962   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:15.243521   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:15.258589   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:15.258657   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:15.301901   62139 cri.go:89] found id: ""
	I0416 01:02:15.301931   62139 logs.go:276] 0 containers: []
	W0416 01:02:15.301943   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:15.301951   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:15.302006   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:15.345932   62139 cri.go:89] found id: ""
	I0416 01:02:15.346011   62139 logs.go:276] 0 containers: []
	W0416 01:02:15.346032   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:15.346043   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:15.346113   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:15.387957   62139 cri.go:89] found id: ""
	I0416 01:02:15.387983   62139 logs.go:276] 0 containers: []
	W0416 01:02:15.387991   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:15.387996   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:15.388044   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:15.424887   62139 cri.go:89] found id: ""
	I0416 01:02:15.424916   62139 logs.go:276] 0 containers: []
	W0416 01:02:15.424927   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:15.424934   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:15.424996   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:15.460088   62139 cri.go:89] found id: ""
	I0416 01:02:15.460113   62139 logs.go:276] 0 containers: []
	W0416 01:02:15.460120   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:15.460125   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:15.460172   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:15.495567   62139 cri.go:89] found id: ""
	I0416 01:02:15.495597   62139 logs.go:276] 0 containers: []
	W0416 01:02:15.495607   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:15.495615   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:15.495692   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:15.533901   62139 cri.go:89] found id: ""
	I0416 01:02:15.533931   62139 logs.go:276] 0 containers: []
	W0416 01:02:15.533940   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:15.533946   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:15.533996   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:15.576665   62139 cri.go:89] found id: ""
	I0416 01:02:15.576692   62139 logs.go:276] 0 containers: []
	W0416 01:02:15.576702   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:15.576712   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:15.576728   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:15.626933   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:15.626961   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:15.681627   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:15.681656   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:15.695572   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:15.695608   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:15.768910   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:15.768934   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:15.768945   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:14.720472   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:16.722418   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:12.830086   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:14.830540   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:17.329838   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:15.120394   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:17.120523   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:18.349776   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:18.363499   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:18.363568   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:18.404210   62139 cri.go:89] found id: ""
	I0416 01:02:18.404234   62139 logs.go:276] 0 containers: []
	W0416 01:02:18.404241   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:18.404246   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:18.404304   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:18.444610   62139 cri.go:89] found id: ""
	I0416 01:02:18.444641   62139 logs.go:276] 0 containers: []
	W0416 01:02:18.444651   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:18.444658   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:18.444722   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:18.483134   62139 cri.go:89] found id: ""
	I0416 01:02:18.483160   62139 logs.go:276] 0 containers: []
	W0416 01:02:18.483168   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:18.483173   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:18.483220   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:18.522120   62139 cri.go:89] found id: ""
	I0416 01:02:18.522144   62139 logs.go:276] 0 containers: []
	W0416 01:02:18.522156   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:18.522161   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:18.522205   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:18.566293   62139 cri.go:89] found id: ""
	I0416 01:02:18.566319   62139 logs.go:276] 0 containers: []
	W0416 01:02:18.566327   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:18.566332   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:18.566391   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:18.604000   62139 cri.go:89] found id: ""
	I0416 01:02:18.604028   62139 logs.go:276] 0 containers: []
	W0416 01:02:18.604036   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:18.604042   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:18.604089   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:18.641967   62139 cri.go:89] found id: ""
	I0416 01:02:18.641999   62139 logs.go:276] 0 containers: []
	W0416 01:02:18.642009   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:18.642016   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:18.642080   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:18.683494   62139 cri.go:89] found id: ""
	I0416 01:02:18.683533   62139 logs.go:276] 0 containers: []
	W0416 01:02:18.683544   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:18.683555   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:18.683570   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:18.761674   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:18.761699   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:18.761714   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:18.849959   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:18.849995   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:18.895534   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:18.895570   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:18.949287   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:18.949320   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:21.464393   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:21.479019   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:21.479087   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:21.516262   62139 cri.go:89] found id: ""
	I0416 01:02:21.516303   62139 logs.go:276] 0 containers: []
	W0416 01:02:21.516313   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:21.516323   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:21.516385   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:21.554279   62139 cri.go:89] found id: ""
	I0416 01:02:21.554315   62139 logs.go:276] 0 containers: []
	W0416 01:02:21.554327   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:21.554334   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:21.554393   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:21.590889   62139 cri.go:89] found id: ""
	I0416 01:02:21.590918   62139 logs.go:276] 0 containers: []
	W0416 01:02:21.590928   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:21.590935   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:21.590996   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:21.629925   62139 cri.go:89] found id: ""
	I0416 01:02:21.629955   62139 logs.go:276] 0 containers: []
	W0416 01:02:21.629965   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:21.629972   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:21.630032   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:21.667947   62139 cri.go:89] found id: ""
	I0416 01:02:21.667975   62139 logs.go:276] 0 containers: []
	W0416 01:02:21.667983   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:21.667988   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:21.668045   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:21.706275   62139 cri.go:89] found id: ""
	I0416 01:02:21.706308   62139 logs.go:276] 0 containers: []
	W0416 01:02:21.706318   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:21.706326   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:21.706392   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:21.748077   62139 cri.go:89] found id: ""
	I0416 01:02:21.748106   62139 logs.go:276] 0 containers: []
	W0416 01:02:21.748117   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:21.748123   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:21.748170   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:21.785441   62139 cri.go:89] found id: ""
	I0416 01:02:21.785467   62139 logs.go:276] 0 containers: []
	W0416 01:02:21.785477   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:21.785488   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:21.785510   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:21.824702   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:21.824735   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:21.882780   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:21.882810   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:21.897211   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:21.897236   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:21.971882   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:21.971903   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:21.971915   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:19.220913   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:21.721219   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:19.330086   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:21.836759   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:19.620521   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:21.621229   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:24.550749   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:24.564951   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:24.565024   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:24.605025   62139 cri.go:89] found id: ""
	I0416 01:02:24.605055   62139 logs.go:276] 0 containers: []
	W0416 01:02:24.605063   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:24.605068   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:24.605142   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:24.640727   62139 cri.go:89] found id: ""
	I0416 01:02:24.640757   62139 logs.go:276] 0 containers: []
	W0416 01:02:24.640764   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:24.640769   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:24.640822   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:24.678031   62139 cri.go:89] found id: ""
	I0416 01:02:24.678060   62139 logs.go:276] 0 containers: []
	W0416 01:02:24.678068   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:24.678074   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:24.678125   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:24.714854   62139 cri.go:89] found id: ""
	I0416 01:02:24.714896   62139 logs.go:276] 0 containers: []
	W0416 01:02:24.714907   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:24.714914   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:24.714981   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:24.752129   62139 cri.go:89] found id: ""
	I0416 01:02:24.752158   62139 logs.go:276] 0 containers: []
	W0416 01:02:24.752168   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:24.752177   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:24.752243   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:24.788507   62139 cri.go:89] found id: ""
	I0416 01:02:24.788541   62139 logs.go:276] 0 containers: []
	W0416 01:02:24.788551   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:24.788557   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:24.788617   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:24.828379   62139 cri.go:89] found id: ""
	I0416 01:02:24.828409   62139 logs.go:276] 0 containers: []
	W0416 01:02:24.828419   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:24.828427   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:24.828486   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:24.865676   62139 cri.go:89] found id: ""
	I0416 01:02:24.865707   62139 logs.go:276] 0 containers: []
	W0416 01:02:24.865717   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:24.865725   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:24.865736   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:24.941057   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:24.941079   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:24.941091   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:25.025937   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:25.025979   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:25.065828   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:25.065871   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:25.128004   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:25.128039   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:24.221435   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:26.720181   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:24.329677   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:26.329901   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:24.119781   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:26.120316   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:27.643201   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:27.658601   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:27.658660   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:27.700627   62139 cri.go:89] found id: ""
	I0416 01:02:27.700650   62139 logs.go:276] 0 containers: []
	W0416 01:02:27.700657   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:27.700662   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:27.700718   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:27.734929   62139 cri.go:89] found id: ""
	I0416 01:02:27.734957   62139 logs.go:276] 0 containers: []
	W0416 01:02:27.734966   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:27.734975   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:27.735046   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:27.772412   62139 cri.go:89] found id: ""
	I0416 01:02:27.772440   62139 logs.go:276] 0 containers: []
	W0416 01:02:27.772448   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:27.772454   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:27.772514   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:27.809436   62139 cri.go:89] found id: ""
	I0416 01:02:27.809459   62139 logs.go:276] 0 containers: []
	W0416 01:02:27.809466   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:27.809471   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:27.809518   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:27.845717   62139 cri.go:89] found id: ""
	I0416 01:02:27.845746   62139 logs.go:276] 0 containers: []
	W0416 01:02:27.845756   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:27.845764   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:27.845825   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:27.887224   62139 cri.go:89] found id: ""
	I0416 01:02:27.887250   62139 logs.go:276] 0 containers: []
	W0416 01:02:27.887260   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:27.887267   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:27.887334   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:27.920945   62139 cri.go:89] found id: ""
	I0416 01:02:27.920974   62139 logs.go:276] 0 containers: []
	W0416 01:02:27.920984   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:27.920992   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:27.921066   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:27.960933   62139 cri.go:89] found id: ""
	I0416 01:02:27.960959   62139 logs.go:276] 0 containers: []
	W0416 01:02:27.960966   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:27.960974   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:27.960985   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:28.013003   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:28.013033   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:28.026599   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:28.026626   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:28.117200   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:28.117226   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:28.117240   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:28.198003   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:28.198036   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
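	(Editor's note, not part of the captured log: the cycle above repeats for several minutes. minikube probes for a kube-apiserver process, lists CRI containers for each control-plane component, finds none, and then gathers kubelet, dmesg, describe-nodes, CRI-O and container-status output before retrying. The Go sketch below only illustrates that diagnostic loop as it appears in this log; it is not minikube's implementation. runShell is a hypothetical stand-in for minikube's ssh_runner, and the roughly 3-second retry interval is approximated from the timestamps.

	// Illustrative sketch of the diagnostic cycle seen in the log above.
	// NOT minikube code; runShell is a hypothetical local stand-in for ssh_runner.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// runShell runs a command through bash and returns its combined output.
	func runShell(cmd string) (string, error) {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		return string(out), err
	}

	func main() {
		// The same log sources the report shows being collected on each pass.
		diagnostics := map[string]string{
			"kubelet":          "sudo journalctl -u kubelet -n 400",
			"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
			"CRI-O":            "sudo journalctl -u crio -n 400",
			"container status": "sudo crictl ps -a || sudo docker ps -a",
		}

		for attempt := 0; attempt < 5; attempt++ {
			// Probe for a kube-apiserver container (running or exited).
			out, _ := runShell("sudo crictl ps -a --quiet --name=kube-apiserver")
			if strings.TrimSpace(out) != "" {
				fmt.Println("kube-apiserver container found:", strings.TrimSpace(out))
				return
			}
			// Nothing found: dump the diagnostics, then retry after a short pause.
			for name, cmd := range diagnostics {
				fmt.Printf("--- %s ---\n", name)
				text, _ := runShell(cmd)
				fmt.Println(text)
			}
			time.Sleep(3 * time.Second) // interval approximated from the log timestamps
		}
		fmt.Println("gave up waiting for kube-apiserver")
	}

	Running the same crictl and journalctl commands by hand on the node is usually enough to confirm whether the API server container ever started; the repeated "connection to the server localhost:8443 was refused" lines are consistent with it never coming up.)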
	I0416 01:02:30.741379   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:30.757102   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:30.757199   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:30.798038   62139 cri.go:89] found id: ""
	I0416 01:02:30.798068   62139 logs.go:276] 0 containers: []
	W0416 01:02:30.798075   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:30.798080   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:30.798137   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:30.844840   62139 cri.go:89] found id: ""
	I0416 01:02:30.844862   62139 logs.go:276] 0 containers: []
	W0416 01:02:30.844871   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:30.844877   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:30.844944   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:30.883816   62139 cri.go:89] found id: ""
	I0416 01:02:30.883841   62139 logs.go:276] 0 containers: []
	W0416 01:02:30.883849   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:30.883855   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:30.883903   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:30.919353   62139 cri.go:89] found id: ""
	I0416 01:02:30.919380   62139 logs.go:276] 0 containers: []
	W0416 01:02:30.919389   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:30.919396   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:30.919457   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:30.957036   62139 cri.go:89] found id: ""
	I0416 01:02:30.957061   62139 logs.go:276] 0 containers: []
	W0416 01:02:30.957069   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:30.957084   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:30.957143   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:30.993179   62139 cri.go:89] found id: ""
	I0416 01:02:30.993211   62139 logs.go:276] 0 containers: []
	W0416 01:02:30.993220   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:30.993228   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:30.993315   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:31.032634   62139 cri.go:89] found id: ""
	I0416 01:02:31.032661   62139 logs.go:276] 0 containers: []
	W0416 01:02:31.032670   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:31.032684   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:31.032753   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:31.069345   62139 cri.go:89] found id: ""
	I0416 01:02:31.069373   62139 logs.go:276] 0 containers: []
	W0416 01:02:31.069382   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:31.069392   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:31.069408   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:31.123989   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:31.124017   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:31.140998   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:31.141032   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:31.217496   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:31.218063   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:31.218098   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:31.296811   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:31.296858   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:28.720502   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:30.720709   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:28.329978   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:30.829406   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:28.121200   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:30.620659   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:33.842516   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:33.872440   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:33.872518   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:33.909287   62139 cri.go:89] found id: ""
	I0416 01:02:33.909314   62139 logs.go:276] 0 containers: []
	W0416 01:02:33.909324   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:33.909329   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:33.909388   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:33.947531   62139 cri.go:89] found id: ""
	I0416 01:02:33.947566   62139 logs.go:276] 0 containers: []
	W0416 01:02:33.947576   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:33.947584   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:33.947642   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:33.990084   62139 cri.go:89] found id: ""
	I0416 01:02:33.990118   62139 logs.go:276] 0 containers: []
	W0416 01:02:33.990129   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:33.990136   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:33.990200   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:34.024121   62139 cri.go:89] found id: ""
	I0416 01:02:34.024151   62139 logs.go:276] 0 containers: []
	W0416 01:02:34.024159   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:34.024165   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:34.024218   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:34.061075   62139 cri.go:89] found id: ""
	I0416 01:02:34.061104   62139 logs.go:276] 0 containers: []
	W0416 01:02:34.061111   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:34.061116   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:34.061179   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:34.097887   62139 cri.go:89] found id: ""
	I0416 01:02:34.097928   62139 logs.go:276] 0 containers: []
	W0416 01:02:34.097938   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:34.097946   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:34.098007   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:34.135541   62139 cri.go:89] found id: ""
	I0416 01:02:34.135567   62139 logs.go:276] 0 containers: []
	W0416 01:02:34.135577   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:34.135585   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:34.135637   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:34.170884   62139 cri.go:89] found id: ""
	I0416 01:02:34.170910   62139 logs.go:276] 0 containers: []
	W0416 01:02:34.170920   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:34.170931   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:34.170946   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:34.223465   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:34.223494   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:34.238898   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:34.238929   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:34.316916   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:34.316946   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:34.316962   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:34.401564   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:34.401600   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:36.945789   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:36.959707   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:36.959774   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:36.994463   62139 cri.go:89] found id: ""
	I0416 01:02:36.994497   62139 logs.go:276] 0 containers: []
	W0416 01:02:36.994508   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:36.994515   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:36.994579   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:37.028847   62139 cri.go:89] found id: ""
	I0416 01:02:37.028877   62139 logs.go:276] 0 containers: []
	W0416 01:02:37.028887   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:37.028893   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:37.028954   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:37.061841   62139 cri.go:89] found id: ""
	I0416 01:02:37.061872   62139 logs.go:276] 0 containers: []
	W0416 01:02:37.061882   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:37.061889   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:37.061954   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:37.098460   62139 cri.go:89] found id: ""
	I0416 01:02:37.098485   62139 logs.go:276] 0 containers: []
	W0416 01:02:37.098495   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:37.098502   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:37.098569   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:33.220794   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:35.221650   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:37.222563   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:32.829517   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:34.829762   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:36.831773   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:33.121842   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:35.620647   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:37.620795   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:37.133016   62139 cri.go:89] found id: ""
	I0416 01:02:37.133044   62139 logs.go:276] 0 containers: []
	W0416 01:02:37.133053   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:37.133059   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:37.133122   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:37.170252   62139 cri.go:89] found id: ""
	I0416 01:02:37.170276   62139 logs.go:276] 0 containers: []
	W0416 01:02:37.170286   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:37.170293   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:37.170354   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:37.206114   62139 cri.go:89] found id: ""
	I0416 01:02:37.206141   62139 logs.go:276] 0 containers: []
	W0416 01:02:37.206148   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:37.206153   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:37.206208   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:37.241353   62139 cri.go:89] found id: ""
	I0416 01:02:37.241383   62139 logs.go:276] 0 containers: []
	W0416 01:02:37.241395   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:37.241405   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:37.241429   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:37.293452   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:37.293483   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:37.309885   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:37.309926   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:37.385455   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:37.385481   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:37.385496   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:37.463064   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:37.463101   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:40.008717   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:40.022249   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:40.022327   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:40.064444   62139 cri.go:89] found id: ""
	I0416 01:02:40.064479   62139 logs.go:276] 0 containers: []
	W0416 01:02:40.064490   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:40.064497   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:40.064545   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:40.100326   62139 cri.go:89] found id: ""
	I0416 01:02:40.100353   62139 logs.go:276] 0 containers: []
	W0416 01:02:40.100361   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:40.100366   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:40.100413   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:40.138818   62139 cri.go:89] found id: ""
	I0416 01:02:40.138857   62139 logs.go:276] 0 containers: []
	W0416 01:02:40.138869   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:40.138878   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:40.138928   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:40.184203   62139 cri.go:89] found id: ""
	I0416 01:02:40.184234   62139 logs.go:276] 0 containers: []
	W0416 01:02:40.184244   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:40.184252   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:40.184311   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:40.221968   62139 cri.go:89] found id: ""
	I0416 01:02:40.221991   62139 logs.go:276] 0 containers: []
	W0416 01:02:40.221998   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:40.222007   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:40.222088   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:40.265621   62139 cri.go:89] found id: ""
	I0416 01:02:40.265643   62139 logs.go:276] 0 containers: []
	W0416 01:02:40.265650   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:40.265657   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:40.265723   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:40.314121   62139 cri.go:89] found id: ""
	I0416 01:02:40.314152   62139 logs.go:276] 0 containers: []
	W0416 01:02:40.314163   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:40.314170   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:40.314229   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:40.359788   62139 cri.go:89] found id: ""
	I0416 01:02:40.359825   62139 logs.go:276] 0 containers: []
	W0416 01:02:40.359836   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:40.359849   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:40.359863   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:40.431678   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:40.431718   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:40.449847   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:40.449877   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:40.524271   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:40.524297   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:40.524309   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:40.601398   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:40.601433   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:39.720606   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:41.721437   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:39.330974   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:41.830050   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:40.120785   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:42.123996   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:43.145431   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:43.160269   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:43.160338   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:43.196603   62139 cri.go:89] found id: ""
	I0416 01:02:43.196637   62139 logs.go:276] 0 containers: []
	W0416 01:02:43.196648   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:43.196655   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:43.196716   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:43.235863   62139 cri.go:89] found id: ""
	I0416 01:02:43.235893   62139 logs.go:276] 0 containers: []
	W0416 01:02:43.235905   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:43.235911   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:43.235971   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:43.271408   62139 cri.go:89] found id: ""
	I0416 01:02:43.271437   62139 logs.go:276] 0 containers: []
	W0416 01:02:43.271444   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:43.271450   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:43.271512   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:43.310931   62139 cri.go:89] found id: ""
	I0416 01:02:43.310958   62139 logs.go:276] 0 containers: []
	W0416 01:02:43.310965   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:43.310971   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:43.311032   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:43.347472   62139 cri.go:89] found id: ""
	I0416 01:02:43.347502   62139 logs.go:276] 0 containers: []
	W0416 01:02:43.347512   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:43.347520   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:43.347581   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:43.387326   62139 cri.go:89] found id: ""
	I0416 01:02:43.387361   62139 logs.go:276] 0 containers: []
	W0416 01:02:43.387372   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:43.387429   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:43.387506   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:43.425099   62139 cri.go:89] found id: ""
	I0416 01:02:43.425122   62139 logs.go:276] 0 containers: []
	W0416 01:02:43.425130   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:43.425141   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:43.425208   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:43.461364   62139 cri.go:89] found id: ""
	I0416 01:02:43.461397   62139 logs.go:276] 0 containers: []
	W0416 01:02:43.461408   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:43.461419   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:43.461434   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:43.514520   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:43.514556   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:43.528740   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:43.528777   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:43.599010   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:43.599035   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:43.599051   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:43.682913   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:43.682959   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:46.231398   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:46.260247   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:46.260338   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:46.304498   62139 cri.go:89] found id: ""
	I0416 01:02:46.304521   62139 logs.go:276] 0 containers: []
	W0416 01:02:46.304528   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:46.304534   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:46.304600   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:46.364055   62139 cri.go:89] found id: ""
	I0416 01:02:46.364081   62139 logs.go:276] 0 containers: []
	W0416 01:02:46.364090   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:46.364098   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:46.364167   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:46.412395   62139 cri.go:89] found id: ""
	I0416 01:02:46.412437   62139 logs.go:276] 0 containers: []
	W0416 01:02:46.412475   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:46.412510   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:46.412584   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:46.453669   62139 cri.go:89] found id: ""
	I0416 01:02:46.453698   62139 logs.go:276] 0 containers: []
	W0416 01:02:46.453709   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:46.453716   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:46.453766   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:46.490667   62139 cri.go:89] found id: ""
	I0416 01:02:46.490699   62139 logs.go:276] 0 containers: []
	W0416 01:02:46.490709   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:46.490715   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:46.490766   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:46.529405   62139 cri.go:89] found id: ""
	I0416 01:02:46.529443   62139 logs.go:276] 0 containers: []
	W0416 01:02:46.529460   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:46.529467   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:46.529527   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:46.565359   62139 cri.go:89] found id: ""
	I0416 01:02:46.565384   62139 logs.go:276] 0 containers: []
	W0416 01:02:46.565391   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:46.565396   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:46.565451   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:46.609381   62139 cri.go:89] found id: ""
	I0416 01:02:46.609406   62139 logs.go:276] 0 containers: []
	W0416 01:02:46.609413   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:46.609421   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:46.609432   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:46.663080   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:46.663112   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:46.677303   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:46.677338   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:46.750134   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:46.750163   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:46.750175   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:46.829395   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:46.829434   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:43.721477   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:46.220462   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:43.831829   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:46.329333   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:44.619712   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:46.621271   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:49.374356   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:49.390674   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:49.390753   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:49.427968   62139 cri.go:89] found id: ""
	I0416 01:02:49.427993   62139 logs.go:276] 0 containers: []
	W0416 01:02:49.428000   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:49.428005   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:49.428058   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:49.461821   62139 cri.go:89] found id: ""
	I0416 01:02:49.461850   62139 logs.go:276] 0 containers: []
	W0416 01:02:49.461857   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:49.461863   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:49.461918   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:49.496305   62139 cri.go:89] found id: ""
	I0416 01:02:49.496356   62139 logs.go:276] 0 containers: []
	W0416 01:02:49.496364   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:49.496369   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:49.496429   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:49.536096   62139 cri.go:89] found id: ""
	I0416 01:02:49.536122   62139 logs.go:276] 0 containers: []
	W0416 01:02:49.536129   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:49.536134   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:49.536194   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:49.572078   62139 cri.go:89] found id: ""
	I0416 01:02:49.572106   62139 logs.go:276] 0 containers: []
	W0416 01:02:49.572115   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:49.572122   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:49.572181   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:49.607803   62139 cri.go:89] found id: ""
	I0416 01:02:49.607835   62139 logs.go:276] 0 containers: []
	W0416 01:02:49.607847   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:49.607861   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:49.607915   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:49.651245   62139 cri.go:89] found id: ""
	I0416 01:02:49.651272   62139 logs.go:276] 0 containers: []
	W0416 01:02:49.651280   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:49.651285   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:49.651332   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:49.693587   62139 cri.go:89] found id: ""
	I0416 01:02:49.693612   62139 logs.go:276] 0 containers: []
	W0416 01:02:49.693622   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:49.693632   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:49.693646   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:49.750003   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:49.750032   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:49.764447   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:49.764472   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:49.844739   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:49.844764   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:49.844780   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:49.924260   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:49.924294   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:48.220753   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:50.220986   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:48.330946   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:50.829409   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:49.120516   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:51.619516   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:52.467399   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:52.481656   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:52.481729   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:52.518506   62139 cri.go:89] found id: ""
	I0416 01:02:52.518531   62139 logs.go:276] 0 containers: []
	W0416 01:02:52.518537   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:52.518544   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:52.518599   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:52.554799   62139 cri.go:89] found id: ""
	I0416 01:02:52.554820   62139 logs.go:276] 0 containers: []
	W0416 01:02:52.554827   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:52.554832   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:52.554888   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:52.597236   62139 cri.go:89] found id: ""
	I0416 01:02:52.597265   62139 logs.go:276] 0 containers: []
	W0416 01:02:52.597272   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:52.597278   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:52.597335   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:52.635544   62139 cri.go:89] found id: ""
	I0416 01:02:52.635567   62139 logs.go:276] 0 containers: []
	W0416 01:02:52.635578   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:52.635585   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:52.635639   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:52.672715   62139 cri.go:89] found id: ""
	I0416 01:02:52.672739   62139 logs.go:276] 0 containers: []
	W0416 01:02:52.672746   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:52.672751   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:52.672808   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:52.711600   62139 cri.go:89] found id: ""
	I0416 01:02:52.711631   62139 logs.go:276] 0 containers: []
	W0416 01:02:52.711640   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:52.711648   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:52.711718   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:52.750372   62139 cri.go:89] found id: ""
	I0416 01:02:52.750405   62139 logs.go:276] 0 containers: []
	W0416 01:02:52.750416   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:52.750423   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:52.750486   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:52.786651   62139 cri.go:89] found id: ""
	I0416 01:02:52.786678   62139 logs.go:276] 0 containers: []
	W0416 01:02:52.786688   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:52.786698   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:52.786712   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:52.840262   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:52.840296   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:52.854734   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:52.854762   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:52.931182   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:52.931211   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:52.931226   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:53.007023   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:53.007061   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:55.548305   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:55.562483   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:55.562562   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:55.599480   62139 cri.go:89] found id: ""
	I0416 01:02:55.599504   62139 logs.go:276] 0 containers: []
	W0416 01:02:55.599511   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:55.599517   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:55.599573   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:55.636832   62139 cri.go:89] found id: ""
	I0416 01:02:55.636862   62139 logs.go:276] 0 containers: []
	W0416 01:02:55.636873   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:55.636879   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:55.636940   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:55.676211   62139 cri.go:89] found id: ""
	I0416 01:02:55.676240   62139 logs.go:276] 0 containers: []
	W0416 01:02:55.676250   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:55.676256   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:55.676318   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:55.713498   62139 cri.go:89] found id: ""
	I0416 01:02:55.713527   62139 logs.go:276] 0 containers: []
	W0416 01:02:55.713537   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:55.713544   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:55.713604   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:55.754239   62139 cri.go:89] found id: ""
	I0416 01:02:55.754276   62139 logs.go:276] 0 containers: []
	W0416 01:02:55.754284   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:55.754301   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:55.754355   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:55.792073   62139 cri.go:89] found id: ""
	I0416 01:02:55.792106   62139 logs.go:276] 0 containers: []
	W0416 01:02:55.792117   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:55.792125   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:55.792191   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:55.829635   62139 cri.go:89] found id: ""
	I0416 01:02:55.829665   62139 logs.go:276] 0 containers: []
	W0416 01:02:55.829676   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:55.829683   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:55.829742   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:55.876417   62139 cri.go:89] found id: ""
	I0416 01:02:55.876443   62139 logs.go:276] 0 containers: []
	W0416 01:02:55.876450   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:55.876458   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:55.876471   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:55.926670   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:55.926707   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:55.941660   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:55.941696   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:56.018776   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:56.018806   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:56.018820   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:56.097335   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:56.097378   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:02:52.720703   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:55.221614   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:52.830970   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:55.329886   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:53.620969   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:56.122135   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:58.642188   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:02:58.655537   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:02:58.655605   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:02:58.692091   62139 cri.go:89] found id: ""
	I0416 01:02:58.692116   62139 logs.go:276] 0 containers: []
	W0416 01:02:58.692124   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:02:58.692129   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:02:58.692191   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:02:58.729434   62139 cri.go:89] found id: ""
	I0416 01:02:58.729461   62139 logs.go:276] 0 containers: []
	W0416 01:02:58.729472   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:02:58.729491   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:02:58.729568   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:02:58.765879   62139 cri.go:89] found id: ""
	I0416 01:02:58.765907   62139 logs.go:276] 0 containers: []
	W0416 01:02:58.765916   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:02:58.765924   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:02:58.765987   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:02:58.802285   62139 cri.go:89] found id: ""
	I0416 01:02:58.802323   62139 logs.go:276] 0 containers: []
	W0416 01:02:58.802334   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:02:58.802342   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:02:58.802399   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:02:58.841357   62139 cri.go:89] found id: ""
	I0416 01:02:58.841385   62139 logs.go:276] 0 containers: []
	W0416 01:02:58.841396   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:02:58.841403   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:02:58.841464   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:02:58.876982   62139 cri.go:89] found id: ""
	I0416 01:02:58.877022   62139 logs.go:276] 0 containers: []
	W0416 01:02:58.877032   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:02:58.877040   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:02:58.877108   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:02:58.915563   62139 cri.go:89] found id: ""
	I0416 01:02:58.915596   62139 logs.go:276] 0 containers: []
	W0416 01:02:58.915607   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:02:58.915614   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:02:58.915683   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:02:58.951268   62139 cri.go:89] found id: ""
	I0416 01:02:58.951303   62139 logs.go:276] 0 containers: []
	W0416 01:02:58.951313   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:02:58.951324   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:02:58.951341   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:02:59.004673   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:02:59.004710   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:59.019393   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:02:59.019423   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:02:59.091587   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:02:59.091612   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:02:59.091632   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:02:59.169623   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:02:59.169655   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:01.710597   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:01.724394   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:01.724463   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:01.761577   62139 cri.go:89] found id: ""
	I0416 01:03:01.761605   62139 logs.go:276] 0 containers: []
	W0416 01:03:01.761616   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:01.761624   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:01.761684   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:01.797467   62139 cri.go:89] found id: ""
	I0416 01:03:01.797498   62139 logs.go:276] 0 containers: []
	W0416 01:03:01.797508   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:01.797515   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:01.797582   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:01.839910   62139 cri.go:89] found id: ""
	I0416 01:03:01.839940   62139 logs.go:276] 0 containers: []
	W0416 01:03:01.839950   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:01.839958   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:01.840019   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:01.879572   62139 cri.go:89] found id: ""
	I0416 01:03:01.879599   62139 logs.go:276] 0 containers: []
	W0416 01:03:01.879611   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:01.879617   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:01.879664   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:01.920190   62139 cri.go:89] found id: ""
	I0416 01:03:01.920222   62139 logs.go:276] 0 containers: []
	W0416 01:03:01.920234   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:01.920242   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:01.920300   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:01.957389   62139 cri.go:89] found id: ""
	I0416 01:03:01.957418   62139 logs.go:276] 0 containers: []
	W0416 01:03:01.957428   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:01.957436   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:01.957507   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:01.998730   62139 cri.go:89] found id: ""
	I0416 01:03:01.998754   62139 logs.go:276] 0 containers: []
	W0416 01:03:01.998762   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:01.998767   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:01.998812   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:02.036062   62139 cri.go:89] found id: ""
	I0416 01:03:02.036094   62139 logs.go:276] 0 containers: []
	W0416 01:03:02.036103   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:02.036112   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:02.036125   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:02.089109   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:02.089149   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:02:57.720792   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:00.219899   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:02.220048   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:57.832016   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:00.328867   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:02.330238   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:02:58.620416   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:01.121496   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:02.103312   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:02.103342   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:02.174034   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:02.174056   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:02.174069   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:02.249526   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:02.249555   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:04.795314   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:04.808294   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:04.808367   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:04.848795   62139 cri.go:89] found id: ""
	I0416 01:03:04.848825   62139 logs.go:276] 0 containers: []
	W0416 01:03:04.848849   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:04.848857   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:04.848928   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:04.886442   62139 cri.go:89] found id: ""
	I0416 01:03:04.886477   62139 logs.go:276] 0 containers: []
	W0416 01:03:04.886488   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:04.886502   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:04.886572   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:04.929183   62139 cri.go:89] found id: ""
	I0416 01:03:04.929215   62139 logs.go:276] 0 containers: []
	W0416 01:03:04.929226   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:04.929234   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:04.929297   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:04.965134   62139 cri.go:89] found id: ""
	I0416 01:03:04.965172   62139 logs.go:276] 0 containers: []
	W0416 01:03:04.965184   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:04.965191   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:04.965247   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:05.001346   62139 cri.go:89] found id: ""
	I0416 01:03:05.001373   62139 logs.go:276] 0 containers: []
	W0416 01:03:05.001381   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:05.001387   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:05.001434   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:05.039181   62139 cri.go:89] found id: ""
	I0416 01:03:05.039210   62139 logs.go:276] 0 containers: []
	W0416 01:03:05.039219   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:05.039224   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:05.039289   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:05.073451   62139 cri.go:89] found id: ""
	I0416 01:03:05.073479   62139 logs.go:276] 0 containers: []
	W0416 01:03:05.073487   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:05.073494   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:05.073555   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:05.108466   62139 cri.go:89] found id: ""
	I0416 01:03:05.108495   62139 logs.go:276] 0 containers: []
	W0416 01:03:05.108510   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:05.108520   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:05.108537   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:05.162725   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:05.162765   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:05.178152   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:05.178183   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:05.255122   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:05.255147   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:05.255161   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:05.331274   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:05.331309   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:04.220320   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:06.220475   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:04.331381   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:06.830143   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:03.620275   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:06.121293   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:07.882980   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:07.896311   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:07.896372   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:07.934632   62139 cri.go:89] found id: ""
	I0416 01:03:07.934661   62139 logs.go:276] 0 containers: []
	W0416 01:03:07.934671   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:07.934677   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:07.934745   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:07.971463   62139 cri.go:89] found id: ""
	I0416 01:03:07.971495   62139 logs.go:276] 0 containers: []
	W0416 01:03:07.971511   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:07.971518   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:07.971581   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:08.006808   62139 cri.go:89] found id: ""
	I0416 01:03:08.006839   62139 logs.go:276] 0 containers: []
	W0416 01:03:08.006847   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:08.006852   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:08.006912   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:08.043051   62139 cri.go:89] found id: ""
	I0416 01:03:08.043082   62139 logs.go:276] 0 containers: []
	W0416 01:03:08.043089   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:08.043095   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:08.043155   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:08.078602   62139 cri.go:89] found id: ""
	I0416 01:03:08.078638   62139 logs.go:276] 0 containers: []
	W0416 01:03:08.078647   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:08.078655   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:08.078724   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:08.115264   62139 cri.go:89] found id: ""
	I0416 01:03:08.115293   62139 logs.go:276] 0 containers: []
	W0416 01:03:08.115303   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:08.115311   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:08.115378   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:08.152782   62139 cri.go:89] found id: ""
	I0416 01:03:08.152814   62139 logs.go:276] 0 containers: []
	W0416 01:03:08.152821   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:08.152826   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:08.152875   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:08.193484   62139 cri.go:89] found id: ""
	I0416 01:03:08.193506   62139 logs.go:276] 0 containers: []
	W0416 01:03:08.193513   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:08.193522   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:08.193532   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:08.248796   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:08.248831   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:08.266054   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:08.266083   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:08.343470   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:08.343501   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:08.343515   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:08.430335   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:08.430383   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:10.972540   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:10.986911   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:10.986984   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:11.024905   62139 cri.go:89] found id: ""
	I0416 01:03:11.024939   62139 logs.go:276] 0 containers: []
	W0416 01:03:11.024951   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:11.024958   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:11.025011   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:11.058629   62139 cri.go:89] found id: ""
	I0416 01:03:11.058654   62139 logs.go:276] 0 containers: []
	W0416 01:03:11.058662   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:11.058667   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:11.058721   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:11.093277   62139 cri.go:89] found id: ""
	I0416 01:03:11.093308   62139 logs.go:276] 0 containers: []
	W0416 01:03:11.093317   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:11.093325   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:11.093386   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:11.131883   62139 cri.go:89] found id: ""
	I0416 01:03:11.131912   62139 logs.go:276] 0 containers: []
	W0416 01:03:11.131924   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:11.131934   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:11.132004   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:11.175142   62139 cri.go:89] found id: ""
	I0416 01:03:11.175169   62139 logs.go:276] 0 containers: []
	W0416 01:03:11.175179   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:11.175186   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:11.175236   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:11.209985   62139 cri.go:89] found id: ""
	I0416 01:03:11.210020   62139 logs.go:276] 0 containers: []
	W0416 01:03:11.210031   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:11.210039   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:11.210110   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:11.246086   62139 cri.go:89] found id: ""
	I0416 01:03:11.246119   62139 logs.go:276] 0 containers: []
	W0416 01:03:11.246129   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:11.246137   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:11.246199   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:11.286979   62139 cri.go:89] found id: ""
	I0416 01:03:11.287007   62139 logs.go:276] 0 containers: []
	W0416 01:03:11.287019   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:11.287037   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:11.287051   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:11.364522   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:11.364557   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:11.410343   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:11.410375   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:11.459671   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:11.459703   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:11.476163   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:11.476193   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:11.549544   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:08.220881   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:10.720607   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:09.329882   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:11.330570   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:08.620817   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:11.120789   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:14.050433   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:14.065375   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:14.065431   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:14.105548   62139 cri.go:89] found id: ""
	I0416 01:03:14.105571   62139 logs.go:276] 0 containers: []
	W0416 01:03:14.105579   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:14.105583   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:14.105644   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:14.146891   62139 cri.go:89] found id: ""
	I0416 01:03:14.146915   62139 logs.go:276] 0 containers: []
	W0416 01:03:14.146922   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:14.146927   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:14.146972   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:14.183905   62139 cri.go:89] found id: ""
	I0416 01:03:14.183937   62139 logs.go:276] 0 containers: []
	W0416 01:03:14.183948   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:14.183954   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:14.184002   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:14.219878   62139 cri.go:89] found id: ""
	I0416 01:03:14.219905   62139 logs.go:276] 0 containers: []
	W0416 01:03:14.219915   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:14.219922   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:14.219978   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:14.256284   62139 cri.go:89] found id: ""
	I0416 01:03:14.256310   62139 logs.go:276] 0 containers: []
	W0416 01:03:14.256317   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:14.256323   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:14.256381   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:14.295932   62139 cri.go:89] found id: ""
	I0416 01:03:14.295958   62139 logs.go:276] 0 containers: []
	W0416 01:03:14.295966   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:14.295971   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:14.296025   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:14.333202   62139 cri.go:89] found id: ""
	I0416 01:03:14.333226   62139 logs.go:276] 0 containers: []
	W0416 01:03:14.333235   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:14.333242   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:14.333302   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:14.370034   62139 cri.go:89] found id: ""
	I0416 01:03:14.370059   62139 logs.go:276] 0 containers: []
	W0416 01:03:14.370066   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:14.370074   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:14.370092   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:14.424626   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:14.424669   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:14.441842   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:14.441872   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:14.515899   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:14.515926   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:14.515944   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:14.599956   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:14.599991   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:12.720896   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:15.220260   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:13.829944   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:16.328971   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:13.621084   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:16.120767   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:17.157610   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:17.171737   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:17.171800   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:17.214327   62139 cri.go:89] found id: ""
	I0416 01:03:17.214354   62139 logs.go:276] 0 containers: []
	W0416 01:03:17.214364   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:17.214371   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:17.214433   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:17.255896   62139 cri.go:89] found id: ""
	I0416 01:03:17.255924   62139 logs.go:276] 0 containers: []
	W0416 01:03:17.255939   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:17.255946   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:17.256005   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:17.298470   62139 cri.go:89] found id: ""
	I0416 01:03:17.298498   62139 logs.go:276] 0 containers: []
	W0416 01:03:17.298512   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:17.298520   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:17.298580   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:17.338810   62139 cri.go:89] found id: ""
	I0416 01:03:17.338834   62139 logs.go:276] 0 containers: []
	W0416 01:03:17.338842   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:17.338847   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:17.338899   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:17.375980   62139 cri.go:89] found id: ""
	I0416 01:03:17.376012   62139 logs.go:276] 0 containers: []
	W0416 01:03:17.376019   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:17.376024   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:17.376076   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:17.411374   62139 cri.go:89] found id: ""
	I0416 01:03:17.411400   62139 logs.go:276] 0 containers: []
	W0416 01:03:17.411408   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:17.411413   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:17.411463   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:17.452916   62139 cri.go:89] found id: ""
	I0416 01:03:17.452951   62139 logs.go:276] 0 containers: []
	W0416 01:03:17.452962   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:17.452969   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:17.453037   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:17.492459   62139 cri.go:89] found id: ""
	I0416 01:03:17.492489   62139 logs.go:276] 0 containers: []
	W0416 01:03:17.492500   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:17.492512   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:17.492527   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:17.541780   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:17.541814   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:17.558831   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:17.558867   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:17.635332   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:17.635351   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:17.635362   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:17.715778   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:17.715809   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:20.260621   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:20.274721   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:20.274791   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:20.311965   62139 cri.go:89] found id: ""
	I0416 01:03:20.311991   62139 logs.go:276] 0 containers: []
	W0416 01:03:20.312002   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:20.312009   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:20.312069   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:20.350316   62139 cri.go:89] found id: ""
	I0416 01:03:20.350346   62139 logs.go:276] 0 containers: []
	W0416 01:03:20.350356   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:20.350363   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:20.350414   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:20.404666   62139 cri.go:89] found id: ""
	I0416 01:03:20.404692   62139 logs.go:276] 0 containers: []
	W0416 01:03:20.404700   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:20.404705   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:20.404753   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:20.441223   62139 cri.go:89] found id: ""
	I0416 01:03:20.441254   62139 logs.go:276] 0 containers: []
	W0416 01:03:20.441267   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:20.441275   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:20.441340   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:20.480535   62139 cri.go:89] found id: ""
	I0416 01:03:20.480596   62139 logs.go:276] 0 containers: []
	W0416 01:03:20.480606   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:20.480613   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:20.480680   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:20.517520   62139 cri.go:89] found id: ""
	I0416 01:03:20.517543   62139 logs.go:276] 0 containers: []
	W0416 01:03:20.517550   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:20.517556   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:20.517614   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:20.556067   62139 cri.go:89] found id: ""
	I0416 01:03:20.556097   62139 logs.go:276] 0 containers: []
	W0416 01:03:20.556107   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:20.556114   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:20.556177   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:20.594901   62139 cri.go:89] found id: ""
	I0416 01:03:20.594932   62139 logs.go:276] 0 containers: []
	W0416 01:03:20.594939   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:20.594947   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:20.594958   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:20.673759   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:20.673795   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:20.721407   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:20.721443   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:20.772957   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:20.772989   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:20.787902   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:20.787932   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:20.863445   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:17.721415   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:20.221042   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:18.329421   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:20.329949   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:22.330009   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:18.122678   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:20.621127   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:22.621692   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:23.363637   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:23.377916   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:23.377991   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:23.415642   62139 cri.go:89] found id: ""
	I0416 01:03:23.415671   62139 logs.go:276] 0 containers: []
	W0416 01:03:23.415679   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:23.415685   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:23.415732   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:23.452788   62139 cri.go:89] found id: ""
	I0416 01:03:23.452812   62139 logs.go:276] 0 containers: []
	W0416 01:03:23.452819   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:23.452829   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:23.452878   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:23.488758   62139 cri.go:89] found id: ""
	I0416 01:03:23.488785   62139 logs.go:276] 0 containers: []
	W0416 01:03:23.488794   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:23.488801   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:23.488862   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:23.526542   62139 cri.go:89] found id: ""
	I0416 01:03:23.526574   62139 logs.go:276] 0 containers: []
	W0416 01:03:23.526584   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:23.526592   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:23.526661   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:23.562481   62139 cri.go:89] found id: ""
	I0416 01:03:23.562505   62139 logs.go:276] 0 containers: []
	W0416 01:03:23.562512   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:23.562518   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:23.562579   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:23.599119   62139 cri.go:89] found id: ""
	I0416 01:03:23.599145   62139 logs.go:276] 0 containers: []
	W0416 01:03:23.599155   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:23.599162   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:23.599241   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:23.642445   62139 cri.go:89] found id: ""
	I0416 01:03:23.642474   62139 logs.go:276] 0 containers: []
	W0416 01:03:23.642485   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:23.642492   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:23.642557   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:23.678091   62139 cri.go:89] found id: ""
	I0416 01:03:23.678113   62139 logs.go:276] 0 containers: []
	W0416 01:03:23.678121   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:23.678129   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:23.678140   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:23.731668   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:23.731703   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:23.746413   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:23.746444   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:23.821885   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:23.821908   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:23.821923   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:23.901836   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:23.901872   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:26.444935   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:26.459240   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:26.459308   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:26.499208   62139 cri.go:89] found id: ""
	I0416 01:03:26.499237   62139 logs.go:276] 0 containers: []
	W0416 01:03:26.499249   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:26.499256   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:26.499318   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:26.536220   62139 cri.go:89] found id: ""
	I0416 01:03:26.536258   62139 logs.go:276] 0 containers: []
	W0416 01:03:26.536270   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:26.536277   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:26.536342   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:26.576217   62139 cri.go:89] found id: ""
	I0416 01:03:26.576241   62139 logs.go:276] 0 containers: []
	W0416 01:03:26.576249   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:26.576254   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:26.576314   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:26.612343   62139 cri.go:89] found id: ""
	I0416 01:03:26.612369   62139 logs.go:276] 0 containers: []
	W0416 01:03:26.612378   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:26.612385   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:26.612448   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:26.651323   62139 cri.go:89] found id: ""
	I0416 01:03:26.651353   62139 logs.go:276] 0 containers: []
	W0416 01:03:26.651365   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:26.651384   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:26.651453   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:26.688844   62139 cri.go:89] found id: ""
	I0416 01:03:26.688874   62139 logs.go:276] 0 containers: []
	W0416 01:03:26.688885   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:26.688891   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:26.688969   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:26.724362   62139 cri.go:89] found id: ""
	I0416 01:03:26.724387   62139 logs.go:276] 0 containers: []
	W0416 01:03:26.724395   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:26.724401   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:26.724455   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:26.767766   62139 cri.go:89] found id: ""
	I0416 01:03:26.767795   62139 logs.go:276] 0 containers: []
	W0416 01:03:26.767806   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:26.767816   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:26.767837   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:26.788269   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:26.788297   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:26.884802   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:26.884822   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:26.884834   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:26.964007   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:26.964044   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:27.003719   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:27.003745   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:22.720420   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:24.720865   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:26.721369   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:24.828766   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:26.830222   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:25.119674   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:27.620689   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:29.563218   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:29.579014   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:29.579078   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:29.620739   62139 cri.go:89] found id: ""
	I0416 01:03:29.620769   62139 logs.go:276] 0 containers: []
	W0416 01:03:29.620780   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:29.620787   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:29.620850   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:29.658165   62139 cri.go:89] found id: ""
	I0416 01:03:29.658192   62139 logs.go:276] 0 containers: []
	W0416 01:03:29.658199   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:29.658205   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:29.658252   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:29.693893   62139 cri.go:89] found id: ""
	I0416 01:03:29.693921   62139 logs.go:276] 0 containers: []
	W0416 01:03:29.693929   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:29.693935   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:29.693985   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:29.737808   62139 cri.go:89] found id: ""
	I0416 01:03:29.737836   62139 logs.go:276] 0 containers: []
	W0416 01:03:29.737846   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:29.737851   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:29.737910   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:29.777382   62139 cri.go:89] found id: ""
	I0416 01:03:29.777408   62139 logs.go:276] 0 containers: []
	W0416 01:03:29.777416   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:29.777422   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:29.777473   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:29.815633   62139 cri.go:89] found id: ""
	I0416 01:03:29.815659   62139 logs.go:276] 0 containers: []
	W0416 01:03:29.815668   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:29.815682   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:29.815743   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:29.858790   62139 cri.go:89] found id: ""
	I0416 01:03:29.858820   62139 logs.go:276] 0 containers: []
	W0416 01:03:29.858831   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:29.858839   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:29.858899   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:29.897085   62139 cri.go:89] found id: ""
	I0416 01:03:29.897120   62139 logs.go:276] 0 containers: []
	W0416 01:03:29.897131   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:29.897142   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:29.897169   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:29.951231   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:29.951266   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:29.965539   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:29.965565   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:30.045138   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:30.045170   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:30.045186   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:30.120575   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:30.120606   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:29.220073   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:31.221145   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:29.328625   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:31.329903   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:29.621401   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:32.120604   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:32.662210   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:32.675833   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:32.675903   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:32.712104   62139 cri.go:89] found id: ""
	I0416 01:03:32.712129   62139 logs.go:276] 0 containers: []
	W0416 01:03:32.712136   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:32.712141   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:32.712198   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:32.749617   62139 cri.go:89] found id: ""
	I0416 01:03:32.749644   62139 logs.go:276] 0 containers: []
	W0416 01:03:32.749652   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:32.749658   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:32.749723   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:32.785069   62139 cri.go:89] found id: ""
	I0416 01:03:32.785100   62139 logs.go:276] 0 containers: []
	W0416 01:03:32.785110   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:32.785116   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:32.785191   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:32.825871   62139 cri.go:89] found id: ""
	I0416 01:03:32.825912   62139 logs.go:276] 0 containers: []
	W0416 01:03:32.825922   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:32.825928   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:32.826008   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:32.868294   62139 cri.go:89] found id: ""
	I0416 01:03:32.868321   62139 logs.go:276] 0 containers: []
	W0416 01:03:32.868328   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:32.868334   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:32.868401   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:32.907764   62139 cri.go:89] found id: ""
	I0416 01:03:32.907789   62139 logs.go:276] 0 containers: []
	W0416 01:03:32.907796   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:32.907802   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:32.907870   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:32.946112   62139 cri.go:89] found id: ""
	I0416 01:03:32.946137   62139 logs.go:276] 0 containers: []
	W0416 01:03:32.946144   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:32.946155   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:32.946215   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:32.985343   62139 cri.go:89] found id: ""
	I0416 01:03:32.985374   62139 logs.go:276] 0 containers: []
	W0416 01:03:32.985385   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:32.985395   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:32.985415   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:33.063117   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:33.063154   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:33.113739   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:33.113773   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:33.163466   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:33.163508   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:33.178368   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:33.178397   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:33.259509   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:35.760004   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:35.774161   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:35.774237   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:35.812551   62139 cri.go:89] found id: ""
	I0416 01:03:35.812580   62139 logs.go:276] 0 containers: []
	W0416 01:03:35.812589   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:35.812594   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:35.812642   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:35.853134   62139 cri.go:89] found id: ""
	I0416 01:03:35.853177   62139 logs.go:276] 0 containers: []
	W0416 01:03:35.853187   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:35.853195   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:35.853255   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:35.894210   62139 cri.go:89] found id: ""
	I0416 01:03:35.894246   62139 logs.go:276] 0 containers: []
	W0416 01:03:35.894254   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:35.894259   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:35.894330   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:35.928986   62139 cri.go:89] found id: ""
	I0416 01:03:35.929010   62139 logs.go:276] 0 containers: []
	W0416 01:03:35.929019   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:35.929027   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:35.929090   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:35.970688   62139 cri.go:89] found id: ""
	I0416 01:03:35.970712   62139 logs.go:276] 0 containers: []
	W0416 01:03:35.970719   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:35.970725   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:35.970783   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:36.005744   62139 cri.go:89] found id: ""
	I0416 01:03:36.005771   62139 logs.go:276] 0 containers: []
	W0416 01:03:36.005778   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:36.005783   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:36.005829   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:36.044932   62139 cri.go:89] found id: ""
	I0416 01:03:36.044966   62139 logs.go:276] 0 containers: []
	W0416 01:03:36.044977   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:36.044984   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:36.045051   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:36.080488   62139 cri.go:89] found id: ""
	I0416 01:03:36.080516   62139 logs.go:276] 0 containers: []
	W0416 01:03:36.080527   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:36.080538   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:36.080552   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:36.132956   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:36.133000   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:36.147070   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:36.147097   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:36.226640   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:36.226670   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:36.226684   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:36.307205   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:36.307249   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:33.221952   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:35.720745   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:33.828768   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:35.830452   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:34.120695   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:36.619511   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:38.849685   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:38.863817   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:38.863897   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:38.902418   62139 cri.go:89] found id: ""
	I0416 01:03:38.902445   62139 logs.go:276] 0 containers: []
	W0416 01:03:38.902455   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:38.902462   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:38.902533   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:38.937811   62139 cri.go:89] found id: ""
	I0416 01:03:38.937838   62139 logs.go:276] 0 containers: []
	W0416 01:03:38.937845   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:38.937850   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:38.937900   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:38.972380   62139 cri.go:89] found id: ""
	I0416 01:03:38.972403   62139 logs.go:276] 0 containers: []
	W0416 01:03:38.972411   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:38.972416   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:38.972466   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:39.007572   62139 cri.go:89] found id: ""
	I0416 01:03:39.007595   62139 logs.go:276] 0 containers: []
	W0416 01:03:39.007603   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:39.007608   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:39.007651   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:39.049355   62139 cri.go:89] found id: ""
	I0416 01:03:39.049382   62139 logs.go:276] 0 containers: []
	W0416 01:03:39.049391   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:39.049398   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:39.049459   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:39.084535   62139 cri.go:89] found id: ""
	I0416 01:03:39.084565   62139 logs.go:276] 0 containers: []
	W0416 01:03:39.084574   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:39.084581   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:39.084645   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:39.125027   62139 cri.go:89] found id: ""
	I0416 01:03:39.125055   62139 logs.go:276] 0 containers: []
	W0416 01:03:39.125073   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:39.125080   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:39.125136   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:39.164506   62139 cri.go:89] found id: ""
	I0416 01:03:39.164537   62139 logs.go:276] 0 containers: []
	W0416 01:03:39.164547   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:39.164557   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:39.164573   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:39.203447   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:39.203483   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:39.259087   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:39.259122   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:39.273611   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:39.273637   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:39.352372   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:39.352392   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:39.352407   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:41.938575   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:41.952937   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:41.953019   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:41.990771   62139 cri.go:89] found id: ""
	I0416 01:03:41.990802   62139 logs.go:276] 0 containers: []
	W0416 01:03:41.990811   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:41.990819   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:41.990881   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:42.027338   62139 cri.go:89] found id: ""
	I0416 01:03:42.027367   62139 logs.go:276] 0 containers: []
	W0416 01:03:42.027374   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:42.027379   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:42.027431   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:42.068348   62139 cri.go:89] found id: ""
	I0416 01:03:42.068377   62139 logs.go:276] 0 containers: []
	W0416 01:03:42.068387   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:42.068394   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:42.068457   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:38.220198   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:40.220481   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:42.221383   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:38.330729   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:40.831615   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:38.620021   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:40.620641   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:42.620702   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
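
The interleaved pod_ready lines belong to three other test profiles running in parallel in this capture (PIDs 61267, 61500 and 62747); each is polling its metrics-server pod in kube-system, which keeps reporting Ready=False throughout this window. A rough manual equivalent of that readiness check (pod name copied from the log; the jsonpath expression is illustrative, not what the test harness runs):

	kubectl -n kube-system get pod metrics-server-57f55c9bc5-9cnv2 \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
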
	I0416 01:03:42.108157   62139 cri.go:89] found id: ""
	I0416 01:03:42.108181   62139 logs.go:276] 0 containers: []
	W0416 01:03:42.108187   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:42.108193   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:42.108244   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:42.149749   62139 cri.go:89] found id: ""
	I0416 01:03:42.149770   62139 logs.go:276] 0 containers: []
	W0416 01:03:42.149777   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:42.149784   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:42.149848   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:42.185322   62139 cri.go:89] found id: ""
	I0416 01:03:42.185349   62139 logs.go:276] 0 containers: []
	W0416 01:03:42.185360   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:42.185368   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:42.185435   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:42.224334   62139 cri.go:89] found id: ""
	I0416 01:03:42.224359   62139 logs.go:276] 0 containers: []
	W0416 01:03:42.224370   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:42.224376   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:42.224435   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:42.263466   62139 cri.go:89] found id: ""
	I0416 01:03:42.263494   62139 logs.go:276] 0 containers: []
	W0416 01:03:42.263502   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:42.263509   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:42.263522   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:42.315106   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:42.315139   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:42.329394   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:42.329425   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:42.405267   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:42.405305   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:42.405321   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:42.486126   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:42.486168   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:45.027718   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:45.042387   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:45.042453   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:45.080790   62139 cri.go:89] found id: ""
	I0416 01:03:45.080814   62139 logs.go:276] 0 containers: []
	W0416 01:03:45.080823   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:45.080829   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:45.080875   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:45.121278   62139 cri.go:89] found id: ""
	I0416 01:03:45.121306   62139 logs.go:276] 0 containers: []
	W0416 01:03:45.121317   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:45.121324   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:45.121383   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:45.158076   62139 cri.go:89] found id: ""
	I0416 01:03:45.158099   62139 logs.go:276] 0 containers: []
	W0416 01:03:45.158107   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:45.158116   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:45.158162   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:45.195577   62139 cri.go:89] found id: ""
	I0416 01:03:45.195608   62139 logs.go:276] 0 containers: []
	W0416 01:03:45.195619   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:45.195627   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:45.195685   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:45.239230   62139 cri.go:89] found id: ""
	I0416 01:03:45.239257   62139 logs.go:276] 0 containers: []
	W0416 01:03:45.239267   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:45.239275   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:45.239326   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:45.279193   62139 cri.go:89] found id: ""
	I0416 01:03:45.279220   62139 logs.go:276] 0 containers: []
	W0416 01:03:45.279227   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:45.279232   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:45.279280   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:45.314876   62139 cri.go:89] found id: ""
	I0416 01:03:45.314908   62139 logs.go:276] 0 containers: []
	W0416 01:03:45.314916   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:45.314922   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:45.314970   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:45.351699   62139 cri.go:89] found id: ""
	I0416 01:03:45.351723   62139 logs.go:276] 0 containers: []
	W0416 01:03:45.351730   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:45.351738   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:45.351750   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:45.392681   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:45.392708   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:45.446564   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:45.446605   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:45.460541   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:45.460564   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:45.535287   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:45.535319   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:45.535334   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:44.720088   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:46.721511   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:43.329413   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:45.330644   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:45.123357   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:47.621806   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:48.117476   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:48.133341   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:48.133402   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:48.171230   62139 cri.go:89] found id: ""
	I0416 01:03:48.171263   62139 logs.go:276] 0 containers: []
	W0416 01:03:48.171273   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:48.171280   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:48.171337   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:48.206188   62139 cri.go:89] found id: ""
	I0416 01:03:48.206218   62139 logs.go:276] 0 containers: []
	W0416 01:03:48.206229   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:48.206236   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:48.206294   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:48.242349   62139 cri.go:89] found id: ""
	I0416 01:03:48.242377   62139 logs.go:276] 0 containers: []
	W0416 01:03:48.242384   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:48.242389   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:48.242437   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:48.278324   62139 cri.go:89] found id: ""
	I0416 01:03:48.278347   62139 logs.go:276] 0 containers: []
	W0416 01:03:48.278355   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:48.278360   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:48.278406   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:48.315727   62139 cri.go:89] found id: ""
	I0416 01:03:48.315753   62139 logs.go:276] 0 containers: []
	W0416 01:03:48.315763   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:48.315770   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:48.315828   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:48.354146   62139 cri.go:89] found id: ""
	I0416 01:03:48.354169   62139 logs.go:276] 0 containers: []
	W0416 01:03:48.354176   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:48.354182   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:48.354242   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:48.393951   62139 cri.go:89] found id: ""
	I0416 01:03:48.393989   62139 logs.go:276] 0 containers: []
	W0416 01:03:48.394000   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:48.394007   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:48.394081   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:48.431849   62139 cri.go:89] found id: ""
	I0416 01:03:48.431887   62139 logs.go:276] 0 containers: []
	W0416 01:03:48.431895   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:48.431903   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:48.431917   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:48.446210   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:48.446242   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:48.517459   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:48.517485   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:48.517500   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:48.596320   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:48.596356   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:48.639700   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:48.639733   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:51.197396   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:51.211803   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:51.211889   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:51.250768   62139 cri.go:89] found id: ""
	I0416 01:03:51.250793   62139 logs.go:276] 0 containers: []
	W0416 01:03:51.250802   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:51.250810   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:51.250872   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:51.291389   62139 cri.go:89] found id: ""
	I0416 01:03:51.291415   62139 logs.go:276] 0 containers: []
	W0416 01:03:51.291421   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:51.291429   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:51.291478   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:51.332466   62139 cri.go:89] found id: ""
	I0416 01:03:51.332490   62139 logs.go:276] 0 containers: []
	W0416 01:03:51.332499   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:51.332504   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:51.332549   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:51.367731   62139 cri.go:89] found id: ""
	I0416 01:03:51.367759   62139 logs.go:276] 0 containers: []
	W0416 01:03:51.367767   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:51.367773   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:51.367829   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:51.400567   62139 cri.go:89] found id: ""
	I0416 01:03:51.400599   62139 logs.go:276] 0 containers: []
	W0416 01:03:51.400609   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:51.400616   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:51.400679   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:51.433561   62139 cri.go:89] found id: ""
	I0416 01:03:51.433590   62139 logs.go:276] 0 containers: []
	W0416 01:03:51.433598   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:51.433608   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:51.433666   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:51.469136   62139 cri.go:89] found id: ""
	I0416 01:03:51.469179   62139 logs.go:276] 0 containers: []
	W0416 01:03:51.469189   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:51.469196   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:51.469255   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:51.504410   62139 cri.go:89] found id: ""
	I0416 01:03:51.504442   62139 logs.go:276] 0 containers: []
	W0416 01:03:51.504452   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:51.504462   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:51.504480   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:51.557420   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:51.557449   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:51.571481   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:51.571506   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:51.648722   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:51.648744   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:51.648755   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:51.728945   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:51.728978   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:49.221614   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:51.721798   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:47.829985   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:50.329419   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:52.329909   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:49.622776   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:52.120080   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:54.272503   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:54.286573   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:54.286646   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:54.321084   62139 cri.go:89] found id: ""
	I0416 01:03:54.321115   62139 logs.go:276] 0 containers: []
	W0416 01:03:54.321125   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:54.321133   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:54.321208   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:54.366333   62139 cri.go:89] found id: ""
	I0416 01:03:54.366364   62139 logs.go:276] 0 containers: []
	W0416 01:03:54.366374   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:54.366380   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:54.366437   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:54.406267   62139 cri.go:89] found id: ""
	I0416 01:03:54.406317   62139 logs.go:276] 0 containers: []
	W0416 01:03:54.406328   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:54.406336   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:54.406405   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:54.446853   62139 cri.go:89] found id: ""
	I0416 01:03:54.446883   62139 logs.go:276] 0 containers: []
	W0416 01:03:54.446894   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:54.446901   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:54.446956   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:54.487658   62139 cri.go:89] found id: ""
	I0416 01:03:54.487683   62139 logs.go:276] 0 containers: []
	W0416 01:03:54.487690   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:54.487696   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:54.487753   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:54.530189   62139 cri.go:89] found id: ""
	I0416 01:03:54.530216   62139 logs.go:276] 0 containers: []
	W0416 01:03:54.530226   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:54.530232   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:54.530289   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:54.571317   62139 cri.go:89] found id: ""
	I0416 01:03:54.571341   62139 logs.go:276] 0 containers: []
	W0416 01:03:54.571349   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:54.571354   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:54.571416   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:54.612432   62139 cri.go:89] found id: ""
	I0416 01:03:54.612458   62139 logs.go:276] 0 containers: []
	W0416 01:03:54.612467   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:54.612478   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:54.612493   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:54.666599   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:54.666629   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:54.680880   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:54.680915   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:54.757365   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:54.757386   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:54.757398   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:54.834436   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:54.834468   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:03:54.219690   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:56.220753   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:54.332950   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:56.830167   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:54.621002   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:56.622452   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:57.405516   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:03:57.420694   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:03:57.420773   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:03:57.460338   62139 cri.go:89] found id: ""
	I0416 01:03:57.460367   62139 logs.go:276] 0 containers: []
	W0416 01:03:57.460374   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:03:57.460381   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:03:57.460442   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:03:57.498121   62139 cri.go:89] found id: ""
	I0416 01:03:57.498150   62139 logs.go:276] 0 containers: []
	W0416 01:03:57.498160   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:03:57.498167   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:03:57.498228   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:03:57.536959   62139 cri.go:89] found id: ""
	I0416 01:03:57.536989   62139 logs.go:276] 0 containers: []
	W0416 01:03:57.537005   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:03:57.537014   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:03:57.537077   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:03:57.575633   62139 cri.go:89] found id: ""
	I0416 01:03:57.575662   62139 logs.go:276] 0 containers: []
	W0416 01:03:57.575673   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:03:57.575680   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:03:57.575743   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:03:57.614459   62139 cri.go:89] found id: ""
	I0416 01:03:57.614491   62139 logs.go:276] 0 containers: []
	W0416 01:03:57.614501   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:03:57.614509   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:03:57.614568   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:03:57.657078   62139 cri.go:89] found id: ""
	I0416 01:03:57.657109   62139 logs.go:276] 0 containers: []
	W0416 01:03:57.657120   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:03:57.657127   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:03:57.657204   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:03:57.693882   62139 cri.go:89] found id: ""
	I0416 01:03:57.693904   62139 logs.go:276] 0 containers: []
	W0416 01:03:57.693911   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:03:57.693922   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:03:57.693969   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:03:57.731283   62139 cri.go:89] found id: ""
	I0416 01:03:57.731312   62139 logs.go:276] 0 containers: []
	W0416 01:03:57.731320   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:03:57.731327   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:03:57.731338   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:03:57.782618   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:03:57.782656   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:57.796763   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:03:57.796794   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:03:57.869629   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:03:57.869652   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:03:57.869665   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:03:57.948859   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:03:57.948892   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:04:00.487682   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:04:00.501095   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:04:00.501182   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:04:00.537902   62139 cri.go:89] found id: ""
	I0416 01:04:00.537931   62139 logs.go:276] 0 containers: []
	W0416 01:04:00.537939   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:04:00.537945   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:04:00.537994   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:04:00.574164   62139 cri.go:89] found id: ""
	I0416 01:04:00.574203   62139 logs.go:276] 0 containers: []
	W0416 01:04:00.574214   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:04:00.574222   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:04:00.574287   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:04:00.629592   62139 cri.go:89] found id: ""
	I0416 01:04:00.629615   62139 logs.go:276] 0 containers: []
	W0416 01:04:00.629622   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:04:00.629627   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:04:00.629679   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:04:00.672102   62139 cri.go:89] found id: ""
	I0416 01:04:00.672127   62139 logs.go:276] 0 containers: []
	W0416 01:04:00.672134   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:04:00.672141   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:04:00.672201   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:04:00.715040   62139 cri.go:89] found id: ""
	I0416 01:04:00.715064   62139 logs.go:276] 0 containers: []
	W0416 01:04:00.715072   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:04:00.715078   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:04:00.715139   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:04:00.751113   62139 cri.go:89] found id: ""
	I0416 01:04:00.751137   62139 logs.go:276] 0 containers: []
	W0416 01:04:00.751146   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:04:00.751152   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:04:00.751204   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:04:00.787613   62139 cri.go:89] found id: ""
	I0416 01:04:00.787644   62139 logs.go:276] 0 containers: []
	W0416 01:04:00.787653   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:04:00.787660   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:04:00.787721   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:04:00.824244   62139 cri.go:89] found id: ""
	I0416 01:04:00.824271   62139 logs.go:276] 0 containers: []
	W0416 01:04:00.824280   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:04:00.824291   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:04:00.824304   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:04:00.899977   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:04:00.900014   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:04:00.900029   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:04:00.982317   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:04:00.982350   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:04:01.026354   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:04:01.026393   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:04:01.080393   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:04:01.080441   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:03:58.720894   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:00.720961   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:59.329460   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:01.330171   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:03:59.119259   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:01.619026   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:03.595966   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:04:03.609190   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:04:03.609253   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:04:03.647151   62139 cri.go:89] found id: ""
	I0416 01:04:03.647183   62139 logs.go:276] 0 containers: []
	W0416 01:04:03.647197   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:04:03.647203   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:04:03.647250   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:04:03.685211   62139 cri.go:89] found id: ""
	I0416 01:04:03.685239   62139 logs.go:276] 0 containers: []
	W0416 01:04:03.685248   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:04:03.685254   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:04:03.685303   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:04:03.720928   62139 cri.go:89] found id: ""
	I0416 01:04:03.720949   62139 logs.go:276] 0 containers: []
	W0416 01:04:03.720956   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:04:03.720961   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:04:03.721035   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:04:03.759179   62139 cri.go:89] found id: ""
	I0416 01:04:03.759210   62139 logs.go:276] 0 containers: []
	W0416 01:04:03.759220   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:04:03.759228   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:04:03.759290   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:04:03.795670   62139 cri.go:89] found id: ""
	I0416 01:04:03.795700   62139 logs.go:276] 0 containers: []
	W0416 01:04:03.795710   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:04:03.795717   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:04:03.795785   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:04:03.832944   62139 cri.go:89] found id: ""
	I0416 01:04:03.832971   62139 logs.go:276] 0 containers: []
	W0416 01:04:03.832980   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:04:03.832988   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:04:03.833053   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:04:03.869211   62139 cri.go:89] found id: ""
	I0416 01:04:03.869238   62139 logs.go:276] 0 containers: []
	W0416 01:04:03.869248   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:04:03.869256   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:04:03.869317   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:04:03.905859   62139 cri.go:89] found id: ""
	I0416 01:04:03.905888   62139 logs.go:276] 0 containers: []
	W0416 01:04:03.905896   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:04:03.905904   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:04:03.905915   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:04:03.957057   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:04:03.957088   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:04:03.972309   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:04:03.972344   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:04:04.049927   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:04:04.049950   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:04:04.049965   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:04:04.136395   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:04:04.136435   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:04:06.676667   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:04:06.690062   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:04:06.690125   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:04:06.733734   62139 cri.go:89] found id: ""
	I0416 01:04:06.733758   62139 logs.go:276] 0 containers: []
	W0416 01:04:06.733773   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:04:06.733782   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:04:06.733835   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:04:06.773112   62139 cri.go:89] found id: ""
	I0416 01:04:06.773140   62139 logs.go:276] 0 containers: []
	W0416 01:04:06.773147   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:04:06.773152   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:04:06.773231   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:04:06.812786   62139 cri.go:89] found id: ""
	I0416 01:04:06.812809   62139 logs.go:276] 0 containers: []
	W0416 01:04:06.812817   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:04:06.812822   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:04:06.812870   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:04:06.853995   62139 cri.go:89] found id: ""
	I0416 01:04:06.854022   62139 logs.go:276] 0 containers: []
	W0416 01:04:06.854029   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:04:06.854034   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:04:06.854088   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:04:06.893809   62139 cri.go:89] found id: ""
	I0416 01:04:06.893841   62139 logs.go:276] 0 containers: []
	W0416 01:04:06.893848   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:04:06.893853   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:04:06.893909   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:04:06.929389   62139 cri.go:89] found id: ""
	I0416 01:04:06.929419   62139 logs.go:276] 0 containers: []
	W0416 01:04:06.929430   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:04:06.929437   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:04:06.929518   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:04:06.968278   62139 cri.go:89] found id: ""
	I0416 01:04:06.968303   62139 logs.go:276] 0 containers: []
	W0416 01:04:06.968311   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:04:06.968316   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:04:06.968364   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:04:07.018932   62139 cri.go:89] found id: ""
	I0416 01:04:07.018965   62139 logs.go:276] 0 containers: []
	W0416 01:04:07.018976   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:04:07.018989   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:04:07.019003   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:04:07.083611   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:04:07.083645   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:04:03.220314   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:05.720941   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:03.830050   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:06.329416   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:03.619482   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:05.620393   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:07.110126   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:04:07.110152   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:04:07.186262   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:04:07.186290   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:04:07.186305   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:04:07.263139   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:04:07.263170   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:04:09.807489   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:04:09.822045   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:04:09.822110   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:04:09.867444   62139 cri.go:89] found id: ""
	I0416 01:04:09.867469   62139 logs.go:276] 0 containers: []
	W0416 01:04:09.867480   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:04:09.867487   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:04:09.867538   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:04:09.904280   62139 cri.go:89] found id: ""
	I0416 01:04:09.904312   62139 logs.go:276] 0 containers: []
	W0416 01:04:09.904323   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:04:09.904330   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:04:09.904389   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:04:09.941066   62139 cri.go:89] found id: ""
	I0416 01:04:09.941091   62139 logs.go:276] 0 containers: []
	W0416 01:04:09.941099   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:04:09.941107   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:04:09.941189   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:04:09.975739   62139 cri.go:89] found id: ""
	I0416 01:04:09.975767   62139 logs.go:276] 0 containers: []
	W0416 01:04:09.975777   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:04:09.975785   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:04:09.975844   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:04:10.011414   62139 cri.go:89] found id: ""
	I0416 01:04:10.011444   62139 logs.go:276] 0 containers: []
	W0416 01:04:10.011454   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:04:10.011461   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:04:10.011528   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:04:10.045670   62139 cri.go:89] found id: ""
	I0416 01:04:10.045695   62139 logs.go:276] 0 containers: []
	W0416 01:04:10.045704   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:04:10.045711   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:04:10.045777   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:04:10.082320   62139 cri.go:89] found id: ""
	I0416 01:04:10.082352   62139 logs.go:276] 0 containers: []
	W0416 01:04:10.082361   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:04:10.082368   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:04:10.082428   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:04:10.120453   62139 cri.go:89] found id: ""
	I0416 01:04:10.120482   62139 logs.go:276] 0 containers: []
	W0416 01:04:10.120492   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:04:10.120501   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:04:10.120515   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:04:10.200213   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:04:10.200251   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:04:10.251709   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:04:10.251742   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:04:10.307348   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:04:10.307382   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0416 01:04:10.321293   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:04:10.321319   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:04:10.401361   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:04:08.220488   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:10.221408   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:08.331985   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:10.829244   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:08.119800   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:10.121093   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:12.126420   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:12.901763   62139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:04:12.916308   62139 kubeadm.go:591] duration metric: took 4m4.703830076s to restartPrimaryControlPlane
	W0416 01:04:12.916384   62139 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0416 01:04:12.916416   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0416 01:04:12.720462   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:14.721516   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:17.220364   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:12.830409   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:15.330184   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:14.620714   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:16.622203   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:17.897436   62139 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.980993606s)
	I0416 01:04:17.897592   62139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 01:04:17.914655   62139 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 01:04:17.927482   62139 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 01:04:17.940210   62139 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 01:04:17.940233   62139 kubeadm.go:156] found existing configuration files:
	
	I0416 01:04:17.940274   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 01:04:17.951037   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 01:04:17.951106   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 01:04:17.962341   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 01:04:17.972436   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 01:04:17.972500   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 01:04:17.983198   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 01:04:17.992856   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 01:04:17.992912   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 01:04:18.003122   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 01:04:18.014064   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 01:04:18.014117   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 01:04:18.024854   62139 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 01:04:18.101381   62139 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0416 01:04:18.101436   62139 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 01:04:18.246529   62139 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 01:04:18.246687   62139 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 01:04:18.246802   62139 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 01:04:18.456847   62139 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 01:04:18.458980   62139 out.go:204]   - Generating certificates and keys ...
	I0416 01:04:18.459096   62139 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 01:04:18.459190   62139 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 01:04:18.459294   62139 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0416 01:04:18.459381   62139 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0416 01:04:18.459473   62139 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0416 01:04:18.459548   62139 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0416 01:04:18.459631   62139 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0416 01:04:18.459721   62139 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0416 01:04:18.459822   62139 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0416 01:04:18.460281   62139 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0416 01:04:18.460387   62139 kubeadm.go:309] [certs] Using the existing "sa" key
	I0416 01:04:18.460475   62139 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 01:04:18.564910   62139 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 01:04:18.806406   62139 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 01:04:18.890124   62139 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 01:04:19.046415   62139 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 01:04:19.063159   62139 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 01:04:19.063301   62139 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 01:04:19.063415   62139 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 01:04:19.229066   62139 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 01:04:19.231110   62139 out.go:204]   - Booting up control plane ...
	I0416 01:04:19.231246   62139 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 01:04:19.248833   62139 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 01:04:19.250340   62139 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 01:04:19.251664   62139 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 01:04:19.254678   62139 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 01:04:19.221976   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:21.720239   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:17.830011   61500 pod_ready.go:102] pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:18.323271   61500 pod_ready.go:81] duration metric: took 4m0.000449424s for pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace to be "Ready" ...
	E0416 01:04:18.323300   61500 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-569cc877fc-llsfr" in "kube-system" namespace to be "Ready" (will not retry!)
	I0416 01:04:18.323318   61500 pod_ready.go:38] duration metric: took 4m9.009725319s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:04:18.323357   61500 kubeadm.go:591] duration metric: took 4m19.656264138s to restartPrimaryControlPlane
	W0416 01:04:18.323420   61500 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0416 01:04:18.323449   61500 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0416 01:04:19.122802   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:21.621389   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:24.227649   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:26.720896   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:24.119577   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:26.620166   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:29.219937   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:31.220697   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:28.622399   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:31.119279   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:33.221240   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:35.221536   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:33.124909   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:35.620718   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:37.720528   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:40.220531   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:38.120415   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:40.121126   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:42.620161   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:42.719946   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:44.720203   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:47.219782   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:44.620806   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:47.119479   62747 pod_ready.go:102] pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:47.613243   62747 pod_ready.go:81] duration metric: took 4m0.000098534s for pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace to be "Ready" ...
	E0416 01:04:47.613279   62747 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-knnvn" in "kube-system" namespace to be "Ready" (will not retry!)
	I0416 01:04:47.613297   62747 pod_ready.go:38] duration metric: took 4m12.544704519s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:04:47.613327   62747 kubeadm.go:591] duration metric: took 4m20.76891948s to restartPrimaryControlPlane
	W0416 01:04:47.613387   62747 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0416 01:04:47.613410   62747 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0416 01:04:50.224993   61500 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (31.901526458s)
	I0416 01:04:50.225057   61500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 01:04:50.241083   61500 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 01:04:50.252468   61500 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 01:04:50.263721   61500 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 01:04:50.263744   61500 kubeadm.go:156] found existing configuration files:
	
	I0416 01:04:50.263786   61500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 01:04:50.274550   61500 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 01:04:50.274620   61500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 01:04:50.285019   61500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 01:04:50.295079   61500 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 01:04:50.295151   61500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 01:04:50.306424   61500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 01:04:50.317221   61500 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 01:04:50.317286   61500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 01:04:50.327783   61500 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 01:04:50.338144   61500 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 01:04:50.338213   61500 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 01:04:50.349262   61500 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 01:04:50.410467   61500 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0-rc.2
	I0416 01:04:50.410597   61500 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 01:04:50.565288   61500 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 01:04:50.565442   61500 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 01:04:50.565580   61500 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 01:04:50.783173   61500 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 01:04:50.785219   61500 out.go:204]   - Generating certificates and keys ...
	I0416 01:04:50.785339   61500 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 01:04:50.785427   61500 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 01:04:50.785526   61500 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0416 01:04:50.785620   61500 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0416 01:04:50.785745   61500 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0416 01:04:50.785847   61500 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0416 01:04:50.785951   61500 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0416 01:04:50.786037   61500 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0416 01:04:50.786156   61500 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0416 01:04:50.786279   61500 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0416 01:04:50.786341   61500 kubeadm.go:309] [certs] Using the existing "sa" key
	I0416 01:04:50.786425   61500 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 01:04:50.868738   61500 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 01:04:51.024628   61500 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0416 01:04:51.304801   61500 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 01:04:51.485803   61500 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 01:04:51.614330   61500 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 01:04:51.615043   61500 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 01:04:51.617465   61500 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 01:04:49.720594   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:51.721464   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:51.619398   61500 out.go:204]   - Booting up control plane ...
	I0416 01:04:51.619519   61500 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 01:04:51.619637   61500 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 01:04:51.619717   61500 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 01:04:51.640756   61500 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 01:04:51.643264   61500 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 01:04:51.643617   61500 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 01:04:51.796506   61500 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0416 01:04:51.796640   61500 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0416 01:04:54.220965   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:56.222571   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:52.798698   61500 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002359416s
	I0416 01:04:52.798798   61500 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0416 01:04:57.802689   61500 kubeadm.go:309] [api-check] The API server is healthy after 5.003967397s
	I0416 01:04:57.816580   61500 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0416 01:04:57.840465   61500 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0416 01:04:57.879611   61500 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0416 01:04:57.879906   61500 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-572602 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0416 01:04:57.895211   61500 kubeadm.go:309] [bootstrap-token] Using token: w1qt2t.vu77oqcsegb1grvk
	I0416 01:04:57.896829   61500 out.go:204]   - Configuring RBAC rules ...
	I0416 01:04:57.896958   61500 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0416 01:04:57.905289   61500 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0416 01:04:57.916967   61500 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0416 01:04:57.922660   61500 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0416 01:04:57.926143   61500 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0416 01:04:57.935222   61500 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0416 01:04:58.215180   61500 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0416 01:04:58.656120   61500 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0416 01:04:59.209811   61500 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0416 01:04:59.211274   61500 kubeadm.go:309] 
	I0416 01:04:59.211354   61500 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0416 01:04:59.211390   61500 kubeadm.go:309] 
	I0416 01:04:59.211489   61500 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0416 01:04:59.211512   61500 kubeadm.go:309] 
	I0416 01:04:59.211556   61500 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0416 01:04:59.211626   61500 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0416 01:04:59.211695   61500 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0416 01:04:59.211707   61500 kubeadm.go:309] 
	I0416 01:04:59.211779   61500 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0416 01:04:59.211789   61500 kubeadm.go:309] 
	I0416 01:04:59.211853   61500 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0416 01:04:59.211921   61500 kubeadm.go:309] 
	I0416 01:04:59.212030   61500 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0416 01:04:59.212165   61500 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0416 01:04:59.212269   61500 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0416 01:04:59.212280   61500 kubeadm.go:309] 
	I0416 01:04:59.212407   61500 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0416 01:04:59.212516   61500 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0416 01:04:59.212525   61500 kubeadm.go:309] 
	I0416 01:04:59.212656   61500 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token w1qt2t.vu77oqcsegb1grvk \
	I0416 01:04:59.212835   61500 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b25013486716312e69a135916990864bd549ec06035b8322dd39250241b50fde \
	I0416 01:04:59.212880   61500 kubeadm.go:309] 	--control-plane 
	I0416 01:04:59.212894   61500 kubeadm.go:309] 
	I0416 01:04:59.212996   61500 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0416 01:04:59.213007   61500 kubeadm.go:309] 
	I0416 01:04:59.213111   61500 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token w1qt2t.vu77oqcsegb1grvk \
	I0416 01:04:59.213278   61500 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b25013486716312e69a135916990864bd549ec06035b8322dd39250241b50fde 
	I0416 01:04:59.213435   61500 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 01:04:59.213460   61500 cni.go:84] Creating CNI manager for ""
	I0416 01:04:59.213477   61500 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 01:04:59.215397   61500 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0416 01:04:59.255478   62139 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0416 01:04:59.256524   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:04:59.256807   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:04:58.720339   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:05:01.220968   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:04:59.216764   61500 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0416 01:04:59.230134   61500 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0416 01:04:59.250739   61500 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0416 01:04:59.250773   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:04:59.250775   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-572602 minikube.k8s.io/updated_at=2024_04_16T01_04_59_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=e98a202b00bb48daab8bbee2f0c7bcf1556f8388 minikube.k8s.io/name=no-preload-572602 minikube.k8s.io/primary=true
	I0416 01:04:59.462907   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:04:59.462915   61500 ops.go:34] apiserver oom_adj: -16
	I0416 01:04:59.962977   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:00.463142   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:00.963871   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:01.463866   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:01.963356   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:02.463729   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:04.257472   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:05:04.257756   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:05:03.720762   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:05:05.721421   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:05:02.963816   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:03.463370   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:03.963655   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:04.463681   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:04.963387   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:05.462926   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:05.963659   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:06.463091   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:06.963504   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:07.463783   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:07.963037   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:08.463212   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:08.963443   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:09.463179   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:09.963188   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:10.463264   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:10.963863   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:11.463051   61500 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:11.591367   61500 kubeadm.go:1107] duration metric: took 12.340665724s to wait for elevateKubeSystemPrivileges
	W0416 01:05:11.591410   61500 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0416 01:05:11.591425   61500 kubeadm.go:393] duration metric: took 5m12.980123227s to StartCluster
	I0416 01:05:11.591451   61500 settings.go:142] acquiring lock: {Name:mk6e42a297b4f7bfb79727f203ae36d752cbb6a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:05:11.591559   61500 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0416 01:05:11.593498   61500 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/kubeconfig: {Name:mkbb3b028de7d57df8335e83f6dfa1b0eacb2fb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:05:11.593838   61500 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.121 Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0416 01:05:11.595572   61500 out.go:177] * Verifying Kubernetes components...
	I0416 01:05:11.593961   61500 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0416 01:05:11.594060   61500 config.go:182] Loaded profile config "no-preload-572602": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.0-rc.2
	I0416 01:05:11.597038   61500 addons.go:69] Setting default-storageclass=true in profile "no-preload-572602"
	I0416 01:05:11.597047   61500 addons.go:69] Setting metrics-server=true in profile "no-preload-572602"
	I0416 01:05:11.597077   61500 addons.go:234] Setting addon metrics-server=true in "no-preload-572602"
	I0416 01:05:11.597081   61500 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-572602"
	W0416 01:05:11.597084   61500 addons.go:243] addon metrics-server should already be in state true
	I0416 01:05:11.597168   61500 host.go:66] Checking if "no-preload-572602" exists ...
	I0416 01:05:11.597042   61500 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 01:05:11.597038   61500 addons.go:69] Setting storage-provisioner=true in profile "no-preload-572602"
	I0416 01:05:11.597274   61500 addons.go:234] Setting addon storage-provisioner=true in "no-preload-572602"
	W0416 01:05:11.597281   61500 addons.go:243] addon storage-provisioner should already be in state true
	I0416 01:05:11.597300   61500 host.go:66] Checking if "no-preload-572602" exists ...
	I0416 01:05:11.597516   61500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:11.597563   61500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:11.597590   61500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:11.597621   61500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:11.597621   61500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:11.597684   61500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:11.617344   61500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46065
	I0416 01:05:11.617833   61500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46345
	I0416 01:05:11.617853   61500 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:11.618040   61500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32847
	I0416 01:05:11.618170   61500 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:11.618385   61500 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:11.618539   61500 main.go:141] libmachine: Using API Version  1
	I0416 01:05:11.618564   61500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:11.618682   61500 main.go:141] libmachine: Using API Version  1
	I0416 01:05:11.618708   61500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:11.618786   61500 main.go:141] libmachine: Using API Version  1
	I0416 01:05:11.618806   61500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:11.619020   61500 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:11.619035   61500 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:11.619145   61500 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:11.619371   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetState
	I0416 01:05:11.619629   61500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:11.619663   61500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:11.619683   61500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:11.619715   61500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:11.622758   61500 addons.go:234] Setting addon default-storageclass=true in "no-preload-572602"
	W0416 01:05:11.622784   61500 addons.go:243] addon default-storageclass should already be in state true
	I0416 01:05:11.622814   61500 host.go:66] Checking if "no-preload-572602" exists ...
	I0416 01:05:11.623148   61500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:11.623182   61500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:11.640851   61500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44015
	I0416 01:05:11.641427   61500 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:11.642008   61500 main.go:141] libmachine: Using API Version  1
	I0416 01:05:11.642028   61500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:11.642429   61500 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:11.642635   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetState
	I0416 01:05:11.643204   61500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41753
	I0416 01:05:11.643239   61500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38953
	I0416 01:05:11.643578   61500 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:11.643673   61500 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:11.644133   61500 main.go:141] libmachine: Using API Version  1
	I0416 01:05:11.644150   61500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:11.644398   61500 main.go:141] libmachine: Using API Version  1
	I0416 01:05:11.644409   61500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:11.644508   61500 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:11.644786   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetState
	I0416 01:05:11.644823   61500 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:11.645630   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 01:05:11.645797   61500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:11.645824   61500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:11.648522   61500 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0416 01:05:11.646649   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 01:05:11.650173   61500 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0416 01:05:11.650185   61500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0416 01:05:11.650206   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 01:05:11.652524   61500 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 01:05:07.721798   61267 pod_ready.go:102] pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace has status "Ready":"False"
	I0416 01:05:08.214615   61267 pod_ready.go:81] duration metric: took 4m0.001005317s for pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace to be "Ready" ...
	E0416 01:05:08.214650   61267 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-9cnv2" in "kube-system" namespace to be "Ready" (will not retry!)
	I0416 01:05:08.214688   61267 pod_ready.go:38] duration metric: took 4m14.521894608s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:05:08.214750   61267 kubeadm.go:591] duration metric: took 4m22.563492336s to restartPrimaryControlPlane
	W0416 01:05:08.214821   61267 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0416 01:05:08.214857   61267 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0416 01:05:11.654173   61500 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 01:05:11.654189   61500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0416 01:05:11.654207   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 01:05:11.654021   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 01:05:11.654488   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 01:05:11.654524   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 01:05:11.654823   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 01:05:11.655016   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 01:05:11.655159   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 01:05:11.655331   61500 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/no-preload-572602/id_rsa Username:docker}
	I0416 01:05:11.657706   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 01:05:11.658193   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 01:05:11.658214   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 01:05:11.658388   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 01:05:11.658585   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 01:05:11.658761   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 01:05:11.658937   61500 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/no-preload-572602/id_rsa Username:docker}
	I0416 01:05:11.669485   61500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34717
	I0416 01:05:11.669878   61500 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:11.670340   61500 main.go:141] libmachine: Using API Version  1
	I0416 01:05:11.670352   61500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:11.670714   61500 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:11.670887   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetState
	I0416 01:05:11.672571   61500 main.go:141] libmachine: (no-preload-572602) Calling .DriverName
	I0416 01:05:11.672888   61500 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0416 01:05:11.672900   61500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0416 01:05:11.672912   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHHostname
	I0416 01:05:11.675816   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 01:05:11.676163   61500 main.go:141] libmachine: (no-preload-572602) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fb:a5:f3", ip: ""} in network mk-no-preload-572602: {Iface:virbr2 ExpiryTime:2024-04-16 01:59:32 +0000 UTC Type:0 Mac:52:54:00:fb:a5:f3 Iaid: IPaddr:192.168.39.121 Prefix:24 Hostname:no-preload-572602 Clientid:01:52:54:00:fb:a5:f3}
	I0416 01:05:11.676182   61500 main.go:141] libmachine: (no-preload-572602) DBG | domain no-preload-572602 has defined IP address 192.168.39.121 and MAC address 52:54:00:fb:a5:f3 in network mk-no-preload-572602
	I0416 01:05:11.676335   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHPort
	I0416 01:05:11.676513   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHKeyPath
	I0416 01:05:11.676657   61500 main.go:141] libmachine: (no-preload-572602) Calling .GetSSHUsername
	I0416 01:05:11.676799   61500 sshutil.go:53] new ssh client: &{IP:192.168.39.121 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/no-preload-572602/id_rsa Username:docker}
	I0416 01:05:11.822229   61500 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 01:05:11.850495   61500 node_ready.go:35] waiting up to 6m0s for node "no-preload-572602" to be "Ready" ...
	I0416 01:05:11.868828   61500 node_ready.go:49] node "no-preload-572602" has status "Ready":"True"
	I0416 01:05:11.868852   61500 node_ready.go:38] duration metric: took 18.327813ms for node "no-preload-572602" to be "Ready" ...
	I0416 01:05:11.868860   61500 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:05:11.877018   61500 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:11.884190   61500 pod_ready.go:92] pod "etcd-no-preload-572602" in "kube-system" namespace has status "Ready":"True"
	I0416 01:05:11.884221   61500 pod_ready.go:81] duration metric: took 7.173699ms for pod "etcd-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:11.884234   61500 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:11.901639   61500 pod_ready.go:92] pod "kube-apiserver-no-preload-572602" in "kube-system" namespace has status "Ready":"True"
	I0416 01:05:11.901672   61500 pod_ready.go:81] duration metric: took 17.430111ms for pod "kube-apiserver-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:11.901684   61500 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:11.911839   61500 pod_ready.go:92] pod "kube-controller-manager-no-preload-572602" in "kube-system" namespace has status "Ready":"True"
	I0416 01:05:11.911871   61500 pod_ready.go:81] duration metric: took 10.178219ms for pod "kube-controller-manager-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:11.911885   61500 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:11.936265   61500 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0416 01:05:11.936293   61500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0416 01:05:11.939406   61500 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0416 01:05:11.942233   61500 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 01:05:11.963094   61500 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0416 01:05:11.963123   61500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0416 01:05:12.027316   61500 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0416 01:05:12.027341   61500 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0416 01:05:12.150413   61500 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0416 01:05:12.387284   61500 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:12.387310   61500 main.go:141] libmachine: (no-preload-572602) Calling .Close
	I0416 01:05:12.387640   61500 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:12.387665   61500 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:12.387674   61500 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:12.387682   61500 main.go:141] libmachine: (no-preload-572602) Calling .Close
	I0416 01:05:12.387973   61500 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:12.387991   61500 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:12.395148   61500 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:12.395179   61500 main.go:141] libmachine: (no-preload-572602) Calling .Close
	I0416 01:05:12.395459   61500 main.go:141] libmachine: (no-preload-572602) DBG | Closing plugin on server side
	I0416 01:05:12.395488   61500 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:12.395508   61500 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:12.930331   61500 pod_ready.go:92] pod "kube-scheduler-no-preload-572602" in "kube-system" namespace has status "Ready":"True"
	I0416 01:05:12.930362   61500 pod_ready.go:81] duration metric: took 1.01846846s for pod "kube-scheduler-no-preload-572602" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:12.930373   61500 pod_ready.go:38] duration metric: took 1.061502471s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:05:12.930390   61500 api_server.go:52] waiting for apiserver process to appear ...
	I0416 01:05:12.930454   61500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:05:12.990840   61500 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.048571147s)
	I0416 01:05:12.990905   61500 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:12.990919   61500 main.go:141] libmachine: (no-preload-572602) Calling .Close
	I0416 01:05:12.991246   61500 main.go:141] libmachine: (no-preload-572602) DBG | Closing plugin on server side
	I0416 01:05:12.991309   61500 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:12.991323   61500 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:12.991380   61500 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:12.991391   61500 main.go:141] libmachine: (no-preload-572602) Calling .Close
	I0416 01:05:12.991617   61500 main.go:141] libmachine: (no-preload-572602) DBG | Closing plugin on server side
	I0416 01:05:12.991669   61500 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:12.991690   61500 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:13.719959   61500 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.569495387s)
	I0416 01:05:13.720018   61500 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:13.720023   61500 api_server.go:72] duration metric: took 2.12614679s to wait for apiserver process to appear ...
	I0416 01:05:13.720046   61500 api_server.go:88] waiting for apiserver healthz status ...
	I0416 01:05:13.720066   61500 api_server.go:253] Checking apiserver healthz at https://192.168.39.121:8443/healthz ...
	I0416 01:05:13.720034   61500 main.go:141] libmachine: (no-preload-572602) Calling .Close
	I0416 01:05:13.720435   61500 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:13.720458   61500 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:13.720469   61500 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:13.720472   61500 main.go:141] libmachine: (no-preload-572602) DBG | Closing plugin on server side
	I0416 01:05:13.720477   61500 main.go:141] libmachine: (no-preload-572602) Calling .Close
	I0416 01:05:13.720670   61500 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:13.720681   61500 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:13.720691   61500 addons.go:470] Verifying addon metrics-server=true in "no-preload-572602"
	I0416 01:05:13.722348   61500 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0416 01:05:13.723686   61500 addons.go:505] duration metric: took 2.129734353s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0416 01:05:13.764481   61500 api_server.go:279] https://192.168.39.121:8443/healthz returned 200:
	ok
	I0416 01:05:13.771661   61500 api_server.go:141] control plane version: v1.30.0-rc.2
	I0416 01:05:13.771690   61500 api_server.go:131] duration metric: took 51.637739ms to wait for apiserver health ...
	I0416 01:05:13.771698   61500 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 01:05:13.812701   61500 system_pods.go:59] 9 kube-system pods found
	I0416 01:05:13.812744   61500 system_pods.go:61] "coredns-7db6d8ff4d-2b5ht" [b8d48a4c-6efd-409a-98be-3ec5bf639470] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 01:05:13.812753   61500 system_pods.go:61] "coredns-7db6d8ff4d-p62sn" [36768eb2-2a22-48e1-b271-f262aa64e014] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 01:05:13.812761   61500 system_pods.go:61] "etcd-no-preload-572602" [c9ed4f86-07f3-48d6-948c-8c4243920512] Running
	I0416 01:05:13.812765   61500 system_pods.go:61] "kube-apiserver-no-preload-572602" [a92513a3-4129-41a2-a603-4a69f4e72041] Running
	I0416 01:05:13.812768   61500 system_pods.go:61] "kube-controller-manager-no-preload-572602" [ce013e5b-5d3c-42de-8a00-c7041288740b] Running
	I0416 01:05:13.812774   61500 system_pods.go:61] "kube-proxy-6cjlc" [2c4d9303-8c08-4385-a6b9-63dda0d9a274] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0416 01:05:13.812777   61500 system_pods.go:61] "kube-scheduler-no-preload-572602" [a9f71ca2-f211-4e6d-9940-4e0af5d4287e] Running
	I0416 01:05:13.812783   61500 system_pods.go:61] "metrics-server-569cc877fc-5j5rc" [3d8f1a41-8e7d-4d1b-9a07-25c8fac3b782] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 01:05:13.812792   61500 system_pods.go:61] "storage-provisioner" [b9ac9c93-0e50-4598-a9c4-a12e4ff14063] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0416 01:05:13.812802   61500 system_pods.go:74] duration metric: took 41.098881ms to wait for pod list to return data ...
	I0416 01:05:13.812811   61500 default_sa.go:34] waiting for default service account to be created ...
	I0416 01:05:13.847288   61500 default_sa.go:45] found service account: "default"
	I0416 01:05:13.847323   61500 default_sa.go:55] duration metric: took 34.500938ms for default service account to be created ...
	I0416 01:05:13.847335   61500 system_pods.go:116] waiting for k8s-apps to be running ...
	I0416 01:05:13.877107   61500 system_pods.go:86] 9 kube-system pods found
	I0416 01:05:13.877150   61500 system_pods.go:89] "coredns-7db6d8ff4d-2b5ht" [b8d48a4c-6efd-409a-98be-3ec5bf639470] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 01:05:13.877175   61500 system_pods.go:89] "coredns-7db6d8ff4d-p62sn" [36768eb2-2a22-48e1-b271-f262aa64e014] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 01:05:13.877185   61500 system_pods.go:89] "etcd-no-preload-572602" [c9ed4f86-07f3-48d6-948c-8c4243920512] Running
	I0416 01:05:13.877194   61500 system_pods.go:89] "kube-apiserver-no-preload-572602" [a92513a3-4129-41a2-a603-4a69f4e72041] Running
	I0416 01:05:13.877200   61500 system_pods.go:89] "kube-controller-manager-no-preload-572602" [ce013e5b-5d3c-42de-8a00-c7041288740b] Running
	I0416 01:05:13.877209   61500 system_pods.go:89] "kube-proxy-6cjlc" [2c4d9303-8c08-4385-a6b9-63dda0d9a274] Running
	I0416 01:05:13.877215   61500 system_pods.go:89] "kube-scheduler-no-preload-572602" [a9f71ca2-f211-4e6d-9940-4e0af5d4287e] Running
	I0416 01:05:13.877224   61500 system_pods.go:89] "metrics-server-569cc877fc-5j5rc" [3d8f1a41-8e7d-4d1b-9a07-25c8fac3b782] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 01:05:13.877237   61500 system_pods.go:89] "storage-provisioner" [b9ac9c93-0e50-4598-a9c4-a12e4ff14063] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0416 01:05:13.877257   61500 retry.go:31] will retry after 239.706522ms: missing components: kube-dns
	I0416 01:05:14.128770   61500 system_pods.go:86] 9 kube-system pods found
	I0416 01:05:14.128814   61500 system_pods.go:89] "coredns-7db6d8ff4d-2b5ht" [b8d48a4c-6efd-409a-98be-3ec5bf639470] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 01:05:14.128827   61500 system_pods.go:89] "coredns-7db6d8ff4d-p62sn" [36768eb2-2a22-48e1-b271-f262aa64e014] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0416 01:05:14.128836   61500 system_pods.go:89] "etcd-no-preload-572602" [c9ed4f86-07f3-48d6-948c-8c4243920512] Running
	I0416 01:05:14.128850   61500 system_pods.go:89] "kube-apiserver-no-preload-572602" [a92513a3-4129-41a2-a603-4a69f4e72041] Running
	I0416 01:05:14.128857   61500 system_pods.go:89] "kube-controller-manager-no-preload-572602" [ce013e5b-5d3c-42de-8a00-c7041288740b] Running
	I0416 01:05:14.128864   61500 system_pods.go:89] "kube-proxy-6cjlc" [2c4d9303-8c08-4385-a6b9-63dda0d9a274] Running
	I0416 01:05:14.128871   61500 system_pods.go:89] "kube-scheduler-no-preload-572602" [a9f71ca2-f211-4e6d-9940-4e0af5d4287e] Running
	I0416 01:05:14.128885   61500 system_pods.go:89] "metrics-server-569cc877fc-5j5rc" [3d8f1a41-8e7d-4d1b-9a07-25c8fac3b782] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 01:05:14.128893   61500 system_pods.go:89] "storage-provisioner" [b9ac9c93-0e50-4598-a9c4-a12e4ff14063] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0416 01:05:14.128903   61500 system_pods.go:126] duration metric: took 281.561287ms to wait for k8s-apps to be running ...
	I0416 01:05:14.128912   61500 system_svc.go:44] waiting for kubelet service to be running ....
	I0416 01:05:14.128978   61500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 01:05:14.145557   61500 system_svc.go:56] duration metric: took 16.639555ms WaitForService to wait for kubelet
	I0416 01:05:14.145582   61500 kubeadm.go:576] duration metric: took 2.551711031s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 01:05:14.145605   61500 node_conditions.go:102] verifying NodePressure condition ...
	I0416 01:05:14.149984   61500 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 01:05:14.150009   61500 node_conditions.go:123] node cpu capacity is 2
	I0416 01:05:14.150021   61500 node_conditions.go:105] duration metric: took 4.410684ms to run NodePressure ...
	I0416 01:05:14.150034   61500 start.go:240] waiting for startup goroutines ...
	I0416 01:05:14.150044   61500 start.go:245] waiting for cluster config update ...
	I0416 01:05:14.150064   61500 start.go:254] writing updated cluster config ...
	I0416 01:05:14.150354   61500 ssh_runner.go:195] Run: rm -f paused
	I0416 01:05:14.198605   61500 start.go:600] kubectl: 1.29.3, cluster: 1.30.0-rc.2 (minor skew: 1)
	I0416 01:05:14.200584   61500 out.go:177] * Done! kubectl is now configured to use "no-preload-572602" cluster and "default" namespace by default
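	The startup verification above waits for the apiserver healthz endpoint (https://192.168.39.121:8443/healthz) to return 200 before moving on to the kube-system pod and service-account checks. A minimal Go sketch of that kind of poll follows; it is an illustration only, not minikube's api_server.go, and it skips TLS verification because the sketch does not load the cluster CA.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it answers 200 OK or the timeout expires.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The sketch does not load the cluster CA, so verification is skipped.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // corresponds to the "returned 200: ok" line above
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.121:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("apiserver healthz: ok")
	}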
	I0416 01:05:14.258629   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:05:14.258807   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:05:19.748784   62747 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.135339447s)
	I0416 01:05:19.748866   62747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 01:05:19.766280   62747 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 01:05:19.777541   62747 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 01:05:19.788086   62747 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 01:05:19.788112   62747 kubeadm.go:156] found existing configuration files:
	
	I0416 01:05:19.788154   62747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 01:05:19.798135   62747 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 01:05:19.798211   62747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 01:05:19.809231   62747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 01:05:19.819447   62747 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 01:05:19.819519   62747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 01:05:19.830223   62747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 01:05:19.840460   62747 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 01:05:19.840528   62747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 01:05:19.851506   62747 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 01:05:19.861422   62747 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 01:05:19.861481   62747 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
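	The grep/rm sequence above is the stale-config cleanup: each kubeconfig under /etc/kubernetes is checked for the expected control-plane endpoint and removed when the check fails, so the `kubeadm init` that follows can regenerate it. A rough local sketch of that pattern (an assumption: run directly on the host rather than through minikube's ssh_runner on the guest VM) looks like this:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Endpoint and paths are taken from the log lines above.
		endpoint := "https://control-plane.minikube.internal:8443"
		confs := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, conf := range confs {
			// grep exits non-zero when the file is missing or the endpoint is absent,
			// which is the "may not be in ... - will remove" case in the log.
			if err := exec.Command("grep", endpoint, conf).Run(); err != nil {
				fmt.Printf("%s does not reference %s, removing\n", conf, endpoint)
				_ = os.Remove(conf) // a missing file is fine; kubeadm init recreates it
			}
		}
	}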
	I0416 01:05:19.871239   62747 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 01:05:20.089849   62747 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 01:05:29.079351   62747 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0416 01:05:29.079435   62747 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 01:05:29.079534   62747 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 01:05:29.079679   62747 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 01:05:29.079817   62747 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 01:05:29.079934   62747 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 01:05:29.081701   62747 out.go:204]   - Generating certificates and keys ...
	I0416 01:05:29.081801   62747 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 01:05:29.081922   62747 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 01:05:29.082035   62747 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0416 01:05:29.082125   62747 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0416 01:05:29.082300   62747 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0416 01:05:29.082404   62747 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0416 01:05:29.082504   62747 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0416 01:05:29.082556   62747 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0416 01:05:29.082621   62747 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0416 01:05:29.082737   62747 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0416 01:05:29.082798   62747 kubeadm.go:309] [certs] Using the existing "sa" key
	I0416 01:05:29.082867   62747 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 01:05:29.082955   62747 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 01:05:29.083042   62747 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0416 01:05:29.083129   62747 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 01:05:29.083209   62747 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 01:05:29.083278   62747 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 01:05:29.083385   62747 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 01:05:29.083467   62747 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 01:05:29.085050   62747 out.go:204]   - Booting up control plane ...
	I0416 01:05:29.085178   62747 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 01:05:29.085289   62747 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 01:05:29.085374   62747 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 01:05:29.085499   62747 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 01:05:29.085610   62747 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 01:05:29.085671   62747 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 01:05:29.085942   62747 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 01:05:29.086066   62747 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003717 seconds
	I0416 01:05:29.086227   62747 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0416 01:05:29.086384   62747 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0416 01:05:29.086474   62747 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0416 01:05:29.086755   62747 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-617092 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0416 01:05:29.086843   62747 kubeadm.go:309] [bootstrap-token] Using token: 33ihar.pt6l329bwmm6yhnr
	I0416 01:05:29.088273   62747 out.go:204]   - Configuring RBAC rules ...
	I0416 01:05:29.088408   62747 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0416 01:05:29.088516   62747 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0416 01:05:29.088712   62747 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0416 01:05:29.088898   62747 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0416 01:05:29.089046   62747 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0416 01:05:29.089196   62747 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0416 01:05:29.089346   62747 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0416 01:05:29.089413   62747 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0416 01:05:29.089486   62747 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0416 01:05:29.089496   62747 kubeadm.go:309] 
	I0416 01:05:29.089581   62747 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0416 01:05:29.089591   62747 kubeadm.go:309] 
	I0416 01:05:29.089707   62747 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0416 01:05:29.089719   62747 kubeadm.go:309] 
	I0416 01:05:29.089768   62747 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0416 01:05:29.089855   62747 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0416 01:05:29.089932   62747 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0416 01:05:29.089942   62747 kubeadm.go:309] 
	I0416 01:05:29.090020   62747 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0416 01:05:29.090041   62747 kubeadm.go:309] 
	I0416 01:05:29.090111   62747 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0416 01:05:29.090120   62747 kubeadm.go:309] 
	I0416 01:05:29.090193   62747 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0416 01:05:29.090350   62747 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0416 01:05:29.090434   62747 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0416 01:05:29.090445   62747 kubeadm.go:309] 
	I0416 01:05:29.090560   62747 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0416 01:05:29.090661   62747 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0416 01:05:29.090667   62747 kubeadm.go:309] 
	I0416 01:05:29.090773   62747 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 33ihar.pt6l329bwmm6yhnr \
	I0416 01:05:29.090921   62747 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b25013486716312e69a135916990864bd549ec06035b8322dd39250241b50fde \
	I0416 01:05:29.090942   62747 kubeadm.go:309] 	--control-plane 
	I0416 01:05:29.090948   62747 kubeadm.go:309] 
	I0416 01:05:29.091017   62747 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0416 01:05:29.091034   62747 kubeadm.go:309] 
	I0416 01:05:29.091153   62747 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 33ihar.pt6l329bwmm6yhnr \
	I0416 01:05:29.091299   62747 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b25013486716312e69a135916990864bd549ec06035b8322dd39250241b50fde 
	I0416 01:05:29.091313   62747 cni.go:84] Creating CNI manager for ""
	I0416 01:05:29.091323   62747 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 01:05:29.094154   62747 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0416 01:05:29.095747   62747 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0416 01:05:29.153706   62747 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0416 01:05:29.195477   62747 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0416 01:05:29.195540   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:29.195540   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-617092 minikube.k8s.io/updated_at=2024_04_16T01_05_29_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=e98a202b00bb48daab8bbee2f0c7bcf1556f8388 minikube.k8s.io/name=embed-certs-617092 minikube.k8s.io/primary=true
	I0416 01:05:29.551888   62747 ops.go:34] apiserver oom_adj: -16
	I0416 01:05:29.552023   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:30.053117   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:30.552298   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:31.052317   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:31.553057   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:32.052852   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:32.552921   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:34.259492   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:05:34.259704   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
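	The repeated [kubelet-check] failures above come from probing the kubelet's local health endpoint on port 10248; the log shows the check as a curl, and the sketch below is a hedged Go equivalent standing in for kubeadm's internal probe, not its actual code.

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 2 * time.Second}
		// Port 10248 is the kubelet's localhost healthz port checked by kubeadm.
		resp, err := client.Get("http://localhost:10248/healthz")
		if err != nil {
			// Matches the failure above: connection refused while the kubelet
			// is not running (or not yet serving).
			fmt.Println("kubelet not healthy:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("kubelet healthz:", resp.Status)
	}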
	I0416 01:05:33.052747   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:33.552301   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:34.052922   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:34.552338   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:35.052106   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:35.552911   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:36.052814   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:36.552077   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:37.052666   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:37.552057   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:38.053198   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:38.552163   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:39.052589   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:39.552701   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:40.053069   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:40.552436   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:41.053071   62747 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:41.158552   62747 kubeadm.go:1107] duration metric: took 11.963074905s to wait for elevateKubeSystemPrivileges
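	The burst of `kubectl get sa default` runs above (roughly every 500ms) is the wait for the default service account to appear, which is what the elevateKubeSystemPrivileges duration line summarizes. A hedged sketch of the same wait is below; the kubeconfig path is taken from the log, and using client-go instead of the kubectl binary is an assumption made for the example.

	package main

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path as used by the kubectl invocations in the log.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			_, err := client.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
			if err == nil {
				fmt.Println("default service account exists")
				return
			}
			time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the log
		}
		fmt.Println("timed out waiting for the default service account")
	}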
	W0416 01:05:41.158601   62747 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0416 01:05:41.158611   62747 kubeadm.go:393] duration metric: took 5m14.369080866s to StartCluster
	I0416 01:05:41.158638   62747 settings.go:142] acquiring lock: {Name:mk6e42a297b4f7bfb79727f203ae36d752cbb6a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:05:41.158736   62747 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0416 01:05:41.160903   62747 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/kubeconfig: {Name:mkbb3b028de7d57df8335e83f6dfa1b0eacb2fb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:05:41.161229   62747 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.225 Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0416 01:05:41.163312   62747 out.go:177] * Verifying Kubernetes components...
	I0416 01:05:40.562916   61267 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.348033752s)
	I0416 01:05:40.562991   61267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 01:05:40.580700   61267 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0416 01:05:40.592069   61267 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 01:05:40.606450   61267 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 01:05:40.606477   61267 kubeadm.go:156] found existing configuration files:
	
	I0416 01:05:40.606531   61267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0416 01:05:40.617547   61267 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 01:05:40.617622   61267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 01:05:40.631465   61267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0416 01:05:40.644464   61267 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 01:05:40.644553   61267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 01:05:40.655929   61267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0416 01:05:40.664995   61267 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 01:05:40.665059   61267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 01:05:40.674477   61267 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0416 01:05:40.683500   61267 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 01:05:40.683570   61267 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 01:05:40.693774   61267 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 01:05:40.753612   61267 kubeadm.go:309] [init] Using Kubernetes version: v1.29.3
	I0416 01:05:40.753717   61267 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 01:05:40.911483   61267 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 01:05:40.911609   61267 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 01:05:40.911748   61267 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 01:05:41.170137   61267 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 01:05:41.161331   62747 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0416 01:05:41.161434   62747 config.go:182] Loaded profile config "embed-certs-617092": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 01:05:41.165023   62747 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-617092"
	I0416 01:05:41.165044   62747 addons.go:69] Setting metrics-server=true in profile "embed-certs-617092"
	I0416 01:05:41.165081   62747 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-617092"
	I0416 01:05:41.165084   62747 addons.go:234] Setting addon metrics-server=true in "embed-certs-617092"
	W0416 01:05:41.165090   62747 addons.go:243] addon storage-provisioner should already be in state true
	W0416 01:05:41.165091   62747 addons.go:243] addon metrics-server should already be in state true
	I0416 01:05:41.165117   62747 host.go:66] Checking if "embed-certs-617092" exists ...
	I0416 01:05:41.165052   62747 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0416 01:05:41.165025   62747 addons.go:69] Setting default-storageclass=true in profile "embed-certs-617092"
	I0416 01:05:41.165117   62747 host.go:66] Checking if "embed-certs-617092" exists ...
	I0416 01:05:41.165174   62747 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-617092"
	I0416 01:05:41.165464   62747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:41.165480   62747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:41.165549   62747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:41.165569   62747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:41.165549   62747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:41.165651   62747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:41.183063   62747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46083
	I0416 01:05:41.183551   62747 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:41.184135   62747 main.go:141] libmachine: Using API Version  1
	I0416 01:05:41.184158   62747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:41.184578   62747 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:41.185298   62747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:41.185337   62747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:41.185763   62747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43633
	I0416 01:05:41.185823   62747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46197
	I0416 01:05:41.186233   62747 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:41.186400   62747 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:41.186701   62747 main.go:141] libmachine: Using API Version  1
	I0416 01:05:41.186726   62747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:41.186861   62747 main.go:141] libmachine: Using API Version  1
	I0416 01:05:41.186881   62747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:41.187211   62747 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:41.187233   62747 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:41.187415   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetState
	I0416 01:05:41.187763   62747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:41.187781   62747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:41.191018   62747 addons.go:234] Setting addon default-storageclass=true in "embed-certs-617092"
	W0416 01:05:41.191038   62747 addons.go:243] addon default-storageclass should already be in state true
	I0416 01:05:41.191068   62747 host.go:66] Checking if "embed-certs-617092" exists ...
	I0416 01:05:41.191346   62747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:41.191384   62747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:41.202643   62747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45285
	I0416 01:05:41.203122   62747 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:41.203607   62747 main.go:141] libmachine: Using API Version  1
	I0416 01:05:41.203627   62747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:41.203952   62747 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:41.204124   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetState
	I0416 01:05:41.204325   62747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45643
	I0416 01:05:41.204721   62747 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:41.205188   62747 main.go:141] libmachine: Using API Version  1
	I0416 01:05:41.205207   62747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:41.205860   62747 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:41.206056   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetState
	I0416 01:05:41.206084   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 01:05:41.208051   62747 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0416 01:05:41.209179   62747 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0416 01:05:41.209197   62747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0416 01:05:41.207724   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 01:05:41.209214   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:05:41.210728   62747 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 01:05:41.171860   61267 out.go:204]   - Generating certificates and keys ...
	I0416 01:05:41.171969   61267 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 01:05:41.172043   61267 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 01:05:41.172139   61267 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0416 01:05:41.172803   61267 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0416 01:05:41.173065   61267 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0416 01:05:41.173653   61267 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0416 01:05:41.174077   61267 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0416 01:05:41.174586   61267 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0416 01:05:41.175034   61267 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0416 01:05:41.175570   61267 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0416 01:05:41.175888   61267 kubeadm.go:309] [certs] Using the existing "sa" key
	I0416 01:05:41.175968   61267 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 01:05:41.439471   61267 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 01:05:41.524693   61267 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0416 01:05:42.001762   61267 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 01:05:42.139805   61267 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 01:05:42.198091   61267 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 01:05:42.198762   61267 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 01:05:42.202915   61267 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 01:05:42.204549   61267 out.go:204]   - Booting up control plane ...
	I0416 01:05:42.204673   61267 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 01:05:42.204816   61267 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 01:05:42.205761   61267 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 01:05:42.225187   61267 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 01:05:42.225917   61267 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 01:05:42.225972   61267 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 01:05:42.367087   61267 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 01:05:41.210575   62747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34385
	I0416 01:05:41.211905   62747 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 01:05:41.211923   62747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0416 01:05:41.211942   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:05:41.212835   62747 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:41.212972   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:05:41.213577   62747 main.go:141] libmachine: Using API Version  1
	I0416 01:05:41.213597   62747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:41.213610   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:05:41.213628   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:05:41.214039   62747 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:41.214657   62747 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:05:41.214693   62747 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:05:41.215005   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:05:41.215635   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:05:41.215905   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:05:41.215933   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:05:41.216058   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:05:41.216109   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:05:41.216242   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:05:41.216303   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:05:41.216447   62747 sshutil.go:53] new ssh client: &{IP:192.168.61.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/embed-certs-617092/id_rsa Username:docker}
	I0416 01:05:41.216466   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:05:41.216544   62747 sshutil.go:53] new ssh client: &{IP:192.168.61.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/embed-certs-617092/id_rsa Username:docker}
	I0416 01:05:41.236284   62747 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40007
	I0416 01:05:41.237670   62747 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:05:41.238270   62747 main.go:141] libmachine: Using API Version  1
	I0416 01:05:41.238288   62747 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:05:41.241258   62747 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:05:41.241453   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetState
	I0416 01:05:41.243397   62747 main.go:141] libmachine: (embed-certs-617092) Calling .DriverName
	I0416 01:05:41.243724   62747 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0416 01:05:41.243740   62747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0416 01:05:41.243758   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHHostname
	I0416 01:05:41.247426   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:05:41.248034   62747 main.go:141] libmachine: (embed-certs-617092) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:1b:62", ip: ""} in network mk-embed-certs-617092: {Iface:virbr1 ExpiryTime:2024-04-16 01:54:32 +0000 UTC Type:0 Mac:52:54:00:86:1b:62 Iaid: IPaddr:192.168.61.225 Prefix:24 Hostname:embed-certs-617092 Clientid:01:52:54:00:86:1b:62}
	I0416 01:05:41.248144   62747 main.go:141] libmachine: (embed-certs-617092) DBG | domain embed-certs-617092 has defined IP address 192.168.61.225 and MAC address 52:54:00:86:1b:62 in network mk-embed-certs-617092
	I0416 01:05:41.248423   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHPort
	I0416 01:05:41.249376   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHKeyPath
	I0416 01:05:41.249600   62747 main.go:141] libmachine: (embed-certs-617092) Calling .GetSSHUsername
	I0416 01:05:41.249799   62747 sshutil.go:53] new ssh client: &{IP:192.168.61.225 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/embed-certs-617092/id_rsa Username:docker}
	I0416 01:05:41.414823   62747 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 01:05:41.436007   62747 node_ready.go:35] waiting up to 6m0s for node "embed-certs-617092" to be "Ready" ...
	I0416 01:05:41.452344   62747 node_ready.go:49] node "embed-certs-617092" has status "Ready":"True"
	I0416 01:05:41.452370   62747 node_ready.go:38] duration metric: took 16.328329ms for node "embed-certs-617092" to be "Ready" ...
	I0416 01:05:41.452382   62747 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:05:41.467673   62747 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:41.477985   62747 pod_ready.go:92] pod "etcd-embed-certs-617092" in "kube-system" namespace has status "Ready":"True"
	I0416 01:05:41.478019   62747 pod_ready.go:81] duration metric: took 10.312538ms for pod "etcd-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:41.478032   62747 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:41.485978   62747 pod_ready.go:92] pod "kube-apiserver-embed-certs-617092" in "kube-system" namespace has status "Ready":"True"
	I0416 01:05:41.486003   62747 pod_ready.go:81] duration metric: took 7.961029ms for pod "kube-apiserver-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:41.486015   62747 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:41.491586   62747 pod_ready.go:92] pod "kube-controller-manager-embed-certs-617092" in "kube-system" namespace has status "Ready":"True"
	I0416 01:05:41.491608   62747 pod_ready.go:81] duration metric: took 5.584682ms for pod "kube-controller-manager-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:41.491619   62747 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-p4rh9" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:41.591874   62747 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0416 01:05:41.630528   62747 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0416 01:05:41.630554   62747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0416 01:05:41.653822   62747 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 01:05:41.718742   62747 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0416 01:05:41.718775   62747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0416 01:05:41.750701   62747 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0416 01:05:41.750725   62747 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0416 01:05:41.798873   62747 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0416 01:05:41.961373   62747 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:41.961415   62747 main.go:141] libmachine: (embed-certs-617092) Calling .Close
	I0416 01:05:41.961857   62747 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:41.961879   62747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:41.961890   62747 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:41.961909   62747 main.go:141] libmachine: (embed-certs-617092) Calling .Close
	I0416 01:05:41.962200   62747 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:41.962205   62747 main.go:141] libmachine: (embed-certs-617092) DBG | Closing plugin on server side
	I0416 01:05:41.962216   62747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:41.974163   62747 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:41.974189   62747 main.go:141] libmachine: (embed-certs-617092) Calling .Close
	I0416 01:05:41.974517   62747 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:41.974537   62747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:42.721070   62747 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.067206266s)
	I0416 01:05:42.721119   62747 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:42.721130   62747 main.go:141] libmachine: (embed-certs-617092) Calling .Close
	I0416 01:05:42.721551   62747 main.go:141] libmachine: (embed-certs-617092) DBG | Closing plugin on server side
	I0416 01:05:42.721594   62747 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:42.721613   62747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:42.721636   62747 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:42.721648   62747 main.go:141] libmachine: (embed-certs-617092) Calling .Close
	I0416 01:05:42.721972   62747 main.go:141] libmachine: (embed-certs-617092) DBG | Closing plugin on server side
	I0416 01:05:42.721987   62747 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:42.722006   62747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:43.123544   62747 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.324616723s)
	I0416 01:05:43.123593   62747 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:43.123608   62747 main.go:141] libmachine: (embed-certs-617092) Calling .Close
	I0416 01:05:43.123867   62747 main.go:141] libmachine: (embed-certs-617092) DBG | Closing plugin on server side
	I0416 01:05:43.123906   62747 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:43.123913   62747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:43.123922   62747 main.go:141] libmachine: Making call to close driver server
	I0416 01:05:43.123928   62747 main.go:141] libmachine: (embed-certs-617092) Calling .Close
	I0416 01:05:43.124218   62747 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:05:43.124234   62747 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:05:43.124234   62747 main.go:141] libmachine: (embed-certs-617092) DBG | Closing plugin on server side
	I0416 01:05:43.124255   62747 addons.go:470] Verifying addon metrics-server=true in "embed-certs-617092"
	I0416 01:05:43.125829   62747 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0416 01:05:43.127138   62747 addons.go:505] duration metric: took 1.965815007s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0416 01:05:43.536374   62747 pod_ready.go:102] pod "kube-proxy-p4rh9" in "kube-system" namespace has status "Ready":"False"
	I0416 01:05:44.000571   62747 pod_ready.go:92] pod "kube-proxy-p4rh9" in "kube-system" namespace has status "Ready":"True"
	I0416 01:05:44.000594   62747 pod_ready.go:81] duration metric: took 2.508967748s for pod "kube-proxy-p4rh9" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:44.000603   62747 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:44.006516   62747 pod_ready.go:92] pod "kube-scheduler-embed-certs-617092" in "kube-system" namespace has status "Ready":"True"
	I0416 01:05:44.006540   62747 pod_ready.go:81] duration metric: took 5.930755ms for pod "kube-scheduler-embed-certs-617092" in "kube-system" namespace to be "Ready" ...
	I0416 01:05:44.006546   62747 pod_ready.go:38] duration metric: took 2.554153393s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:05:44.006560   62747 api_server.go:52] waiting for apiserver process to appear ...
	I0416 01:05:44.006612   62747 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:05:44.030705   62747 api_server.go:72] duration metric: took 2.869432993s to wait for apiserver process to appear ...
	I0416 01:05:44.030737   62747 api_server.go:88] waiting for apiserver healthz status ...
	I0416 01:05:44.030759   62747 api_server.go:253] Checking apiserver healthz at https://192.168.61.225:8443/healthz ...
	I0416 01:05:44.035576   62747 api_server.go:279] https://192.168.61.225:8443/healthz returned 200:
	ok
	I0416 01:05:44.037948   62747 api_server.go:141] control plane version: v1.29.3
	I0416 01:05:44.037973   62747 api_server.go:131] duration metric: took 7.228106ms to wait for apiserver health ...
	I0416 01:05:44.037983   62747 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 01:05:44.044543   62747 system_pods.go:59] 9 kube-system pods found
	I0416 01:05:44.044574   62747 system_pods.go:61] "coredns-76f75df574-2q58l" [e9b9d000-738b-4110-8757-17f76197285c] Running
	I0416 01:05:44.044581   62747 system_pods.go:61] "coredns-76f75df574-h8k4k" [1b114848-1137-4215-a966-03db39e4de23] Running
	I0416 01:05:44.044586   62747 system_pods.go:61] "etcd-embed-certs-617092" [f65e9307-4e12-4ac4-baca-7e1cfd7415d5] Running
	I0416 01:05:44.044591   62747 system_pods.go:61] "kube-apiserver-embed-certs-617092" [f55e02ce-45cf-4f6e-b8d7-7f305f22ea52] Running
	I0416 01:05:44.044596   62747 system_pods.go:61] "kube-controller-manager-embed-certs-617092" [d16739c1-36f4-4748-8533-fcc6cea0adee] Running
	I0416 01:05:44.044601   62747 system_pods.go:61] "kube-proxy-p4rh9" [42041028-d085-4ec4-8213-da3af0d5290e] Running
	I0416 01:05:44.044606   62747 system_pods.go:61] "kube-scheduler-embed-certs-617092" [d61e24fe-a5e3-41bf-b212-75764a036a26] Running
	I0416 01:05:44.044614   62747 system_pods.go:61] "metrics-server-57f55c9bc5-j5clp" [99808b2d-344f-43b7-a29c-01f0a2026aa8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 01:05:44.044623   62747 system_pods.go:61] "storage-provisioner" [5a62c0f7-0b15-48f3-9c17-d5966d39fbd5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0416 01:05:44.044635   62747 system_pods.go:74] duration metric: took 6.6454ms to wait for pod list to return data ...
	I0416 01:05:44.044652   62747 default_sa.go:34] waiting for default service account to be created ...
	I0416 01:05:44.241344   62747 default_sa.go:45] found service account: "default"
	I0416 01:05:44.241370   62747 default_sa.go:55] duration metric: took 196.710973ms for default service account to be created ...
	I0416 01:05:44.241379   62747 system_pods.go:116] waiting for k8s-apps to be running ...
	I0416 01:05:44.450798   62747 system_pods.go:86] 9 kube-system pods found
	I0416 01:05:44.450825   62747 system_pods.go:89] "coredns-76f75df574-2q58l" [e9b9d000-738b-4110-8757-17f76197285c] Running
	I0416 01:05:44.450831   62747 system_pods.go:89] "coredns-76f75df574-h8k4k" [1b114848-1137-4215-a966-03db39e4de23] Running
	I0416 01:05:44.450835   62747 system_pods.go:89] "etcd-embed-certs-617092" [f65e9307-4e12-4ac4-baca-7e1cfd7415d5] Running
	I0416 01:05:44.450839   62747 system_pods.go:89] "kube-apiserver-embed-certs-617092" [f55e02ce-45cf-4f6e-b8d7-7f305f22ea52] Running
	I0416 01:05:44.450844   62747 system_pods.go:89] "kube-controller-manager-embed-certs-617092" [d16739c1-36f4-4748-8533-fcc6cea0adee] Running
	I0416 01:05:44.450848   62747 system_pods.go:89] "kube-proxy-p4rh9" [42041028-d085-4ec4-8213-da3af0d5290e] Running
	I0416 01:05:44.450851   62747 system_pods.go:89] "kube-scheduler-embed-certs-617092" [d61e24fe-a5e3-41bf-b212-75764a036a26] Running
	I0416 01:05:44.450858   62747 system_pods.go:89] "metrics-server-57f55c9bc5-j5clp" [99808b2d-344f-43b7-a29c-01f0a2026aa8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 01:05:44.450864   62747 system_pods.go:89] "storage-provisioner" [5a62c0f7-0b15-48f3-9c17-d5966d39fbd5] Running
	I0416 01:05:44.450871   62747 system_pods.go:126] duration metric: took 209.487599ms to wait for k8s-apps to be running ...
	I0416 01:05:44.450889   62747 system_svc.go:44] waiting for kubelet service to be running ....
	I0416 01:05:44.450943   62747 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 01:05:44.470820   62747 system_svc.go:56] duration metric: took 19.925743ms WaitForService to wait for kubelet
	I0416 01:05:44.470853   62747 kubeadm.go:576] duration metric: took 3.309585995s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 01:05:44.470876   62747 node_conditions.go:102] verifying NodePressure condition ...
	I0416 01:05:44.642093   62747 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 01:05:44.642123   62747 node_conditions.go:123] node cpu capacity is 2
	I0416 01:05:44.642135   62747 node_conditions.go:105] duration metric: took 171.253415ms to run NodePressure ...
	I0416 01:05:44.642149   62747 start.go:240] waiting for startup goroutines ...
	I0416 01:05:44.642158   62747 start.go:245] waiting for cluster config update ...
	I0416 01:05:44.642171   62747 start.go:254] writing updated cluster config ...
	I0416 01:05:44.642519   62747 ssh_runner.go:195] Run: rm -f paused
	I0416 01:05:44.707141   62747 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0416 01:05:44.709274   62747 out.go:177] * Done! kubectl is now configured to use "embed-certs-617092" cluster and "default" namespace by default
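	The embed-certs-617092 start above finishes with three gates: the node reports Ready, the system-critical pods report Ready, and the apiserver answers https://192.168.61.225:8443/healthz with 200/ok (logged at 01:05:44.035). As a rough, standalone sketch of that last probe only — this is not minikube's actual api_server.go code, and the endpoint, timeout, and TLS handling are assumptions for illustration:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// probeHealthz mirrors the shape of the healthz check logged above: GET the
	// /healthz endpoint and treat a 200 response with body "ok" as healthy.
	func probeHealthz(url string) (bool, error) {
		client := &http.Client{
			Timeout: 5 * time.Second, // assumed timeout, not taken from the log
			Transport: &http.Transport{
				// The apiserver cert is signed by the cluster CA, which this sketch
				// does not load, so verification is skipped here for brevity only.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(url)
		if err != nil {
			return false, err
		}
		defer resp.Body.Close()
		body, err := io.ReadAll(resp.Body)
		if err != nil {
			return false, err
		}
		return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
	}

	func main() {
		healthy, err := probeHealthz("https://192.168.61.225:8443/healthz")
		fmt.Println(healthy, err)
	}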
	I0416 01:05:48.372574   61267 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.002543 seconds
	I0416 01:05:48.385076   61267 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0416 01:05:48.406058   61267 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0416 01:05:48.938329   61267 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0416 01:05:48.938556   61267 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-653942 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0416 01:05:49.458321   61267 kubeadm.go:309] [bootstrap-token] Using token: 5ddaoe.tvzldvzlkbeta1a9
	I0416 01:05:49.459891   61267 out.go:204]   - Configuring RBAC rules ...
	I0416 01:05:49.460064   61267 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0416 01:05:49.465799   61267 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0416 01:05:49.477346   61267 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0416 01:05:49.482154   61267 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0416 01:05:49.485769   61267 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0416 01:05:49.489199   61267 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0416 01:05:49.504774   61267 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0416 01:05:49.770133   61267 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0416 01:05:49.872777   61267 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0416 01:05:49.874282   61267 kubeadm.go:309] 
	I0416 01:05:49.874384   61267 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0416 01:05:49.874400   61267 kubeadm.go:309] 
	I0416 01:05:49.874560   61267 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0416 01:05:49.874580   61267 kubeadm.go:309] 
	I0416 01:05:49.874602   61267 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0416 01:05:49.874673   61267 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0416 01:05:49.874754   61267 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0416 01:05:49.874766   61267 kubeadm.go:309] 
	I0416 01:05:49.874853   61267 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0416 01:05:49.874878   61267 kubeadm.go:309] 
	I0416 01:05:49.874944   61267 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0416 01:05:49.874956   61267 kubeadm.go:309] 
	I0416 01:05:49.875019   61267 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0416 01:05:49.875141   61267 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0416 01:05:49.875246   61267 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0416 01:05:49.875257   61267 kubeadm.go:309] 
	I0416 01:05:49.875432   61267 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0416 01:05:49.875552   61267 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0416 01:05:49.875562   61267 kubeadm.go:309] 
	I0416 01:05:49.875657   61267 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token 5ddaoe.tvzldvzlkbeta1a9 \
	I0416 01:05:49.875754   61267 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b25013486716312e69a135916990864bd549ec06035b8322dd39250241b50fde \
	I0416 01:05:49.875774   61267 kubeadm.go:309] 	--control-plane 
	I0416 01:05:49.875780   61267 kubeadm.go:309] 
	I0416 01:05:49.875859   61267 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0416 01:05:49.875869   61267 kubeadm.go:309] 
	I0416 01:05:49.875949   61267 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token 5ddaoe.tvzldvzlkbeta1a9 \
	I0416 01:05:49.876085   61267 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:b25013486716312e69a135916990864bd549ec06035b8322dd39250241b50fde 
	I0416 01:05:49.876640   61267 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 01:05:49.876666   61267 cni.go:84] Creating CNI manager for ""
	I0416 01:05:49.876676   61267 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0416 01:05:49.878703   61267 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0416 01:05:49.880070   61267 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0416 01:05:49.897752   61267 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0416 01:05:49.969146   61267 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0416 01:05:49.969228   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:49.969228   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-653942 minikube.k8s.io/updated_at=2024_04_16T01_05_49_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=e98a202b00bb48daab8bbee2f0c7bcf1556f8388 minikube.k8s.io/name=default-k8s-diff-port-653942 minikube.k8s.io/primary=true
	I0416 01:05:50.233119   61267 ops.go:34] apiserver oom_adj: -16
	I0416 01:05:50.233262   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:50.733748   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:51.234361   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:51.733704   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:52.233367   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:52.733789   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:53.234012   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:53.733458   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:54.233341   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:54.734148   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:55.233710   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:55.734135   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:56.233315   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:56.734162   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:57.233899   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:57.733337   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:58.234101   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:58.734357   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:59.233831   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:05:59.733286   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:06:00.233847   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:06:00.733872   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:06:01.233935   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:06:01.733629   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:06:02.233967   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:06:02.734163   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:06:03.233294   61267 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0416 01:06:03.412834   61267 kubeadm.go:1107] duration metric: took 13.44368469s to wait for elevateKubeSystemPrivileges
	W0416 01:06:03.412896   61267 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0416 01:06:03.412907   61267 kubeadm.go:393] duration metric: took 5m17.8108087s to StartCluster
	I0416 01:06:03.412926   61267 settings.go:142] acquiring lock: {Name:mk6e42a297b4f7bfb79727f203ae36d752cbb6a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:06:03.413003   61267 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0416 01:06:03.414974   61267 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/kubeconfig: {Name:mkbb3b028de7d57df8335e83f6dfa1b0eacb2fb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0416 01:06:03.415299   61267 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.216 Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0416 01:06:03.417148   61267 out.go:177] * Verifying Kubernetes components...
	I0416 01:06:03.415390   61267 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0416 01:06:03.415510   61267 config.go:182] Loaded profile config "default-k8s-diff-port-653942": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 01:06:03.417238   61267 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-653942"
	I0416 01:06:03.419134   61267 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-653942"
	W0416 01:06:03.419147   61267 addons.go:243] addon storage-provisioner should already be in state true
	I0416 01:06:03.417247   61267 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-653942"
	I0416 01:06:03.419188   61267 host.go:66] Checking if "default-k8s-diff-port-653942" exists ...
	I0416 01:06:03.419214   61267 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-653942"
	I0416 01:06:03.417245   61267 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-653942"
	I0416 01:06:03.419095   61267 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	W0416 01:06:03.419262   61267 addons.go:243] addon metrics-server should already be in state true
	I0416 01:06:03.419307   61267 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-653942"
	I0416 01:06:03.419327   61267 host.go:66] Checking if "default-k8s-diff-port-653942" exists ...
	I0416 01:06:03.419606   61267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:06:03.419644   61267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:06:03.419662   61267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:06:03.419698   61267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:06:03.419722   61267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:06:03.419756   61267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:06:03.435784   61267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44663
	I0416 01:06:03.435800   61267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37505
	I0416 01:06:03.436294   61267 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:06:03.436296   61267 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:06:03.436811   61267 main.go:141] libmachine: Using API Version  1
	I0416 01:06:03.436838   61267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:06:03.437097   61267 main.go:141] libmachine: Using API Version  1
	I0416 01:06:03.437115   61267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:06:03.437203   61267 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:06:03.437683   61267 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:06:03.437757   61267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:06:03.437790   61267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:06:03.438213   61267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33329
	I0416 01:06:03.438248   61267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:06:03.438273   61267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:06:03.438786   61267 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:06:03.439301   61267 main.go:141] libmachine: Using API Version  1
	I0416 01:06:03.439332   61267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:06:03.439810   61267 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:06:03.440162   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetState
	I0416 01:06:03.443879   61267 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-653942"
	W0416 01:06:03.443906   61267 addons.go:243] addon default-storageclass should already be in state true
	I0416 01:06:03.443941   61267 host.go:66] Checking if "default-k8s-diff-port-653942" exists ...
	I0416 01:06:03.444301   61267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:06:03.444340   61267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:06:03.454673   61267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43261
	I0416 01:06:03.455111   61267 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:06:03.455715   61267 main.go:141] libmachine: Using API Version  1
	I0416 01:06:03.455742   61267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:06:03.456116   61267 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:06:03.456318   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetState
	I0416 01:06:03.457870   61267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39341
	I0416 01:06:03.458086   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 01:06:03.458278   61267 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:06:03.462516   61267 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0416 01:06:03.458862   61267 main.go:141] libmachine: Using API Version  1
	I0416 01:06:03.460354   61267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43753
	I0416 01:06:03.464491   61267 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0416 01:06:03.464509   61267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0416 01:06:03.464529   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:06:03.464551   61267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:06:03.464960   61267 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:06:03.465281   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetState
	I0416 01:06:03.465552   61267 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:06:03.466181   61267 main.go:141] libmachine: Using API Version  1
	I0416 01:06:03.466205   61267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:06:03.466760   61267 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:06:03.467410   61267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 01:06:03.467435   61267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 01:06:03.467638   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 01:06:03.469647   61267 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0416 01:06:03.471009   61267 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 01:06:03.471024   61267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0416 01:06:03.469242   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:06:03.471040   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:06:03.469767   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:06:03.471070   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:06:03.471133   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:06:03.471297   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:06:03.471478   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:06:03.471661   61267 sshutil.go:53] new ssh client: &{IP:192.168.50.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/default-k8s-diff-port-653942/id_rsa Username:docker}
	I0416 01:06:03.473778   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:06:03.474203   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:06:03.474226   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:06:03.474421   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:06:03.474605   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:06:03.474784   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:06:03.474958   61267 sshutil.go:53] new ssh client: &{IP:192.168.50.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/default-k8s-diff-port-653942/id_rsa Username:docker}
	I0416 01:06:03.485829   61267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46571
	I0416 01:06:03.486293   61267 main.go:141] libmachine: () Calling .GetVersion
	I0416 01:06:03.486876   61267 main.go:141] libmachine: Using API Version  1
	I0416 01:06:03.486900   61267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 01:06:03.487362   61267 main.go:141] libmachine: () Calling .GetMachineName
	I0416 01:06:03.487535   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetState
	I0416 01:06:03.489207   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .DriverName
	I0416 01:06:03.489529   61267 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0416 01:06:03.489549   61267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0416 01:06:03.489568   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHHostname
	I0416 01:06:03.492570   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:06:03.492932   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4b:a2:47", ip: ""} in network mk-default-k8s-diff-port-653942: {Iface:virbr4 ExpiryTime:2024-04-16 02:00:32 +0000 UTC Type:0 Mac:52:54:00:4b:a2:47 Iaid: IPaddr:192.168.50.216 Prefix:24 Hostname:default-k8s-diff-port-653942 Clientid:01:52:54:00:4b:a2:47}
	I0416 01:06:03.492958   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | domain default-k8s-diff-port-653942 has defined IP address 192.168.50.216 and MAC address 52:54:00:4b:a2:47 in network mk-default-k8s-diff-port-653942
	I0416 01:06:03.493224   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHPort
	I0416 01:06:03.493379   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHKeyPath
	I0416 01:06:03.493557   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .GetSSHUsername
	I0416 01:06:03.493673   61267 sshutil.go:53] new ssh client: &{IP:192.168.50.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/default-k8s-diff-port-653942/id_rsa Username:docker}
	I0416 01:06:03.680085   61267 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0416 01:06:03.724011   61267 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-653942" to be "Ready" ...
	I0416 01:06:03.739131   61267 node_ready.go:49] node "default-k8s-diff-port-653942" has status "Ready":"True"
	I0416 01:06:03.739152   61267 node_ready.go:38] duration metric: took 15.111832ms for node "default-k8s-diff-port-653942" to be "Ready" ...
	I0416 01:06:03.739161   61267 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:06:03.748081   61267 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-5nnpv" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:03.810063   61267 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0416 01:06:03.810090   61267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0416 01:06:03.812595   61267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0416 01:06:03.848165   61267 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0416 01:06:03.848187   61267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0416 01:06:03.991110   61267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0416 01:06:03.997100   61267 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0416 01:06:03.997133   61267 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0416 01:06:04.093267   61267 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0416 01:06:04.349978   61267 main.go:141] libmachine: Making call to close driver server
	I0416 01:06:04.350011   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .Close
	I0416 01:06:04.350336   61267 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:06:04.350396   61267 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:06:04.350415   61267 main.go:141] libmachine: Making call to close driver server
	I0416 01:06:04.350420   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | Closing plugin on server side
	I0416 01:06:04.350425   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .Close
	I0416 01:06:04.350683   61267 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:06:04.350699   61267 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:06:04.416648   61267 main.go:141] libmachine: Making call to close driver server
	I0416 01:06:04.416674   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .Close
	I0416 01:06:04.416982   61267 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:06:04.417001   61267 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:06:05.206973   61267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.113663167s)
	I0416 01:06:05.207025   61267 main.go:141] libmachine: Making call to close driver server
	I0416 01:06:05.207040   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .Close
	I0416 01:06:05.207039   61267 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.215892308s)
	I0416 01:06:05.207078   61267 main.go:141] libmachine: Making call to close driver server
	I0416 01:06:05.207090   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .Close
	I0416 01:06:05.207371   61267 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:06:05.207388   61267 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:06:05.207397   61267 main.go:141] libmachine: Making call to close driver server
	I0416 01:06:05.207405   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .Close
	I0416 01:06:05.207445   61267 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:06:05.207462   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | Closing plugin on server side
	I0416 01:06:05.207466   61267 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:06:05.207490   61267 main.go:141] libmachine: Making call to close driver server
	I0416 01:06:05.207508   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) Calling .Close
	I0416 01:06:05.207610   61267 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:06:05.207644   61267 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:06:05.207654   61267 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-653942"
	I0416 01:06:05.207654   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | Closing plugin on server side
	I0416 01:06:05.209411   61267 main.go:141] libmachine: (default-k8s-diff-port-653942) DBG | Closing plugin on server side
	I0416 01:06:05.209402   61267 main.go:141] libmachine: Successfully made call to close driver server
	I0416 01:06:05.209469   61267 main.go:141] libmachine: Making call to close connection to plugin binary
	I0416 01:06:05.212071   61267 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0416 01:06:05.213412   61267 addons.go:505] duration metric: took 1.798038731s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I0416 01:06:05.256497   61267 pod_ready.go:92] pod "coredns-76f75df574-5nnpv" in "kube-system" namespace has status "Ready":"True"
	I0416 01:06:05.256526   61267 pod_ready.go:81] duration metric: took 1.508419977s for pod "coredns-76f75df574-5nnpv" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.256538   61267 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-zpnhs" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.262092   61267 pod_ready.go:92] pod "coredns-76f75df574-zpnhs" in "kube-system" namespace has status "Ready":"True"
	I0416 01:06:05.262112   61267 pod_ready.go:81] duration metric: took 5.566499ms for pod "coredns-76f75df574-zpnhs" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.262121   61267 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.267256   61267 pod_ready.go:92] pod "etcd-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"True"
	I0416 01:06:05.267278   61267 pod_ready.go:81] duration metric: took 5.149782ms for pod "etcd-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.267286   61267 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.272119   61267 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"True"
	I0416 01:06:05.272144   61267 pod_ready.go:81] duration metric: took 4.851008ms for pod "kube-apiserver-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.272155   61267 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.328440   61267 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"True"
	I0416 01:06:05.328470   61267 pod_ready.go:81] duration metric: took 56.30531ms for pod "kube-controller-manager-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.328482   61267 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mg5km" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.729518   61267 pod_ready.go:92] pod "kube-proxy-mg5km" in "kube-system" namespace has status "Ready":"True"
	I0416 01:06:05.729544   61267 pod_ready.go:81] duration metric: took 401.055058ms for pod "kube-proxy-mg5km" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:05.729553   61267 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:06.127535   61267 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace has status "Ready":"True"
	I0416 01:06:06.127558   61267 pod_ready.go:81] duration metric: took 397.998988ms for pod "kube-scheduler-default-k8s-diff-port-653942" in "kube-system" namespace to be "Ready" ...
	I0416 01:06:06.127565   61267 pod_ready.go:38] duration metric: took 2.388395448s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0416 01:06:06.127577   61267 api_server.go:52] waiting for apiserver process to appear ...
	I0416 01:06:06.127620   61267 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 01:06:06.150179   61267 api_server.go:72] duration metric: took 2.734842767s to wait for apiserver process to appear ...
	I0416 01:06:06.150208   61267 api_server.go:88] waiting for apiserver healthz status ...
	I0416 01:06:06.150226   61267 api_server.go:253] Checking apiserver healthz at https://192.168.50.216:8444/healthz ...
	I0416 01:06:06.154310   61267 api_server.go:279] https://192.168.50.216:8444/healthz returned 200:
	ok
	I0416 01:06:06.155393   61267 api_server.go:141] control plane version: v1.29.3
	I0416 01:06:06.155409   61267 api_server.go:131] duration metric: took 5.194458ms to wait for apiserver health ...
	I0416 01:06:06.155421   61267 system_pods.go:43] waiting for kube-system pods to appear ...
	I0416 01:06:06.333873   61267 system_pods.go:59] 9 kube-system pods found
	I0416 01:06:06.333909   61267 system_pods.go:61] "coredns-76f75df574-5nnpv" [3350aca5-639e-44a1-bd84-d1e4b6486143] Running
	I0416 01:06:06.333914   61267 system_pods.go:61] "coredns-76f75df574-zpnhs" [990672b6-bb3a-4f91-8de7-7c2ec224c94a] Running
	I0416 01:06:06.333917   61267 system_pods.go:61] "etcd-default-k8s-diff-port-653942" [e72e89e9-c274-4d4d-b1f9-43bea95cd015] Running
	I0416 01:06:06.333920   61267 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-653942" [c1652126-b4c2-41cf-a574-9784f7800374] Running
	I0416 01:06:06.333923   61267 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-653942" [1f43936c-ba39-44f9-b9b7-2a149f26a880] Running
	I0416 01:06:06.333926   61267 system_pods.go:61] "kube-proxy-mg5km" [74764194-1f31-40b1-90b5-497e248ab7da] Running
	I0416 01:06:06.333929   61267 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-653942" [48058ade-c30d-4dc9-b6c0-32b2ed5fc88a] Running
	I0416 01:06:06.333935   61267 system_pods.go:61] "metrics-server-57f55c9bc5-6jn29" [1eec2ffb-ce59-45cb-b6b4-cd010549510e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 01:06:06.333938   61267 system_pods.go:61] "storage-provisioner" [d131c1fc-9124-4b46-a16f-a8fb5029a57b] Running
	I0416 01:06:06.333947   61267 system_pods.go:74] duration metric: took 178.520515ms to wait for pod list to return data ...
	I0416 01:06:06.333953   61267 default_sa.go:34] waiting for default service account to be created ...
	I0416 01:06:06.528119   61267 default_sa.go:45] found service account: "default"
	I0416 01:06:06.528148   61267 default_sa.go:55] duration metric: took 194.18786ms for default service account to be created ...
	I0416 01:06:06.528158   61267 system_pods.go:116] waiting for k8s-apps to be running ...
	I0416 01:06:06.731573   61267 system_pods.go:86] 9 kube-system pods found
	I0416 01:06:06.731600   61267 system_pods.go:89] "coredns-76f75df574-5nnpv" [3350aca5-639e-44a1-bd84-d1e4b6486143] Running
	I0416 01:06:06.731606   61267 system_pods.go:89] "coredns-76f75df574-zpnhs" [990672b6-bb3a-4f91-8de7-7c2ec224c94a] Running
	I0416 01:06:06.731610   61267 system_pods.go:89] "etcd-default-k8s-diff-port-653942" [e72e89e9-c274-4d4d-b1f9-43bea95cd015] Running
	I0416 01:06:06.731614   61267 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-653942" [c1652126-b4c2-41cf-a574-9784f7800374] Running
	I0416 01:06:06.731619   61267 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-653942" [1f43936c-ba39-44f9-b9b7-2a149f26a880] Running
	I0416 01:06:06.731622   61267 system_pods.go:89] "kube-proxy-mg5km" [74764194-1f31-40b1-90b5-497e248ab7da] Running
	I0416 01:06:06.731626   61267 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-653942" [48058ade-c30d-4dc9-b6c0-32b2ed5fc88a] Running
	I0416 01:06:06.731633   61267 system_pods.go:89] "metrics-server-57f55c9bc5-6jn29" [1eec2ffb-ce59-45cb-b6b4-cd010549510e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0416 01:06:06.731638   61267 system_pods.go:89] "storage-provisioner" [d131c1fc-9124-4b46-a16f-a8fb5029a57b] Running
	I0416 01:06:06.731649   61267 system_pods.go:126] duration metric: took 203.485273ms to wait for k8s-apps to be running ...
	I0416 01:06:06.731659   61267 system_svc.go:44] waiting for kubelet service to be running ....
	I0416 01:06:06.731700   61267 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 01:06:06.749013   61267 system_svc.go:56] duration metric: took 17.343008ms WaitForService to wait for kubelet
	I0416 01:06:06.749048   61267 kubeadm.go:576] duration metric: took 3.333716529s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0416 01:06:06.749072   61267 node_conditions.go:102] verifying NodePressure condition ...
	I0416 01:06:06.927701   61267 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0416 01:06:06.927725   61267 node_conditions.go:123] node cpu capacity is 2
	I0416 01:06:06.927735   61267 node_conditions.go:105] duration metric: took 178.65899ms to run NodePressure ...
	I0416 01:06:06.927746   61267 start.go:240] waiting for startup goroutines ...
	I0416 01:06:06.927754   61267 start.go:245] waiting for cluster config update ...
	I0416 01:06:06.927763   61267 start.go:254] writing updated cluster config ...
	I0416 01:06:06.928000   61267 ssh_runner.go:195] Run: rm -f paused
	I0416 01:06:06.978823   61267 start.go:600] kubectl: 1.29.3, cluster: 1.29.3 (minor skew: 0)
	I0416 01:06:06.981011   61267 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-653942" cluster and "default" namespace by default
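	In the default-k8s-diff-port-653942 run above, the bridge CNI step ("Configuring bridge CNI (Container Networking Interface)", 01:05:49) copies a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist, but the log does not show the file's contents. A typical bridge-plus-host-local conflist of roughly that shape looks like the sketch below; the subnet, flags, and plugin ordering are assumptions for illustration, not values read from this run:

	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": { "portMappings": true }
	    }
	  ]
	}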
	I0416 01:06:14.261576   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:06:14.261834   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:06:14.261849   62139 kubeadm.go:309] 
	I0416 01:06:14.261890   62139 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0416 01:06:14.261973   62139 kubeadm.go:309] 		timed out waiting for the condition
	I0416 01:06:14.262006   62139 kubeadm.go:309] 
	I0416 01:06:14.262051   62139 kubeadm.go:309] 	This error is likely caused by:
	I0416 01:06:14.262082   62139 kubeadm.go:309] 		- The kubelet is not running
	I0416 01:06:14.262174   62139 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0416 01:06:14.262199   62139 kubeadm.go:309] 
	I0416 01:06:14.262357   62139 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0416 01:06:14.262414   62139 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0416 01:06:14.262471   62139 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0416 01:06:14.262481   62139 kubeadm.go:309] 
	I0416 01:06:14.262610   62139 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0416 01:06:14.262707   62139 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0416 01:06:14.262717   62139 kubeadm.go:309] 
	I0416 01:06:14.262867   62139 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0416 01:06:14.263010   62139 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0416 01:06:14.263142   62139 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0416 01:06:14.263211   62139 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0416 01:06:14.263234   62139 kubeadm.go:309] 
	I0416 01:06:14.264084   62139 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 01:06:14.264204   62139 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0416 01:06:14.264312   62139 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0416 01:06:14.264460   62139 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0416 01:06:14.264526   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0416 01:06:15.653692   62139 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.389136497s)
	I0416 01:06:15.653831   62139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 01:06:15.669141   62139 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0416 01:06:15.679485   62139 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0416 01:06:15.679511   62139 kubeadm.go:156] found existing configuration files:
	
	I0416 01:06:15.679556   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0416 01:06:15.689898   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0416 01:06:15.689974   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0416 01:06:15.700563   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0416 01:06:15.710363   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0416 01:06:15.710445   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0416 01:06:15.719877   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0416 01:06:15.728947   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0416 01:06:15.729002   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0416 01:06:15.739360   62139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0416 01:06:15.749479   62139 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0416 01:06:15.749557   62139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0416 01:06:15.760930   62139 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0416 01:06:16.000974   62139 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0416 01:08:12.327133   62139 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0416 01:08:12.327246   62139 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0416 01:08:12.328995   62139 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0416 01:08:12.329092   62139 kubeadm.go:309] [preflight] Running pre-flight checks
	I0416 01:08:12.329220   62139 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0416 01:08:12.329302   62139 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0416 01:08:12.329440   62139 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0416 01:08:12.329537   62139 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0416 01:08:12.331381   62139 out.go:204]   - Generating certificates and keys ...
	I0416 01:08:12.331474   62139 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0416 01:08:12.331558   62139 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0416 01:08:12.331658   62139 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0416 01:08:12.331742   62139 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0416 01:08:12.331830   62139 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0416 01:08:12.331910   62139 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0416 01:08:12.331968   62139 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0416 01:08:12.332020   62139 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0416 01:08:12.332085   62139 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0416 01:08:12.332159   62139 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0416 01:08:12.332210   62139 kubeadm.go:309] [certs] Using the existing "sa" key
	I0416 01:08:12.332297   62139 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0416 01:08:12.332376   62139 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0416 01:08:12.332466   62139 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0416 01:08:12.332547   62139 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0416 01:08:12.332642   62139 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0416 01:08:12.332790   62139 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0416 01:08:12.332895   62139 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0416 01:08:12.332938   62139 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0416 01:08:12.333002   62139 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0416 01:08:12.334632   62139 out.go:204]   - Booting up control plane ...
	I0416 01:08:12.334737   62139 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0416 01:08:12.334837   62139 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0416 01:08:12.334928   62139 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0416 01:08:12.335009   62139 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0416 01:08:12.335162   62139 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0416 01:08:12.335241   62139 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0416 01:08:12.335333   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:08:12.335541   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:08:12.335613   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:08:12.335771   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:08:12.335848   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:08:12.336035   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:08:12.336109   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:08:12.336365   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:08:12.336438   62139 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0416 01:08:12.336704   62139 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0416 01:08:12.336716   62139 kubeadm.go:309] 
	I0416 01:08:12.336779   62139 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0416 01:08:12.336827   62139 kubeadm.go:309] 		timed out waiting for the condition
	I0416 01:08:12.336834   62139 kubeadm.go:309] 
	I0416 01:08:12.336883   62139 kubeadm.go:309] 	This error is likely caused by:
	I0416 01:08:12.336922   62139 kubeadm.go:309] 		- The kubelet is not running
	I0416 01:08:12.337025   62139 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0416 01:08:12.337036   62139 kubeadm.go:309] 
	I0416 01:08:12.337145   62139 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0416 01:08:12.337211   62139 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0416 01:08:12.337245   62139 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0416 01:08:12.337253   62139 kubeadm.go:309] 
	I0416 01:08:12.337340   62139 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0416 01:08:12.337428   62139 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0416 01:08:12.337436   62139 kubeadm.go:309] 
	I0416 01:08:12.337529   62139 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0416 01:08:12.337602   62139 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0416 01:08:12.337701   62139 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0416 01:08:12.337870   62139 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0416 01:08:12.337957   62139 kubeadm.go:393] duration metric: took 8m4.174818047s to StartCluster
	I0416 01:08:12.337969   62139 kubeadm.go:309] 
	I0416 01:08:12.338009   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0416 01:08:12.338067   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0416 01:08:12.391937   62139 cri.go:89] found id: ""
	I0416 01:08:12.391963   62139 logs.go:276] 0 containers: []
	W0416 01:08:12.391986   62139 logs.go:278] No container was found matching "kube-apiserver"
	I0416 01:08:12.391994   62139 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0416 01:08:12.392072   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0416 01:08:12.430575   62139 cri.go:89] found id: ""
	I0416 01:08:12.430602   62139 logs.go:276] 0 containers: []
	W0416 01:08:12.430616   62139 logs.go:278] No container was found matching "etcd"
	I0416 01:08:12.430623   62139 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0416 01:08:12.430685   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0416 01:08:12.469115   62139 cri.go:89] found id: ""
	I0416 01:08:12.469143   62139 logs.go:276] 0 containers: []
	W0416 01:08:12.469152   62139 logs.go:278] No container was found matching "coredns"
	I0416 01:08:12.469173   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0416 01:08:12.469228   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0416 01:08:12.508599   62139 cri.go:89] found id: ""
	I0416 01:08:12.508630   62139 logs.go:276] 0 containers: []
	W0416 01:08:12.508640   62139 logs.go:278] No container was found matching "kube-scheduler"
	I0416 01:08:12.508648   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0416 01:08:12.508698   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0416 01:08:12.547785   62139 cri.go:89] found id: ""
	I0416 01:08:12.547817   62139 logs.go:276] 0 containers: []
	W0416 01:08:12.547829   62139 logs.go:278] No container was found matching "kube-proxy"
	I0416 01:08:12.547836   62139 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0416 01:08:12.547910   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0416 01:08:12.599526   62139 cri.go:89] found id: ""
	I0416 01:08:12.599549   62139 logs.go:276] 0 containers: []
	W0416 01:08:12.599557   62139 logs.go:278] No container was found matching "kube-controller-manager"
	I0416 01:08:12.599563   62139 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0416 01:08:12.599612   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0416 01:08:12.639914   62139 cri.go:89] found id: ""
	I0416 01:08:12.639944   62139 logs.go:276] 0 containers: []
	W0416 01:08:12.639954   62139 logs.go:278] No container was found matching "kindnet"
	I0416 01:08:12.639962   62139 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0416 01:08:12.640041   62139 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0416 01:08:12.676025   62139 cri.go:89] found id: ""
	I0416 01:08:12.676057   62139 logs.go:276] 0 containers: []
	W0416 01:08:12.676066   62139 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0416 01:08:12.676079   62139 logs.go:123] Gathering logs for describe nodes ...
	I0416 01:08:12.676100   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0416 01:08:12.774744   62139 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0416 01:08:12.774769   62139 logs.go:123] Gathering logs for CRI-O ...
	I0416 01:08:12.774785   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0416 01:08:12.902751   62139 logs.go:123] Gathering logs for container status ...
	I0416 01:08:12.902787   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0416 01:08:12.947370   62139 logs.go:123] Gathering logs for kubelet ...
	I0416 01:08:12.947406   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0416 01:08:13.002186   62139 logs.go:123] Gathering logs for dmesg ...
	I0416 01:08:13.002223   62139 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0416 01:08:13.017193   62139 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0416 01:08:13.017234   62139 out.go:239] * 
	W0416 01:08:13.017283   62139 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0416 01:08:13.017304   62139 out.go:239] * 
	W0416 01:08:13.018151   62139 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0416 01:08:13.021371   62139 out.go:177] 
	W0416 01:08:13.022572   62139 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0416 01:08:13.022640   62139 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0416 01:08:13.022670   62139 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0416 01:08:13.024248   62139 out.go:177] 
	
	
	==> CRI-O <==
	Apr 16 01:20:16 old-k8s-version-800769 crio[651]: time="2024-04-16 01:20:16.319921361Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713230416319887526,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e9caa252-caa0-4a34-9622-104abac5783b name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:20:16 old-k8s-version-800769 crio[651]: time="2024-04-16 01:20:16.320483473Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=012e73b5-8df1-437c-927b-5a62d28a84f0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:20:16 old-k8s-version-800769 crio[651]: time="2024-04-16 01:20:16.320560473Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=012e73b5-8df1-437c-927b-5a62d28a84f0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:20:16 old-k8s-version-800769 crio[651]: time="2024-04-16 01:20:16.320602751Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=012e73b5-8df1-437c-927b-5a62d28a84f0 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:20:16 old-k8s-version-800769 crio[651]: time="2024-04-16 01:20:16.352925087Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=87a4e741-884d-4b2c-a5c4-d7197968c788 name=/runtime.v1.RuntimeService/Version
	Apr 16 01:20:16 old-k8s-version-800769 crio[651]: time="2024-04-16 01:20:16.353035569Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=87a4e741-884d-4b2c-a5c4-d7197968c788 name=/runtime.v1.RuntimeService/Version
	Apr 16 01:20:16 old-k8s-version-800769 crio[651]: time="2024-04-16 01:20:16.354570368Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2662ef86-09d7-49d2-9fc0-364195f7f006 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:20:16 old-k8s-version-800769 crio[651]: time="2024-04-16 01:20:16.355057432Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713230416355018998,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2662ef86-09d7-49d2-9fc0-364195f7f006 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:20:16 old-k8s-version-800769 crio[651]: time="2024-04-16 01:20:16.355868786Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5091eb2a-6dd9-41e2-9eff-31a3a1ce3c22 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:20:16 old-k8s-version-800769 crio[651]: time="2024-04-16 01:20:16.355928100Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5091eb2a-6dd9-41e2-9eff-31a3a1ce3c22 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:20:16 old-k8s-version-800769 crio[651]: time="2024-04-16 01:20:16.355965540Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=5091eb2a-6dd9-41e2-9eff-31a3a1ce3c22 name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:20:16 old-k8s-version-800769 crio[651]: time="2024-04-16 01:20:16.388357494Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b0ff032c-74f3-4eb7-8124-640f7191ee98 name=/runtime.v1.RuntimeService/Version
	Apr 16 01:20:16 old-k8s-version-800769 crio[651]: time="2024-04-16 01:20:16.388448039Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b0ff032c-74f3-4eb7-8124-640f7191ee98 name=/runtime.v1.RuntimeService/Version
	Apr 16 01:20:16 old-k8s-version-800769 crio[651]: time="2024-04-16 01:20:16.389537827Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3858266a-9ceb-4f12-bcf5-b33acc48ff20 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:20:16 old-k8s-version-800769 crio[651]: time="2024-04-16 01:20:16.390010048Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713230416389975378,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3858266a-9ceb-4f12-bcf5-b33acc48ff20 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:20:16 old-k8s-version-800769 crio[651]: time="2024-04-16 01:20:16.390692329Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=66b1765b-f54e-40dd-8626-d8ea8fc192ca name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:20:16 old-k8s-version-800769 crio[651]: time="2024-04-16 01:20:16.390744139Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=66b1765b-f54e-40dd-8626-d8ea8fc192ca name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:20:16 old-k8s-version-800769 crio[651]: time="2024-04-16 01:20:16.390832307Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=66b1765b-f54e-40dd-8626-d8ea8fc192ca name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:20:16 old-k8s-version-800769 crio[651]: time="2024-04-16 01:20:16.424616016Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a8d8bab2-25ba-45da-ae0b-e29765a2d124 name=/runtime.v1.RuntimeService/Version
	Apr 16 01:20:16 old-k8s-version-800769 crio[651]: time="2024-04-16 01:20:16.424836716Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a8d8bab2-25ba-45da-ae0b-e29765a2d124 name=/runtime.v1.RuntimeService/Version
	Apr 16 01:20:16 old-k8s-version-800769 crio[651]: time="2024-04-16 01:20:16.426166556Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8951c41e-2d95-46a3-9a82-1acde38672e6 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:20:16 old-k8s-version-800769 crio[651]: time="2024-04-16 01:20:16.426645981Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1713230416426617116,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8951c41e-2d95-46a3-9a82-1acde38672e6 name=/runtime.v1.ImageService/ImageFsInfo
	Apr 16 01:20:16 old-k8s-version-800769 crio[651]: time="2024-04-16 01:20:16.427403972Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b9751c33-594a-4893-89f6-19cadf836cae name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:20:16 old-k8s-version-800769 crio[651]: time="2024-04-16 01:20:16.427520960Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b9751c33-594a-4893-89f6-19cadf836cae name=/runtime.v1.RuntimeService/ListContainers
	Apr 16 01:20:16 old-k8s-version-800769 crio[651]: time="2024-04-16 01:20:16.427582439Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b9751c33-594a-4893-89f6-19cadf836cae name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Apr16 00:59] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052487] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041260] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.659381] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.701128] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.498139] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.532362] systemd-fstab-generator[572]: Ignoring "noauto" option for root device
	[  +0.139625] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.184218] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.154369] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[Apr16 01:00] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +6.904893] systemd-fstab-generator[836]: Ignoring "noauto" option for root device
	[  +0.058661] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.131661] systemd-fstab-generator[962]: Ignoring "noauto" option for root device
	[ +13.736441] kauditd_printk_skb: 46 callbacks suppressed
	[Apr16 01:04] systemd-fstab-generator[5023]: Ignoring "noauto" option for root device
	[Apr16 01:06] systemd-fstab-generator[5299]: Ignoring "noauto" option for root device
	[  +0.072728] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 01:20:16 up 20 min,  0 users,  load average: 0.10, 0.05, 0.01
	Linux old-k8s-version-800769 5.10.207 #1 SMP Mon Apr 15 15:01:07 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Apr 16 01:20:11 old-k8s-version-800769 kubelet[6846]: k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).Do(0xc0000245a0, 0x4f7fe00, 0xc000120018, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	Apr 16 01:20:11 old-k8s-version-800769 kubelet[6846]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/rest/request.go:964 +0xf1
	Apr 16 01:20:11 old-k8s-version-800769 kubelet[6846]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.NewFilteredListWatchFromClient.func1(0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x48aa087, ...)
	Apr 16 01:20:11 old-k8s-version-800769 kubelet[6846]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/listwatch.go:87 +0x1e5
	Apr 16 01:20:11 old-k8s-version-800769 kubelet[6846]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*ListWatch).List(0xc000b93580, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	Apr 16 01:20:11 old-k8s-version-800769 kubelet[6846]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/listwatch.go:106 +0x78
	Apr 16 01:20:11 old-k8s-version-800769 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Apr 16 01:20:11 old-k8s-version-800769 kubelet[6846]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch.func1.1.2(0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x48aa087, ...)
	Apr 16 01:20:11 old-k8s-version-800769 kubelet[6846]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:277 +0x75
	Apr 16 01:20:11 old-k8s-version-800769 kubelet[6846]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/pager.SimplePageFunc.func1(0x4f7fe00, 0xc000120010, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	Apr 16 01:20:11 old-k8s-version-800769 kubelet[6846]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/pager/pager.go:40 +0x64
	Apr 16 01:20:11 old-k8s-version-800769 kubelet[6846]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/pager.(*ListPager).List(0xc000ca5e60, 0x4f7fe00, 0xc000120010, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	Apr 16 01:20:11 old-k8s-version-800769 kubelet[6846]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/pager/pager.go:91 +0x179
	Apr 16 01:20:11 old-k8s-version-800769 kubelet[6846]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch.func1.1(0xc000c54d80, 0xc0000d8620, 0xc000b917a0, 0xc000775f80, 0xc000be8b28, 0xc000775f90, 0xc000c3c900)
	Apr 16 01:20:11 old-k8s-version-800769 kubelet[6846]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:302 +0x1a5
	Apr 16 01:20:11 old-k8s-version-800769 kubelet[6846]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch.func1
	Apr 16 01:20:11 old-k8s-version-800769 kubelet[6846]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:268 +0x295
	Apr 16 01:20:11 old-k8s-version-800769 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 145.
	Apr 16 01:20:11 old-k8s-version-800769 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Apr 16 01:20:11 old-k8s-version-800769 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Apr 16 01:20:12 old-k8s-version-800769 kubelet[6856]: I0416 01:20:12.074347    6856 server.go:416] Version: v1.20.0
	Apr 16 01:20:12 old-k8s-version-800769 kubelet[6856]: I0416 01:20:12.074648    6856 server.go:837] Client rotation is on, will bootstrap in background
	Apr 16 01:20:12 old-k8s-version-800769 kubelet[6856]: I0416 01:20:12.076704    6856 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Apr 16 01:20:12 old-k8s-version-800769 kubelet[6856]: W0416 01:20:12.077660    6856 manager.go:159] Cannot detect current cgroup on cgroup v2
	Apr 16 01:20:12 old-k8s-version-800769 kubelet[6856]: I0416 01:20:12.077971    6856 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-800769 -n old-k8s-version-800769
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-800769 -n old-k8s-version-800769: exit status 2 (248.3452ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-800769" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (178.20s)
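A minimal follow-up sketch based on the hints already present in the log above (the profile name old-k8s-version-800769, the kubelet health probe, the crictl listing, and the cgroup-driver suggestion are all taken from the captured output; whether that flag actually resolves this run's failure remains an assumption):

	# on the node: minikube ssh -p old-k8s-version-800769
	systemctl status kubelet                  # is the unit running at all?
	journalctl -xeu kubelet | tail -n 50      # why it keeps restarting (restart counter is at 145 above)
	curl -sSL http://localhost:10248/healthz  # the health probe kubeadm was retrying
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

	# retry the profile with the cgroup driver minikube itself suggests
	minikube start -p old-k8s-version-800769 --extra-config=kubelet.cgroup-driver=systemd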


Test pass (258/327)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 31.68
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.17
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.16
12 TestDownloadOnly/v1.29.3/json-events 14
13 TestDownloadOnly/v1.29.3/preload-exists 0
17 TestDownloadOnly/v1.29.3/LogsDuration 0.09
18 TestDownloadOnly/v1.29.3/DeleteAll 0.18
19 TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds 0.17
21 TestDownloadOnly/v1.30.0-rc.2/json-events 11.56
22 TestDownloadOnly/v1.30.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.30.0-rc.2/LogsDuration 0.09
27 TestDownloadOnly/v1.30.0-rc.2/DeleteAll 0.17
28 TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds 0.16
30 TestBinaryMirror 0.64
31 TestOffline 102.27
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.09
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.09
36 TestAddons/Setup 222.9
38 TestAddons/parallel/Registry 19.29
40 TestAddons/parallel/InspektorGadget 12.64
41 TestAddons/parallel/MetricsServer 6.12
42 TestAddons/parallel/HelmTiller 17.8
44 TestAddons/parallel/CSI 68.05
45 TestAddons/parallel/Headlamp 18.42
46 TestAddons/parallel/CloudSpanner 5.78
47 TestAddons/parallel/LocalPath 24.42
48 TestAddons/parallel/NvidiaDevicePlugin 6.76
49 TestAddons/parallel/Yakd 5.01
52 TestAddons/serial/GCPAuth/Namespaces 0.14
54 TestCertOptions 68.66
55 TestCertExpiration 278.95
57 TestForceSystemdFlag 81.66
58 TestForceSystemdEnv 47.41
60 TestKVMDriverInstallOrUpdate 4.05
64 TestErrorSpam/setup 42.25
65 TestErrorSpam/start 0.35
66 TestErrorSpam/status 0.74
67 TestErrorSpam/pause 1.56
68 TestErrorSpam/unpause 1.59
69 TestErrorSpam/stop 4.68
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 98.25
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 122.57
76 TestFunctional/serial/KubeContext 0.04
77 TestFunctional/serial/KubectlGetPods 0.07
80 TestFunctional/serial/CacheCmd/cache/add_remote 3.48
81 TestFunctional/serial/CacheCmd/cache/add_local 2.15
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
83 TestFunctional/serial/CacheCmd/cache/list 0.05
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.67
86 TestFunctional/serial/CacheCmd/cache/delete 0.11
87 TestFunctional/serial/MinikubeKubectlCmd 0.11
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
89 TestFunctional/serial/ExtraConfig 52.98
90 TestFunctional/serial/ComponentHealth 0.06
91 TestFunctional/serial/LogsCmd 1.41
92 TestFunctional/serial/LogsFileCmd 1.38
93 TestFunctional/serial/InvalidService 5.51
95 TestFunctional/parallel/ConfigCmd 0.4
96 TestFunctional/parallel/DashboardCmd 22.68
97 TestFunctional/parallel/DryRun 0.31
98 TestFunctional/parallel/InternationalLanguage 0.16
99 TestFunctional/parallel/StatusCmd 1.26
103 TestFunctional/parallel/ServiceCmdConnect 10.53
104 TestFunctional/parallel/AddonsCmd 0.15
105 TestFunctional/parallel/PersistentVolumeClaim 44.13
107 TestFunctional/parallel/SSHCmd 0.44
108 TestFunctional/parallel/CpCmd 1.51
109 TestFunctional/parallel/MySQL 25.95
110 TestFunctional/parallel/FileSync 0.2
111 TestFunctional/parallel/CertSync 1.49
115 TestFunctional/parallel/NodeLabels 0.07
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.48
119 TestFunctional/parallel/License 0.42
120 TestFunctional/parallel/ServiceCmd/DeployApp 11.21
121 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
122 TestFunctional/parallel/MountCmd/any-port 11.74
123 TestFunctional/parallel/ProfileCmd/profile_list 0.29
124 TestFunctional/parallel/ProfileCmd/profile_json_output 0.36
125 TestFunctional/parallel/ServiceCmd/List 0.47
126 TestFunctional/parallel/ServiceCmd/JSONOutput 0.44
127 TestFunctional/parallel/ServiceCmd/HTTPS 0.31
128 TestFunctional/parallel/MountCmd/specific-port 2.04
129 TestFunctional/parallel/ServiceCmd/Format 0.28
130 TestFunctional/parallel/ServiceCmd/URL 0.28
140 TestFunctional/parallel/Version/short 0.06
141 TestFunctional/parallel/Version/components 0.49
142 TestFunctional/parallel/MountCmd/VerifyCleanup 1.85
143 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
144 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
145 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
146 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
147 TestFunctional/parallel/ImageCommands/ImageListTable 0.33
148 TestFunctional/parallel/ImageCommands/ImageListJson 0.37
149 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
150 TestFunctional/parallel/ImageCommands/ImageBuild 3.37
151 TestFunctional/parallel/ImageCommands/Setup 4.68
152 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.44
153 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.17
154 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 8.64
156 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
158 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.3
159 TestFunctional/delete_addon-resizer_images 0.06
160 TestFunctional/delete_my-image_image 0.01
161 TestFunctional/delete_minikube_cached_images 0.02
165 TestMultiControlPlane/serial/StartCluster 208.19
166 TestMultiControlPlane/serial/DeployApp 7.26
167 TestMultiControlPlane/serial/PingHostFromPods 1.31
168 TestMultiControlPlane/serial/AddWorkerNode 45.61
169 TestMultiControlPlane/serial/NodeLabels 0.07
170 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.55
171 TestMultiControlPlane/serial/CopyFile 13.35
173 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.5
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.4
177 TestMultiControlPlane/serial/DeleteSecondaryNode 17.3
178 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.38
180 TestMultiControlPlane/serial/RestartCluster 383.14
181 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.4
182 TestMultiControlPlane/serial/AddSecondaryNode 72.45
183 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.56
187 TestJSONOutput/start/Command 55.94
188 TestJSONOutput/start/Audit 0
190 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/pause/Command 0.77
194 TestJSONOutput/pause/Audit 0
196 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/unpause/Command 0.68
200 TestJSONOutput/unpause/Audit 0
202 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
205 TestJSONOutput/stop/Command 9.4
206 TestJSONOutput/stop/Audit 0
208 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
210 TestErrorJSONOutput 0.21
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 86.25
219 TestMountStart/serial/StartWithMountFirst 28.09
220 TestMountStart/serial/VerifyMountFirst 0.39
221 TestMountStart/serial/StartWithMountSecond 27.5
222 TestMountStart/serial/VerifyMountSecond 0.39
223 TestMountStart/serial/DeleteFirst 0.68
224 TestMountStart/serial/VerifyMountPostDelete 0.39
225 TestMountStart/serial/Stop 1.35
226 TestMountStart/serial/RestartStopped 23.3
227 TestMountStart/serial/VerifyMountPostStop 0.38
230 TestMultiNode/serial/FreshStart2Nodes 131.67
231 TestMultiNode/serial/DeployApp2Nodes 5.34
232 TestMultiNode/serial/PingHostFrom2Pods 0.84
233 TestMultiNode/serial/AddNode 40.27
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.22
236 TestMultiNode/serial/CopyFile 7.35
237 TestMultiNode/serial/StopNode 2.33
238 TestMultiNode/serial/StartAfterStop 29.75
240 TestMultiNode/serial/DeleteNode 2.47
242 TestMultiNode/serial/RestartMultiNode 175.29
243 TestMultiNode/serial/ValidateNameConflict 50.29
250 TestScheduledStopUnix 118.8
254 TestRunningBinaryUpgrade 215.87
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
260 TestNoKubernetes/serial/StartWithK8s 101.62
261 TestNoKubernetes/serial/StartWithStopK8s 23.26
262 TestStoppedBinaryUpgrade/Setup 2.01
263 TestStoppedBinaryUpgrade/Upgrade 105.63
264 TestNoKubernetes/serial/Start 35.76
265 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
266 TestNoKubernetes/serial/ProfileList 11.12
267 TestNoKubernetes/serial/Stop 1.46
268 TestNoKubernetes/serial/StartNoArgs 32.84
269 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
277 TestStoppedBinaryUpgrade/MinikubeLogs 0.93
285 TestNetworkPlugins/group/false 3.49
290 TestPause/serial/Start 87.02
291 TestPause/serial/SecondStartNoReconfiguration 50.68
295 TestStartStop/group/no-preload/serial/FirstStart 92.92
296 TestPause/serial/Pause 0.71
297 TestPause/serial/VerifyStatus 0.25
298 TestPause/serial/Unpause 0.68
299 TestPause/serial/PauseAgain 0.85
300 TestPause/serial/DeletePaused 1.02
301 TestPause/serial/VerifyDeletedResources 0.26
303 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 82.13
304 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.3
305 TestStartStop/group/no-preload/serial/DeployApp 10.3
306 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.01
308 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.04
311 TestStartStop/group/newest-cni/serial/FirstStart 59.56
312 TestStartStop/group/newest-cni/serial/DeployApp 0
313 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.07
314 TestStartStop/group/newest-cni/serial/Stop 11.32
315 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
316 TestStartStop/group/newest-cni/serial/SecondStart 40.6
317 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
318 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
319 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
320 TestStartStop/group/newest-cni/serial/Pause 2.35
322 TestStartStop/group/embed-certs/serial/FirstStart 102.94
325 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 684.9
328 TestStartStop/group/no-preload/serial/SecondStart 627
329 TestStartStop/group/embed-certs/serial/DeployApp 10.26
330 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.96
332 TestStartStop/group/old-k8s-version/serial/Stop 2.55
333 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
336 TestStartStop/group/embed-certs/serial/SecondStart 422.26
345 TestNetworkPlugins/group/auto/Start 99.32
346 TestNetworkPlugins/group/kindnet/Start 70.98
347 TestNetworkPlugins/group/calico/Start 115.67
348 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
349 TestNetworkPlugins/group/kindnet/KubeletFlags 0.22
350 TestNetworkPlugins/group/kindnet/NetCatPod 11.26
351 TestNetworkPlugins/group/auto/KubeletFlags 0.22
352 TestNetworkPlugins/group/auto/NetCatPod 12.22
353 TestNetworkPlugins/group/kindnet/DNS 0.2
354 TestNetworkPlugins/group/kindnet/Localhost 0.23
355 TestNetworkPlugins/group/kindnet/HairPin 0.16
356 TestNetworkPlugins/group/auto/DNS 0.19
357 TestNetworkPlugins/group/auto/Localhost 0.2
358 TestNetworkPlugins/group/auto/HairPin 0.17
359 TestNetworkPlugins/group/custom-flannel/Start 86
360 TestNetworkPlugins/group/enable-default-cni/Start 80.9
361 TestNetworkPlugins/group/calico/ControllerPod 6.01
362 TestNetworkPlugins/group/calico/KubeletFlags 0.21
363 TestNetworkPlugins/group/calico/NetCatPod 10.33
364 TestNetworkPlugins/group/calico/DNS 0.38
365 TestNetworkPlugins/group/calico/Localhost 0.17
366 TestNetworkPlugins/group/calico/HairPin 0.14
367 TestNetworkPlugins/group/flannel/Start 86.66
368 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.38
369 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.29
370 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
371 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.36
372 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.3
373 TestStartStop/group/embed-certs/serial/Pause 3.11
374 TestNetworkPlugins/group/bridge/Start 60.13
375 TestNetworkPlugins/group/custom-flannel/DNS 0.18
376 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
377 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
378 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
379 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
380 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
381 TestNetworkPlugins/group/flannel/ControllerPod 6.01
382 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
383 TestNetworkPlugins/group/flannel/NetCatPod 10.23
384 TestNetworkPlugins/group/flannel/DNS 0.16
385 TestNetworkPlugins/group/flannel/Localhost 0.16
386 TestNetworkPlugins/group/flannel/HairPin 0.15
387 TestNetworkPlugins/group/bridge/KubeletFlags 0.21
388 TestNetworkPlugins/group/bridge/NetCatPod 10.22
389 TestNetworkPlugins/group/bridge/DNS 0.16
390 TestNetworkPlugins/group/bridge/Localhost 0.13
391 TestNetworkPlugins/group/bridge/HairPin 0.14
TestDownloadOnly/v1.20.0/json-events (31.68s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-513879 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-513879 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (31.676189384s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (31.68s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-513879
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-513879: exit status 85 (86.87367ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-513879 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:37 UTC |          |
	|         | -p download-only-513879        |                      |         |                |                     |          |
	|         | --force --alsologtostderr      |                      |         |                |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |          |
	|         | --container-runtime=crio       |                      |         |                |                     |          |
	|         | --driver=kvm2                  |                      |         |                |                     |          |
	|         | --container-runtime=crio       |                      |         |                |                     |          |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/15 23:37:37
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0415 23:37:37.610641   14909 out.go:291] Setting OutFile to fd 1 ...
	I0415 23:37:37.610797   14909 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 23:37:37.610807   14909 out.go:304] Setting ErrFile to fd 2...
	I0415 23:37:37.610815   14909 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 23:37:37.611093   14909 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-7542/.minikube/bin
	W0415 23:37:37.611276   14909 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18647-7542/.minikube/config/config.json: open /home/jenkins/minikube-integration/18647-7542/.minikube/config/config.json: no such file or directory
	I0415 23:37:37.611978   14909 out.go:298] Setting JSON to true
	I0415 23:37:37.613025   14909 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1202,"bootTime":1713223056,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0415 23:37:37.613103   14909 start.go:139] virtualization: kvm guest
	I0415 23:37:37.616780   14909 out.go:97] [download-only-513879] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0415 23:37:37.619344   14909 out.go:169] MINIKUBE_LOCATION=18647
	W0415 23:37:37.617022   14909 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball: no such file or directory
	I0415 23:37:37.617188   14909 notify.go:220] Checking for updates...
	I0415 23:37:37.623775   14909 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 23:37:37.625969   14909 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0415 23:37:37.628013   14909 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18647-7542/.minikube
	I0415 23:37:37.630429   14909 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0415 23:37:37.634360   14909 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0415 23:37:37.634764   14909 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 23:37:38.190887   14909 out.go:97] Using the kvm2 driver based on user configuration
	I0415 23:37:38.190942   14909 start.go:297] selected driver: kvm2
	I0415 23:37:38.190950   14909 start.go:901] validating driver "kvm2" against <nil>
	I0415 23:37:38.191448   14909 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 23:37:38.191621   14909 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18647-7542/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0415 23:37:38.209572   14909 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0415 23:37:38.209635   14909 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 23:37:38.210191   14909 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0415 23:37:38.210363   14909 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0415 23:37:38.210437   14909 cni.go:84] Creating CNI manager for ""
	I0415 23:37:38.210453   14909 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0415 23:37:38.210464   14909 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0415 23:37:38.210526   14909 start.go:340] cluster config:
	{Name:download-only-513879 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-513879 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 23:37:38.210718   14909 iso.go:125] acquiring lock: {Name:mk848ef90fbc2a1876645fc8fc16af382c3bcaa9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 23:37:38.213334   14909 out.go:97] Downloading VM boot image ...
	I0415 23:37:38.213390   14909 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18647-7542/.minikube/cache/iso/amd64/minikube-v1.33.0-1713175573-18634-amd64.iso
	I0415 23:37:46.560497   14909 out.go:97] Starting "download-only-513879" primary control-plane node in "download-only-513879" cluster
	I0415 23:37:46.560558   14909 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0415 23:37:46.657750   14909 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0415 23:37:46.657802   14909 cache.go:56] Caching tarball of preloaded images
	I0415 23:37:46.657996   14909 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0415 23:37:46.660520   14909 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0415 23:37:46.660558   14909 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0415 23:37:46.759661   14909 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0415 23:38:00.324806   14909 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0415 23:38:00.324909   14909 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0415 23:38:01.318516   14909 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0415 23:38:01.318877   14909 profile.go:143] Saving config to /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/download-only-513879/config.json ...
	I0415 23:38:01.318910   14909 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/download-only-513879/config.json: {Name:mkc4845af8a1679c1c4392782695ebdc53519a15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0415 23:38:01.319100   14909 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0415 23:38:01.319323   14909 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18647-7542/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-513879 host does not exist
	  To start a cluster, run: "minikube start -p download-only-513879"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)
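
The v1.20.0 log above shows the preload tarball being downloaded and then verified against an md5 checksum before it is cached. A rough manual equivalent of that verification, reusing the URL and digest from the log (a sketch, not something the suite runs), is:

    curl -fLO https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
    echo "f93b07cde9c3289306cbaeb7a1803c19  preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4" | md5sum -c -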

TestDownloadOnly/v1.20.0/DeleteAll (0.17s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.17s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.16s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-513879
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.16s)

TestDownloadOnly/v1.29.3/json-events (14s)

=== RUN   TestDownloadOnly/v1.29.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-779412 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-779412 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (14.00179725s)
--- PASS: TestDownloadOnly/v1.29.3/json-events (14.00s)

TestDownloadOnly/v1.29.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.3/preload-exists
--- PASS: TestDownloadOnly/v1.29.3/preload-exists (0.00s)

TestDownloadOnly/v1.29.3/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.29.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-779412
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-779412: exit status 85 (91.522918ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-513879 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:37 UTC |                     |
	|         | -p download-only-513879        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |                     |
	|         | --container-runtime=crio       |                      |         |                |                     |                     |
	|         | --driver=kvm2                  |                      |         |                |                     |                     |
	|         | --container-runtime=crio       |                      |         |                |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:38 UTC | 15 Apr 24 23:38 UTC |
	| delete  | -p download-only-513879        | download-only-513879 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:38 UTC | 15 Apr 24 23:38 UTC |
	| start   | -o=json --download-only        | download-only-779412 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:38 UTC |                     |
	|         | -p download-only-779412        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3   |                      |         |                |                     |                     |
	|         | --container-runtime=crio       |                      |         |                |                     |                     |
	|         | --driver=kvm2                  |                      |         |                |                     |                     |
	|         | --container-runtime=crio       |                      |         |                |                     |                     |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/15 23:38:09
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0415 23:38:09.703201   15189 out.go:291] Setting OutFile to fd 1 ...
	I0415 23:38:09.703333   15189 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 23:38:09.703355   15189 out.go:304] Setting ErrFile to fd 2...
	I0415 23:38:09.703362   15189 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 23:38:09.703565   15189 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-7542/.minikube/bin
	I0415 23:38:09.704234   15189 out.go:298] Setting JSON to true
	I0415 23:38:09.705217   15189 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1234,"bootTime":1713223056,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0415 23:38:09.705288   15189 start.go:139] virtualization: kvm guest
	I0415 23:38:09.707900   15189 out.go:97] [download-only-779412] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0415 23:38:09.709562   15189 out.go:169] MINIKUBE_LOCATION=18647
	I0415 23:38:09.708149   15189 notify.go:220] Checking for updates...
	I0415 23:38:09.712415   15189 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 23:38:09.714004   15189 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0415 23:38:09.715775   15189 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18647-7542/.minikube
	I0415 23:38:09.717319   15189 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0415 23:38:09.720021   15189 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0415 23:38:09.720390   15189 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 23:38:09.762526   15189 out.go:97] Using the kvm2 driver based on user configuration
	I0415 23:38:09.762568   15189 start.go:297] selected driver: kvm2
	I0415 23:38:09.762574   15189 start.go:901] validating driver "kvm2" against <nil>
	I0415 23:38:09.763009   15189 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 23:38:09.763128   15189 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18647-7542/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0415 23:38:09.780835   15189 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0415 23:38:09.780904   15189 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 23:38:09.781469   15189 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0415 23:38:09.781667   15189 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0415 23:38:09.781766   15189 cni.go:84] Creating CNI manager for ""
	I0415 23:38:09.781782   15189 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0415 23:38:09.781791   15189 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0415 23:38:09.781869   15189 start.go:340] cluster config:
	{Name:download-only-779412 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:download-only-779412 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 23:38:09.781994   15189 iso.go:125] acquiring lock: {Name:mk848ef90fbc2a1876645fc8fc16af382c3bcaa9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 23:38:09.784312   15189 out.go:97] Starting "download-only-779412" primary control-plane node in "download-only-779412" cluster
	I0415 23:38:09.784343   15189 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0415 23:38:09.908145   15189 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	I0415 23:38:09.908182   15189 cache.go:56] Caching tarball of preloaded images
	I0415 23:38:09.908406   15189 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime crio
	I0415 23:38:09.910519   15189 out.go:97] Downloading Kubernetes v1.29.3 preload ...
	I0415 23:38:09.910547   15189 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4 ...
	I0415 23:38:10.015404   15189 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:6f4e94cb6232b24c3932ab20b1ee6dad -> /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-779412 host does not exist
	  To start a cluster, run: "minikube start -p download-only-779412"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.3/LogsDuration (0.09s)

TestDownloadOnly/v1.29.3/DeleteAll (0.18s)

=== RUN   TestDownloadOnly/v1.29.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.3/DeleteAll (0.18s)

TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.17s)

=== RUN   TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-779412
--- PASS: TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.17s)

TestDownloadOnly/v1.30.0-rc.2/json-events (11.56s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-533993 --force --alsologtostderr --kubernetes-version=v1.30.0-rc.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-533993 --force --alsologtostderr --kubernetes-version=v1.30.0-rc.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (11.554662406s)
--- PASS: TestDownloadOnly/v1.30.0-rc.2/json-events (11.56s)

TestDownloadOnly/v1.30.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.30.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.30.0-rc.2/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-533993
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-533993: exit status 85 (93.17213ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-513879 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:37 UTC |                     |
	|         | -p download-only-513879           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |                |                     |                     |
	|         | --container-runtime=crio          |                      |         |                |                     |                     |
	|         | --driver=kvm2                     |                      |         |                |                     |                     |
	|         | --container-runtime=crio          |                      |         |                |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:38 UTC | 15 Apr 24 23:38 UTC |
	| delete  | -p download-only-513879           | download-only-513879 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:38 UTC | 15 Apr 24 23:38 UTC |
	| start   | -o=json --download-only           | download-only-779412 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:38 UTC |                     |
	|         | -p download-only-779412           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3      |                      |         |                |                     |                     |
	|         | --container-runtime=crio          |                      |         |                |                     |                     |
	|         | --driver=kvm2                     |                      |         |                |                     |                     |
	|         | --container-runtime=crio          |                      |         |                |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:38 UTC | 15 Apr 24 23:38 UTC |
	| delete  | -p download-only-779412           | download-only-779412 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:38 UTC | 15 Apr 24 23:38 UTC |
	| start   | -o=json --download-only           | download-only-533993 | jenkins | v1.33.0-beta.0 | 15 Apr 24 23:38 UTC |                     |
	|         | -p download-only-533993           |                      |         |                |                     |                     |
	|         | --force --alsologtostderr         |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-rc.2 |                      |         |                |                     |                     |
	|         | --container-runtime=crio          |                      |         |                |                     |                     |
	|         | --driver=kvm2                     |                      |         |                |                     |                     |
	|         | --container-runtime=crio          |                      |         |                |                     |                     |
	|---------|-----------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/15 23:38:24
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0415 23:38:24.145885   15392 out.go:291] Setting OutFile to fd 1 ...
	I0415 23:38:24.146042   15392 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 23:38:24.146057   15392 out.go:304] Setting ErrFile to fd 2...
	I0415 23:38:24.146064   15392 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 23:38:24.146320   15392 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-7542/.minikube/bin
	I0415 23:38:24.146983   15392 out.go:298] Setting JSON to true
	I0415 23:38:24.147930   15392 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1248,"bootTime":1713223056,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0415 23:38:24.148000   15392 start.go:139] virtualization: kvm guest
	I0415 23:38:24.150882   15392 out.go:97] [download-only-533993] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0415 23:38:24.152865   15392 out.go:169] MINIKUBE_LOCATION=18647
	I0415 23:38:24.151148   15392 notify.go:220] Checking for updates...
	I0415 23:38:24.156557   15392 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 23:38:24.158654   15392 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0415 23:38:24.160692   15392 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18647-7542/.minikube
	I0415 23:38:24.163405   15392 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0415 23:38:24.166967   15392 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0415 23:38:24.167412   15392 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 23:38:24.205223   15392 out.go:97] Using the kvm2 driver based on user configuration
	I0415 23:38:24.205258   15392 start.go:297] selected driver: kvm2
	I0415 23:38:24.205264   15392 start.go:901] validating driver "kvm2" against <nil>
	I0415 23:38:24.205632   15392 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 23:38:24.205730   15392 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18647-7542/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0415 23:38:24.222436   15392 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.0-beta.0
	I0415 23:38:24.222516   15392 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0415 23:38:24.223093   15392 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0415 23:38:24.223257   15392 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0415 23:38:24.223329   15392 cni.go:84] Creating CNI manager for ""
	I0415 23:38:24.223343   15392 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0415 23:38:24.223351   15392 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0415 23:38:24.223412   15392 start.go:340] cluster config:
	{Name:download-only-533993 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-rc.2 ClusterName:download-only-533993 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 23:38:24.223522   15392 iso.go:125] acquiring lock: {Name:mk848ef90fbc2a1876645fc8fc16af382c3bcaa9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0415 23:38:24.226315   15392 out.go:97] Starting "download-only-533993" primary control-plane node in "download-only-533993" cluster
	I0415 23:38:24.226361   15392 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime crio
	I0415 23:38:24.319681   15392 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-rc.2/preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0415 23:38:24.319716   15392 cache.go:56] Caching tarball of preloaded images
	I0415 23:38:24.319905   15392 preload.go:132] Checking if preload exists for k8s version v1.30.0-rc.2 and runtime crio
	I0415 23:38:24.322494   15392 out.go:97] Downloading Kubernetes v1.30.0-rc.2 preload ...
	I0415 23:38:24.322536   15392 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0415 23:38:24.420011   15392 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-rc.2/preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:3f21ab668c1533072cd1f73a92db63f3 -> /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0415 23:38:34.031158   15392 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0415 23:38:34.031275   15392 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-533993 host does not exist
	  To start a cluster, run: "minikube start -p download-only-533993"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0-rc.2/LogsDuration (0.09s)
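
Across the three download-only runs, the ISO, the per-version preload tarballs, and the kubectl binaries all land under the cache paths shown in the logs. A quick way to see what was cached, assuming the MINIKUBE_HOME used by this job, is:

    ls /home/jenkins/minikube-integration/18647-7542/.minikube/cache/iso/amd64/
    ls /home/jenkins/minikube-integration/18647-7542/.minikube/cache/preloaded-tarball/
    ls /home/jenkins/minikube-integration/18647-7542/.minikube/cache/linux/amd64/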

TestDownloadOnly/v1.30.0-rc.2/DeleteAll (0.17s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.0-rc.2/DeleteAll (0.17s)

TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds (0.16s)

=== RUN   TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-533993
--- PASS: TestDownloadOnly/v1.30.0-rc.2/DeleteAlwaysSucceeds (0.16s)

TestBinaryMirror (0.64s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-483637 --alsologtostderr --binary-mirror http://127.0.0.1:35999 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-483637" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-483637
--- PASS: TestBinaryMirror (0.64s)
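
TestBinaryMirror starts a download-only profile with --binary-mirror pointed at http://127.0.0.1:35999, presumably a local server started by the harness that mirrors the dl.k8s.io release layout. A rough manual sketch of the same idea (hypothetical directory and profile name, not the test's own code) is:

    python3 -m http.server 35999 --directory ./mirror &   # ./mirror laid out like dl.k8s.io/release/...
    out/minikube-linux-amd64 start --download-only -p binary-mirror-demo --binary-mirror http://127.0.0.1:35999 --driver=kvm2 --container-runtime=crio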

TestOffline (102.27s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-847093 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-847093 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m41.281326369s)
helpers_test.go:175: Cleaning up "offline-crio-847093" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-847093
--- PASS: TestOffline (102.27s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-045739
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-045739: exit status 85 (87.761971ms)

-- stdout --
	* Profile "addons-045739" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-045739"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-045739
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-045739: exit status 85 (88.469709ms)

                                                
                                                
-- stdout --
	* Profile "addons-045739" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-045739"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

                                                
                                    
x
+
TestAddons/Setup (222.9s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-045739 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-045739 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m42.9026728s)
--- PASS: TestAddons/Setup (222.90s)

                                                
                                    
x
+
TestAddons/parallel/Registry (19.29s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 29.603745ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-gjtmh" [c0b8d4f6-9fd8-4bc0-b4b8-bc1142309612] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005343716s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-vlqgc" [10079756-cd80-4336-8ed3-d4418b38b5de] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.007286171s
addons_test.go:340: (dbg) Run:  kubectl --context addons-045739 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-045739 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-045739 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.012595082s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-045739 ip
2024/04/15 23:42:38 [DEBUG] GET http://192.168.39.182:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-045739 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (19.29s)
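For reference, the registry check above can be reproduced by hand against a running profile. This is a minimal sketch reusing the exact commands recorded in this run (profile addons-045739); the host-side curl assumes the registry is reachable on port 5000 of the node IP, as the DEBUG line above shows.

  # Probe the in-cluster registry service from a throwaway busybox pod
  kubectl --context addons-045739 run --rm registry-test --restart=Never \
    --image=gcr.io/k8s-minikube/busybox -it -- \
    sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
  # Probe the same registry from the host via the node IP and port 5000
  curl -s "http://$(out/minikube-linux-amd64 -p addons-045739 ip):5000"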

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (12.64s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-gtxm8" [6ace3b50-acfe-41f3-b89a-6978d97a440e] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.00549712s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-045739
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-045739: (6.633724555s)
--- PASS: TestAddons/parallel/InspektorGadget (12.64s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.12s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 29.71289ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-75d6c48ddd-2sm8z" [3d9ee9ce-539d-4a71-bcb5-c51e28fbd314] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.007692003s
addons_test.go:415: (dbg) Run:  kubectl --context addons-045739 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-045739 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:432: (dbg) Done: out/minikube-linux-amd64 -p addons-045739 addons disable metrics-server --alsologtostderr -v=1: (1.005497602s)
--- PASS: TestAddons/parallel/MetricsServer (6.12s)
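As a manual follow-up to the check above, pod metrics can be queried once the metrics-server addon reports healthy; a minimal sketch with the same profile:

  kubectl --context addons-045739 top pods -n kube-system
  out/minikube-linux-amd64 -p addons-045739 addons disable metrics-server --alsologtostderr -v=1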

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (17.8s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 28.980638ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-g5gcg" [cd362247-164a-4db8-b30a-9c2113f148f2] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.008716086s
addons_test.go:473: (dbg) Run:  kubectl --context addons-045739 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-045739 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.953752832s)
addons_test.go:478: kubectl --context addons-045739 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: 
addons_test.go:473: (dbg) Run:  kubectl --context addons-045739 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-045739 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (3.457634324s)
addons_test.go:478: kubectl --context addons-045739 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: 
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-045739 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (17.80s)
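The tiller check above can be repeated by hand. The sketch below drops the -it flags; as an assumption about the warning (not something the test asserts), that should avoid the "Unable to use a TTY" message when the command is driven from a non-interactive shell.

  kubectl --context addons-045739 run --rm helm-test --restart=Never \
    --image=docker.io/alpine/helm:2.16.3 --namespace=kube-system -- version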

                                                
                                    
x
+
TestAddons/parallel/CSI (68.05s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 30.710937ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-045739 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045739 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045739 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045739 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045739 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045739 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045739 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045739 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045739 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045739 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045739 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045739 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045739 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045739 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045739 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045739 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045739 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045739 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045739 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-045739 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [f6c6210d-56f9-41ee-b3aa-2b4c4135b62d] Pending
helpers_test.go:344: "task-pv-pod" [f6c6210d-56f9-41ee-b3aa-2b4c4135b62d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [f6c6210d-56f9-41ee-b3aa-2b4c4135b62d] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 16.004683537s
addons_test.go:584: (dbg) Run:  kubectl --context addons-045739 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-045739 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-045739 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-045739 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-045739 delete pod task-pv-pod: (1.507397273s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-045739 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-045739 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045739 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045739 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045739 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045739 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045739 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045739 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045739 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045739 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045739 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045739 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045739 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045739 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045739 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045739 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045739 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-045739 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [0ce58b8c-7b11-43a6-90f9-7f849b87259c] Pending
helpers_test.go:344: "task-pv-pod-restore" [0ce58b8c-7b11-43a6-90f9-7f849b87259c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [0ce58b8c-7b11-43a6-90f9-7f849b87259c] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.006338439s
addons_test.go:626: (dbg) Run:  kubectl --context addons-045739 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-045739 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-045739 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-045739 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-045739 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.161811282s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-045739 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:642: (dbg) Done: out/minikube-linux-amd64 -p addons-045739 addons disable volumesnapshots --alsologtostderr -v=1: (1.052941969s)
--- PASS: TestAddons/parallel/CSI (68.05s)
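The CSI hostpath flow above boils down to a provision / snapshot / restore cycle. This is a minimal sketch built from the same testdata manifests (their contents are not reproduced here), polling the resources the way the test does.

  kubectl --context addons-045739 create -f testdata/csi-hostpath-driver/pvc.yaml
  kubectl --context addons-045739 get pvc hpvc -o jsonpath='{.status.phase}'        # poll the PVC phase as the test does
  kubectl --context addons-045739 create -f testdata/csi-hostpath-driver/pv-pod.yaml
  kubectl --context addons-045739 create -f testdata/csi-hostpath-driver/snapshot.yaml
  kubectl --context addons-045739 get volumesnapshot new-snapshot-demo -o jsonpath='{.status.readyToUse}'
  kubectl --context addons-045739 delete pod task-pv-pod
  kubectl --context addons-045739 delete pvc hpvc
  kubectl --context addons-045739 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
  kubectl --context addons-045739 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml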

                                                
                                    
x
+
TestAddons/parallel/Headlamp (18.42s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-045739 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-045739 --alsologtostderr -v=1: (1.41374157s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5b77dbd7c4-4bkgx" [b9e0936d-5b67-4653-8948-3379b22d134c] Pending
helpers_test.go:344: "headlamp-5b77dbd7c4-4bkgx" [b9e0936d-5b67-4653-8948-3379b22d134c] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5b77dbd7c4-4bkgx" [b9e0936d-5b67-4653-8948-3379b22d134c] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 17.005789293s
--- PASS: TestAddons/parallel/Headlamp (18.42s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.78s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5446596998-blf5l" [08f3fdd3-95ad-4d7f-b1d7-08e452197b18] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.005050957s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-045739
--- PASS: TestAddons/parallel/CloudSpanner (5.78s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (24.42s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-045739 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-045739 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045739 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045739 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045739 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045739 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045739 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045739 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045739 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045739 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045739 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045739 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045739 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045739 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045739 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045739 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045739 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045739 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045739 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-045739 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [1a082e6a-d76e-489d-ad06-9168a14b9da0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [1a082e6a-d76e-489d-ad06-9168a14b9da0] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [1a082e6a-d76e-489d-ad06-9168a14b9da0] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.004737737s
addons_test.go:891: (dbg) Run:  kubectl --context addons-045739 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-045739 ssh "cat /opt/local-path-provisioner/pvc-f01537f6-92ca-4150-b63c-0f2e634b097f_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-045739 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-045739 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-045739 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (24.42s)
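The local-path check above can be repeated by hand; the sketch below reuses the testdata manifests and the provisioned path recorded in this run (the pvc-... directory name changes for every PVC).

  kubectl --context addons-045739 apply -f testdata/storage-provisioner-rancher/pvc.yaml
  kubectl --context addons-045739 apply -f testdata/storage-provisioner-rancher/pod.yaml
  # Read back the file the test pod wrote, directly from the node's hostpath
  out/minikube-linux-amd64 -p addons-045739 ssh \
    "cat /opt/local-path-provisioner/pvc-f01537f6-92ca-4150-b63c-0f2e634b097f_default_test-pvc/file1"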

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.76s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-742pq" [bfe2588d-264a-493b-a8ec-b82e9c1a873d] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.007139035s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-045739
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.76s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (5.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-n6g9b" [1b344559-5993-46da-9d10-2d43f53cb585] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.005178028s
--- PASS: TestAddons/parallel/Yakd (5.01s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-045739 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-045739 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)
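What the two commands above exercise is that the gcp-auth addon is expected to copy its credentials secret into namespaces created after the addon is enabled; a minimal manual sketch of the same check:

  kubectl --context addons-045739 create ns new-namespace
  kubectl --context addons-045739 get secret gcp-auth -n new-namespace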

                                                
                                    
x
+
TestCertOptions (68.66s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-752506 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-752506 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m7.152366936s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-752506 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-752506 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-752506 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-752506" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-752506
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-752506: (1.02286312s)
--- PASS: TestCertOptions (68.66s)
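The sketch below repeats the cert-options check manually; the grep filter at the end is an added convenience (not part of the test) to confirm the extra SANs appear in the generated apiserver certificate.

  out/minikube-linux-amd64 start -p cert-options-752506 --memory=2048 \
    --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
    --apiserver-names=localhost --apiserver-names=www.google.com \
    --apiserver-port=8555 --driver=kvm2 --container-runtime=crio
  out/minikube-linux-amd64 -p cert-options-752506 ssh \
    "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
    | grep -E 'www\.google\.com|192\.168\.15\.15'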

                                                
                                    
x
+
TestCertExpiration (278.95s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-359535 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-359535 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (57.790322945s)
E0416 00:48:41.726221   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/functional-596616/client.crt: no such file or directory
E0416 00:48:58.680146   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/functional-596616/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-359535 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-359535 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (40.178184521s)
helpers_test.go:175: Cleaning up "cert-expiration-359535" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-359535
E0416 00:52:20.169712   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/client.crt: no such file or directory
--- PASS: TestCertExpiration (278.95s)
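A minimal manual version of the expiration check above: create the cluster with a 3-minute certificate lifetime, let it lapse, then re-run start with a longer --cert-expiration so the certificates are regenerated.

  out/minikube-linux-amd64 start -p cert-expiration-359535 --memory=2048 \
    --cert-expiration=3m --driver=kvm2 --container-runtime=crio
  # ... wait for the 3m window to pass ...
  out/minikube-linux-amd64 start -p cert-expiration-359535 --memory=2048 \
    --cert-expiration=8760h --driver=kvm2 --container-runtime=crio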

                                                
                                    
x
+
TestForceSystemdFlag (81.66s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-200746 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-200746 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m20.082320587s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-200746 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-200746" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-200746
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-200746: (1.37036662s)
--- PASS: TestForceSystemdFlag (81.66s)
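A manual sketch of the check above; the grep on cgroup_manager is an assumption about what the drop-in contains (the test only cats the file), based on --force-systemd selecting the systemd cgroup manager for CRI-O.

  out/minikube-linux-amd64 start -p force-systemd-flag-200746 --memory=2048 \
    --force-systemd --driver=kvm2 --container-runtime=crio
  out/minikube-linux-amd64 -p force-systemd-flag-200746 ssh \
    "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager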

                                                
                                    
x
+
TestForceSystemdEnv (47.41s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-787358 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-787358 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (46.441763741s)
helpers_test.go:175: Cleaning up "force-systemd-env-787358" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-787358
--- PASS: TestForceSystemdEnv (47.41s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (4.05s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.05s)

                                                
                                    
x
+
TestErrorSpam/setup (42.25s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-306763 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-306763 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-306763 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-306763 --driver=kvm2  --container-runtime=crio: (42.254250872s)
--- PASS: TestErrorSpam/setup (42.25s)

                                                
                                    
x
+
TestErrorSpam/start (0.35s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-306763 --log_dir /tmp/nospam-306763 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-306763 --log_dir /tmp/nospam-306763 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-306763 --log_dir /tmp/nospam-306763 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

                                                
                                    
x
+
TestErrorSpam/status (0.74s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-306763 --log_dir /tmp/nospam-306763 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-306763 --log_dir /tmp/nospam-306763 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-306763 --log_dir /tmp/nospam-306763 status
--- PASS: TestErrorSpam/status (0.74s)

                                                
                                    
x
+
TestErrorSpam/pause (1.56s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-306763 --log_dir /tmp/nospam-306763 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-306763 --log_dir /tmp/nospam-306763 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-306763 --log_dir /tmp/nospam-306763 pause
--- PASS: TestErrorSpam/pause (1.56s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.59s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-306763 --log_dir /tmp/nospam-306763 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-306763 --log_dir /tmp/nospam-306763 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-306763 --log_dir /tmp/nospam-306763 unpause
--- PASS: TestErrorSpam/unpause (1.59s)

                                                
                                    
x
+
TestErrorSpam/stop (4.68s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-306763 --log_dir /tmp/nospam-306763 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-306763 --log_dir /tmp/nospam-306763 stop: (2.307420341s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-306763 --log_dir /tmp/nospam-306763 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-306763 --log_dir /tmp/nospam-306763 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-306763 --log_dir /tmp/nospam-306763 stop: (1.393919098s)
--- PASS: TestErrorSpam/stop (4.68s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18647-7542/.minikube/files/etc/test/nested/copy/14897/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (98.25s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-596616 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-596616 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m38.251055369s)
--- PASS: TestFunctional/serial/StartWithProxy (98.25s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (122.57s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-596616 --alsologtostderr -v=8
E0415 23:52:20.169513   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/client.crt: no such file or directory
E0415 23:52:20.175188   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/client.crt: no such file or directory
E0415 23:52:20.185418   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/client.crt: no such file or directory
E0415 23:52:20.205646   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/client.crt: no such file or directory
E0415 23:52:20.245882   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/client.crt: no such file or directory
E0415 23:52:20.326179   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/client.crt: no such file or directory
E0415 23:52:20.486567   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/client.crt: no such file or directory
E0415 23:52:20.807185   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/client.crt: no such file or directory
E0415 23:52:21.448081   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/client.crt: no such file or directory
E0415 23:52:22.728656   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/client.crt: no such file or directory
E0415 23:52:25.289600   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/client.crt: no such file or directory
E0415 23:52:30.410694   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/client.crt: no such file or directory
E0415 23:52:40.651248   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-596616 --alsologtostderr -v=8: (2m2.572798579s)
functional_test.go:659: soft start took 2m2.573449401s for "functional-596616" cluster.
--- PASS: TestFunctional/serial/SoftStart (122.57s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-596616 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.48s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-596616 cache add registry.k8s.io/pause:3.1: (1.075322138s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-596616 cache add registry.k8s.io/pause:3.3: (1.237826197s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-596616 cache add registry.k8s.io/pause:latest: (1.16438698s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.48s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.15s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-596616 /tmp/TestFunctionalserialCacheCmdcacheadd_local4072069370/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 cache add minikube-local-cache-test:functional-596616
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-596616 cache add minikube-local-cache-test:functional-596616: (1.781175608s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 cache delete minikube-local-cache-test:functional-596616
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-596616
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.15s)
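The local-image caching flow above, as a manual sketch; the build context ("." here) is a placeholder, since the test builds from a generated temp directory.

  docker build -t minikube-local-cache-test:functional-596616 .
  out/minikube-linux-amd64 -p functional-596616 cache add minikube-local-cache-test:functional-596616
  out/minikube-linux-amd64 -p functional-596616 cache delete minikube-local-cache-test:functional-596616
  docker rmi minikube-local-cache-test:functional-596616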

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.67s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-596616 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (214.299732ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.67s)
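A manual sketch of the reload cycle above: remove the image inside the node, confirm it is gone (non-zero exit), then let cache reload push it back from the host-side cache.

  out/minikube-linux-amd64 -p functional-596616 ssh sudo crictl rmi registry.k8s.io/pause:latest
  out/minikube-linux-amd64 -p functional-596616 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image missing
  out/minikube-linux-amd64 -p functional-596616 cache reload
  out/minikube-linux-amd64 -p functional-596616 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again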

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 kubectl -- --context functional-596616 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-596616 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (52.98s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-596616 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0415 23:53:01.132175   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/client.crt: no such file or directory
E0415 23:53:42.093626   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-596616 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (52.974930525s)
functional_test.go:757: restart took 52.975047616s for "functional-596616" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (52.98s)
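The restart above shows the --extra-config pattern: keys take the form <component>.<flag-name> and are forwarded to that component's command line. A minimal sketch with the same flag:

  out/minikube-linux-amd64 start -p functional-596616 \
    --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all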

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-596616 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.41s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-596616 logs: (1.40697169s)
--- PASS: TestFunctional/serial/LogsCmd (1.41s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.38s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 logs --file /tmp/TestFunctionalserialLogsFileCmd2481803712/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-596616 logs --file /tmp/TestFunctionalserialLogsFileCmd2481803712/001/logs.txt: (1.383485613s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.38s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (5.51s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-596616 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-596616
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-596616: exit status 115 (279.587016ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.86:32662 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-596616 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-596616 delete -f testdata/invalidsvc.yaml: (2.04107503s)
--- PASS: TestFunctional/serial/InvalidService (5.51s)
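A manual sketch of the scenario above: a Service whose selector matches no running pod makes "minikube service" exit with SVC_UNREACHABLE (status 115), after which the manifest is cleaned up.

  kubectl --context functional-596616 apply -f testdata/invalidsvc.yaml
  out/minikube-linux-amd64 service invalid-svc -p functional-596616    # expected: exit status 115
  kubectl --context functional-596616 delete -f testdata/invalidsvc.yaml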

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-596616 config get cpus: exit status 14 (65.188186ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-596616 config get cpus: exit status 14 (58.560041ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.40s)
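
Exit status 14 is what `config get` returns for a key that is not set. A minimal sketch of the set/get/unset round trip above, under the same assumptions (binary path and profile from this log; names are placeholders):

package sketch

import (
	"os/exec"
	"testing"
)

// Set, get, and unset a key, then expect exit status 14 once the key is gone.
func TestConfigUnsetKeyExitCode(t *testing.T) {
	run := func(args ...string) error {
		return exec.Command("out/minikube-linux-amd64",
			append([]string{"-p", "functional-596616", "config"}, args...)...).Run()
	}
	if err := run("set", "cpus", "2"); err != nil {
		t.Fatal(err)
	}
	if err := run("get", "cpus"); err != nil {
		t.Fatal(err)
	}
	if err := run("unset", "cpus"); err != nil {
		t.Fatal(err)
	}
	err := run("get", "cpus")
	if ee, ok := err.(*exec.ExitError); !ok || ee.ExitCode() != 14 {
		t.Fatalf("expected exit status 14 for an unset key, got %v", err)
	}
}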

                                                
                                    
TestFunctional/parallel/DashboardCmd (22.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-596616 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-596616 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 23458: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (22.68s)

                                                
                                    
TestFunctional/parallel/DryRun (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-596616 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-596616 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (161.55879ms)

                                                
                                                
-- stdout --
	* [functional-596616] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18647
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18647-7542/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18647-7542/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0415 23:54:00.205031   23254 out.go:291] Setting OutFile to fd 1 ...
	I0415 23:54:00.205140   23254 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 23:54:00.205149   23254 out.go:304] Setting ErrFile to fd 2...
	I0415 23:54:00.205153   23254 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 23:54:00.205336   23254 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-7542/.minikube/bin
	I0415 23:54:00.206138   23254 out.go:298] Setting JSON to false
	I0415 23:54:00.207045   23254 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2184,"bootTime":1713223056,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0415 23:54:00.207100   23254 start.go:139] virtualization: kvm guest
	I0415 23:54:00.209424   23254 out.go:177] * [functional-596616] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0415 23:54:00.210788   23254 notify.go:220] Checking for updates...
	I0415 23:54:00.210794   23254 out.go:177]   - MINIKUBE_LOCATION=18647
	I0415 23:54:00.212133   23254 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 23:54:00.213343   23254 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0415 23:54:00.214845   23254 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18647-7542/.minikube
	I0415 23:54:00.216186   23254 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0415 23:54:00.217551   23254 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 23:54:00.219383   23254 config.go:182] Loaded profile config "functional-596616": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0415 23:54:00.219945   23254 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:54:00.220014   23254 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:54:00.242329   23254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35433
	I0415 23:54:00.242734   23254 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:54:00.243322   23254 main.go:141] libmachine: Using API Version  1
	I0415 23:54:00.243345   23254 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:54:00.243798   23254 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:54:00.243990   23254 main.go:141] libmachine: (functional-596616) Calling .DriverName
	I0415 23:54:00.244244   23254 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 23:54:00.244520   23254 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:54:00.244552   23254 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:54:00.260045   23254 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41687
	I0415 23:54:00.260522   23254 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:54:00.261008   23254 main.go:141] libmachine: Using API Version  1
	I0415 23:54:00.261031   23254 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:54:00.261352   23254 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:54:00.261545   23254 main.go:141] libmachine: (functional-596616) Calling .DriverName
	I0415 23:54:00.296532   23254 out.go:177] * Using the kvm2 driver based on existing profile
	I0415 23:54:00.297853   23254 start.go:297] selected driver: kvm2
	I0415 23:54:00.297872   23254 start.go:901] validating driver "kvm2" against &{Name:functional-596616 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-596616 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.86 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 23:54:00.297988   23254 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 23:54:00.300475   23254 out.go:177] 
	W0415 23:54:00.301746   23254 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0415 23:54:00.303045   23254 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-596616 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.31s)
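
Exit status 23 corresponds to the RSRC_INSUFFICIENT_REQ_MEMORY error in stderr: the requested 250MB is below the 1800MB minimum. A sketch of that failing dry run, built only from the flags shown above (illustrative, not the suite's own code):

package sketch

import (
	"os/exec"
	"testing"
)

// A 250MB request is below minikube's usable minimum, so --dry-run should
// fail with exit status 23 without touching the existing profile.
func TestDryRunTooLittleMemory(t *testing.T) {
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-596616",
		"--dry-run", "--memory", "250MB", "--driver=kvm2", "--container-runtime=crio")
	err := cmd.Run()
	if ee, ok := err.(*exec.ExitError); !ok || ee.ExitCode() != 23 {
		t.Fatalf("expected exit status 23, got %v", err)
	}
}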

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-596616 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-596616 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (164.547082ms)

                                                
                                                
-- stdout --
	* [functional-596616] minikube v1.33.0-beta.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18647
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18647-7542/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18647-7542/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0415 23:54:00.051482   23198 out.go:291] Setting OutFile to fd 1 ...
	I0415 23:54:00.052423   23198 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 23:54:00.052439   23198 out.go:304] Setting ErrFile to fd 2...
	I0415 23:54:00.052447   23198 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0415 23:54:00.053127   23198 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-7542/.minikube/bin
	I0415 23:54:00.055089   23198 out.go:298] Setting JSON to false
	I0415 23:54:00.056220   23198 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2184,"bootTime":1713223056,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0415 23:54:00.056309   23198 start.go:139] virtualization: kvm guest
	I0415 23:54:00.058528   23198 out.go:177] * [functional-596616] minikube v1.33.0-beta.0 sur Ubuntu 20.04 (kvm/amd64)
	I0415 23:54:00.060179   23198 out.go:177]   - MINIKUBE_LOCATION=18647
	I0415 23:54:00.060187   23198 notify.go:220] Checking for updates...
	I0415 23:54:00.061429   23198 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0415 23:54:00.062840   23198 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0415 23:54:00.065615   23198 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18647-7542/.minikube
	I0415 23:54:00.066823   23198 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0415 23:54:00.068202   23198 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0415 23:54:00.069918   23198 config.go:182] Loaded profile config "functional-596616": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0415 23:54:00.070498   23198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:54:00.070584   23198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:54:00.086416   23198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40093
	I0415 23:54:00.086790   23198 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:54:00.087306   23198 main.go:141] libmachine: Using API Version  1
	I0415 23:54:00.087326   23198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:54:00.087628   23198 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:54:00.087791   23198 main.go:141] libmachine: (functional-596616) Calling .DriverName
	I0415 23:54:00.088046   23198 driver.go:392] Setting default libvirt URI to qemu:///system
	I0415 23:54:00.088452   23198 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0415 23:54:00.088494   23198 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0415 23:54:00.103282   23198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34309
	I0415 23:54:00.103675   23198 main.go:141] libmachine: () Calling .GetVersion
	I0415 23:54:00.104146   23198 main.go:141] libmachine: Using API Version  1
	I0415 23:54:00.104179   23198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0415 23:54:00.104583   23198 main.go:141] libmachine: () Calling .GetMachineName
	I0415 23:54:00.104791   23198 main.go:141] libmachine: (functional-596616) Calling .DriverName
	I0415 23:54:00.137581   23198 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0415 23:54:00.138917   23198 start.go:297] selected driver: kvm2
	I0415 23:54:00.138929   23198 start.go:901] validating driver "kvm2" against &{Name:functional-596616 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18634/minikube-v1.33.0-1713175573-18634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713215244-18647@sha256:4eb69c9ed3e92807cea9443b515ec5d46db84479de7669694de8c98e2d40c4af Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-596616 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.86 Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0415 23:54:00.139037   23198 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0415 23:54:00.141201   23198 out.go:177] 
	W0415 23:54:00.142476   23198 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0415 23:54:00.143886   23198 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.26s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (10.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-596616 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-596616 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-6b2wt" [9161a5e9-5361-4071-8575-f29314da3fdd] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-6b2wt" [9161a5e9-5361-4071-8575-f29314da3fdd] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.004048562s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.86:31224
functional_test.go:1671: http://192.168.39.86:31224: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-55497b8b78-6b2wt

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.86:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.86:31224
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.53s)
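
The URL reported by `service hello-node-connect --url` is a plain NodePort endpoint, so the echoserver reply shown above can be checked with an ordinary HTTP GET. A sketch, hard-coding the endpoint from this particular run (placeholder names, not the suite's helper):

package sketch

import (
	"io"
	"net/http"
	"strings"
	"testing"
)

// Fetch the NodePort URL reported by `minikube service --url` and confirm the
// echoserver reply names the serving pod.
func TestEchoserverEndpoint(t *testing.T) {
	resp, err := http.Get("http://192.168.39.86:31224") // endpoint from the log above
	if err != nil {
		t.Fatal(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		t.Fatal(err)
	}
	if !strings.Contains(string(body), "Hostname: hello-node-connect") {
		t.Fatalf("unexpected body: %s", body)
	}
}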

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (44.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [57132ba8-0fc1-43e5-a29c-9bd4bf8b065a] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003762179s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-596616 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-596616 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-596616 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-596616 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-596616 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c84c0f72-a5a2-4563-a31b-764bdee3e905] Pending
helpers_test.go:344: "sp-pod" [c84c0f72-a5a2-4563-a31b-764bdee3e905] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c84c0f72-a5a2-4563-a31b-764bdee3e905] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 21.004882744s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-596616 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-596616 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-596616 delete -f testdata/storage-provisioner/pod.yaml: (4.06564394s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-596616 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [896f62ba-e4d0-4a8c-bcfd-7d544dfac239] Pending
helpers_test.go:344: "sp-pod" [896f62ba-e4d0-4a8c-bcfd-7d544dfac239] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [896f62ba-e4d0-4a8c-bcfd-7d544dfac239] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.14509571s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-596616 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (44.13s)
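
The steps above are the usual persistence check: write /tmp/mount/foo through the first sp-pod, delete the pod, recreate it against the same PVC, and confirm the file survived. A sketch of that final verification via kubectl (illustrative only; context and pod name taken from this log):

package sketch

import (
	"os/exec"
	"strings"
	"testing"
)

// After sp-pod is recreated against the same PVC, /tmp/mount/foo written by
// the previous pod should still be present.
func TestFileSurvivesPodRecreation(t *testing.T) {
	out, err := exec.Command("kubectl", "--context", "functional-596616",
		"exec", "sp-pod", "--", "ls", "/tmp/mount").Output()
	if err != nil {
		t.Fatal(err)
	}
	if !strings.Contains(string(out), "foo") {
		t.Fatalf("expected foo to persist across pod recreation, got: %s", out)
	}
}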

                                                
                                    
TestFunctional/parallel/SSHCmd (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.44s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 ssh -n functional-596616 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 cp functional-596616:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd682520976/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 ssh -n functional-596616 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 ssh -n functional-596616 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.51s)

                                                
                                    
TestFunctional/parallel/MySQL (25.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-596616 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-4xfq2" [e2dcb21e-df65-4ea9-8009-e59641bb66b0] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-4xfq2" [e2dcb21e-df65-4ea9-8009-e59641bb66b0] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.004521288s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-596616 exec mysql-859648c796-4xfq2 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-596616 exec mysql-859648c796-4xfq2 -- mysql -ppassword -e "show databases;": exit status 1 (125.756191ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-596616 exec mysql-859648c796-4xfq2 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-596616 exec mysql-859648c796-4xfq2 -- mysql -ppassword -e "show databases;": exit status 1 (166.57989ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-596616 exec mysql-859648c796-4xfq2 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.95s)
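
The two ERROR 2002 failures are expected while mysqld is still starting inside the pod; the query is simply retried until the socket is up. A sketch of such a retry loop, assuming the pod name from this run (placeholder function, not the suite's own code):

package sketch

import (
	"fmt"
	"os/exec"
	"time"
)

// Retry `show databases;` until mysqld inside the pod accepts connections.
func waitForMySQL(pod string) error {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "--context", "functional-596616", "exec", pod,
			"--", "mysql", "-ppassword", "-e", "show databases;")
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(5 * time.Second) // ERROR 2002 just means the socket is not up yet
	}
	return fmt.Errorf("mysql in %s never became reachable", pod)
}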

                                                
                                    
TestFunctional/parallel/FileSync (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/14897/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 ssh "sudo cat /etc/test/nested/copy/14897/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.20s)

                                                
                                    
TestFunctional/parallel/CertSync (1.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/14897.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 ssh "sudo cat /etc/ssl/certs/14897.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/14897.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 ssh "sudo cat /usr/share/ca-certificates/14897.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/148972.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 ssh "sudo cat /etc/ssl/certs/148972.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/148972.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 ssh "sudo cat /usr/share/ca-certificates/148972.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.49s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-596616 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-596616 ssh "sudo systemctl is-active docker": exit status 1 (229.409803ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-596616 ssh "sudo systemctl is-active containerd": exit status 1 (246.819895ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.48s)
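
With crio selected, `systemctl is-active docker` and `systemctl is-active containerd` print "inactive" and exit non-zero (systemd status 3, surfaced by `minikube ssh` as exit status 1), so it is the printed state that matters here. A sketch that inspects stdout rather than the exit code (illustrative only):

package sketch

import (
	"os/exec"
	"strings"
	"testing"
)

// The non-selected runtime should report "inactive"; is-active exits non-zero
// in that case, so only the captured stdout is checked.
func TestDockerRuntimeDisabled(t *testing.T) {
	out, _ := exec.Command("out/minikube-linux-amd64", "-p", "functional-596616",
		"ssh", "sudo systemctl is-active docker").Output()
	if strings.TrimSpace(string(out)) != "inactive" {
		t.Fatalf("expected docker to be inactive, got %q", out)
	}
}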

                                                
                                    
TestFunctional/parallel/License (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.42s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (11.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-596616 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-596616 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-wplcz" [23eba21f-a86b-4259-bf7a-958d80e1ee4d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-wplcz" [23eba21f-a86b-4259-bf7a-958d80e1ee4d] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.007653086s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.21s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (11.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-596616 /tmp/TestFunctionalparallelMountCmdany-port4030095305/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1713225238896473164" to /tmp/TestFunctionalparallelMountCmdany-port4030095305/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1713225238896473164" to /tmp/TestFunctionalparallelMountCmdany-port4030095305/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1713225238896473164" to /tmp/TestFunctionalparallelMountCmdany-port4030095305/001/test-1713225238896473164
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-596616 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (269.220974ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr 15 23:53 created-by-test
-rw-r--r-- 1 docker docker 24 Apr 15 23:53 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr 15 23:53 test-1713225238896473164
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 ssh cat /mount-9p/test-1713225238896473164
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-596616 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [ad68faae-059e-4540-be2c-0ce61527a366] Pending
helpers_test.go:344: "busybox-mount" [ad68faae-059e-4540-be2c-0ce61527a366] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [ad68faae-059e-4540-be2c-0ce61527a366] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [ad68faae-059e-4540-be2c-0ce61527a366] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 9.003656236s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-596616 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-596616 /tmp/TestFunctionalparallelMountCmdany-port4030095305/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (11.74s)
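
The first findmnt probe fails because the 9p mount has not appeared yet; the probe is simply repeated. A sketch that starts the mount in the background and polls for it, using the paths from this run (an illustration, not the suite's helper):

package sketch

import (
	"fmt"
	"os/exec"
	"time"
)

// Start `minikube mount` in the background, then poll findmnt inside the VM
// until the 9p mount at /mount-9p shows up.
func waitForNinePMount(hostDir string) error {
	mount := exec.Command("out/minikube-linux-amd64", "mount", "-p", "functional-596616",
		hostDir+":/mount-9p")
	if err := mount.Start(); err != nil {
		return err
	}
	defer mount.Process.Kill()
	for i := 0; i < 30; i++ {
		probe := exec.Command("out/minikube-linux-amd64", "-p", "functional-596616",
			"ssh", "findmnt -T /mount-9p | grep 9p")
		if probe.Run() == nil {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("mount at /mount-9p never appeared")
}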

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "229.910391ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "64.367388ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.29s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "295.76559ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "68.227876ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.36s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.47s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 service list -o json
functional_test.go:1490: Took "441.867811ms" to run "out/minikube-linux-amd64 -p functional-596616 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.44s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.86:31911
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.31s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-596616 /tmp/TestFunctionalparallelMountCmdspecific-port379457606/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-596616 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (199.505774ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-596616 /tmp/TestFunctionalparallelMountCmdspecific-port379457606/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-596616 ssh "sudo umount -f /mount-9p": exit status 1 (246.875528ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-596616 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-596616 /tmp/TestFunctionalparallelMountCmdspecific-port379457606/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.28s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.86:31911
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.28s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.49s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-596616 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3075892006/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-596616 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3075892006/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-596616 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3075892006/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-596616 ssh "findmnt -T" /mount1: exit status 1 (281.963712ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-596616 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-596616 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3075892006/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-596616 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3075892006/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-596616 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3075892006/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.85s)
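For reference, the cleanup being verified above is the same workflow a user would drive by hand: start several background mounts of one host directory, check them with findmnt from inside the guest, then kill them all at once. A minimal sketch, assuming an arbitrary host directory /tmp/demo (only the profile name, the mount targets, and the --kill flag come from the log above):

  # start three background mounts of the same host directory
  minikube -p functional-596616 mount /tmp/demo:/mount1 &
  minikube -p functional-596616 mount /tmp/demo:/mount2 &
  minikube -p functional-596616 mount /tmp/demo:/mount3 &

  # confirm each target is a live mount point inside the VM
  minikube -p functional-596616 ssh "findmnt -T /mount1"

  # tear down every mount process started for this profile
  minikube -p functional-596616 mount --kill=true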

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)
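The UpdateContextCmd subtests differ only in how many clusters the surrounding kubeconfig mentions; update-context itself just rewrites the profile's kubeconfig entry so it points at the cluster's current API server address and port. A quick way to observe the effect outside the test (a sketch, not taken from the log):

  minikube -p functional-596616 update-context
  kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'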

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-596616 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.29.3
registry.k8s.io/kube-proxy:v1.29.3
registry.k8s.io/kube-controller-manager:v1.29.3
registry.k8s.io/kube-apiserver:v1.29.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-596616
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240202-8f1494ea
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-596616 image ls --format short --alsologtostderr:
I0415 23:54:39.216373   25141 out.go:291] Setting OutFile to fd 1 ...
I0415 23:54:39.216493   25141 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 23:54:39.216504   25141 out.go:304] Setting ErrFile to fd 2...
I0415 23:54:39.216510   25141 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 23:54:39.216777   25141 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-7542/.minikube/bin
I0415 23:54:39.217509   25141 config.go:182] Loaded profile config "functional-596616": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0415 23:54:39.217626   25141 config.go:182] Loaded profile config "functional-596616": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0415 23:54:39.218156   25141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0415 23:54:39.218200   25141 main.go:141] libmachine: Launching plugin server for driver kvm2
I0415 23:54:39.233809   25141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34431
I0415 23:54:39.234350   25141 main.go:141] libmachine: () Calling .GetVersion
I0415 23:54:39.234980   25141 main.go:141] libmachine: Using API Version  1
I0415 23:54:39.235003   25141 main.go:141] libmachine: () Calling .SetConfigRaw
I0415 23:54:39.235374   25141 main.go:141] libmachine: () Calling .GetMachineName
I0415 23:54:39.235541   25141 main.go:141] libmachine: (functional-596616) Calling .GetState
I0415 23:54:39.237448   25141 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0415 23:54:39.237474   25141 main.go:141] libmachine: Launching plugin server for driver kvm2
I0415 23:54:39.258244   25141 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45075
I0415 23:54:39.258597   25141 main.go:141] libmachine: () Calling .GetVersion
I0415 23:54:39.259398   25141 main.go:141] libmachine: Using API Version  1
I0415 23:54:39.259443   25141 main.go:141] libmachine: () Calling .SetConfigRaw
I0415 23:54:39.259851   25141 main.go:141] libmachine: () Calling .GetMachineName
I0415 23:54:39.260058   25141 main.go:141] libmachine: (functional-596616) Calling .DriverName
I0415 23:54:39.260300   25141 ssh_runner.go:195] Run: systemctl --version
I0415 23:54:39.260332   25141 main.go:141] libmachine: (functional-596616) Calling .GetSSHHostname
I0415 23:54:39.263733   25141 main.go:141] libmachine: (functional-596616) DBG | domain functional-596616 has defined MAC address 52:54:00:2d:fc:0d in network mk-functional-596616
I0415 23:54:39.264231   25141 main.go:141] libmachine: (functional-596616) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:fc:0d", ip: ""} in network mk-functional-596616: {Iface:virbr1 ExpiryTime:2024-04-16 00:49:23 +0000 UTC Type:0 Mac:52:54:00:2d:fc:0d Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:functional-596616 Clientid:01:52:54:00:2d:fc:0d}
I0415 23:54:39.264314   25141 main.go:141] libmachine: (functional-596616) DBG | domain functional-596616 has defined IP address 192.168.39.86 and MAC address 52:54:00:2d:fc:0d in network mk-functional-596616
I0415 23:54:39.264532   25141 main.go:141] libmachine: (functional-596616) Calling .GetSSHPort
I0415 23:54:39.264685   25141 main.go:141] libmachine: (functional-596616) Calling .GetSSHKeyPath
I0415 23:54:39.264820   25141 main.go:141] libmachine: (functional-596616) Calling .GetSSHUsername
I0415 23:54:39.264955   25141 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/functional-596616/id_rsa Username:docker}
I0415 23:54:39.366547   25141 ssh_runner.go:195] Run: sudo crictl images --output json
I0415 23:54:39.442188   25141 main.go:141] libmachine: Making call to close driver server
I0415 23:54:39.442204   25141 main.go:141] libmachine: (functional-596616) Calling .Close
I0415 23:54:39.442524   25141 main.go:141] libmachine: Successfully made call to close driver server
I0415 23:54:39.442546   25141 main.go:141] libmachine: (functional-596616) DBG | Closing plugin on server side
I0415 23:54:39.442579   25141 main.go:141] libmachine: Making call to close connection to plugin binary
I0415 23:54:39.442600   25141 main.go:141] libmachine: Making call to close driver server
I0415 23:54:39.442613   25141 main.go:141] libmachine: (functional-596616) Calling .Close
I0415 23:54:39.442896   25141 main.go:141] libmachine: (functional-596616) DBG | Closing plugin on server side
I0415 23:54:39.442937   25141 main.go:141] libmachine: Successfully made call to close driver server
I0415 23:54:39.442952   25141 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.33s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-596616 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-proxy              | v1.29.3            | a1d263b5dc5b0 | 83.6MB |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| docker.io/kindest/kindnetd              | v20240202-8f1494ea | 4950bb10b3f87 | 65.3MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/kube-controller-manager | v1.29.3            | 6052a25da3f97 | 123MB  |
| registry.k8s.io/kube-scheduler          | v1.29.3            | 8c390d98f50c0 | 60.7MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/minikube-local-cache-test     | functional-596616  | 3aa305736b661 | 3.33kB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| docker.io/library/nginx                 | latest             | c613f16b66424 | 191MB  |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/kube-apiserver          | v1.29.3            | 39f995c9f1996 | 129MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-596616 image ls --format table --alsologtostderr:
I0415 23:54:39.504882   25200 out.go:291] Setting OutFile to fd 1 ...
I0415 23:54:39.505365   25200 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 23:54:39.505405   25200 out.go:304] Setting ErrFile to fd 2...
I0415 23:54:39.505422   25200 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 23:54:39.505855   25200 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-7542/.minikube/bin
I0415 23:54:39.507089   25200 config.go:182] Loaded profile config "functional-596616": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0415 23:54:39.507246   25200 config.go:182] Loaded profile config "functional-596616": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0415 23:54:39.507656   25200 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0415 23:54:39.507705   25200 main.go:141] libmachine: Launching plugin server for driver kvm2
I0415 23:54:39.527153   25200 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35975
I0415 23:54:39.527539   25200 main.go:141] libmachine: () Calling .GetVersion
I0415 23:54:39.528021   25200 main.go:141] libmachine: Using API Version  1
I0415 23:54:39.528044   25200 main.go:141] libmachine: () Calling .SetConfigRaw
I0415 23:54:39.528458   25200 main.go:141] libmachine: () Calling .GetMachineName
I0415 23:54:39.528618   25200 main.go:141] libmachine: (functional-596616) Calling .GetState
I0415 23:54:39.530415   25200 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0415 23:54:39.530449   25200 main.go:141] libmachine: Launching plugin server for driver kvm2
I0415 23:54:39.544434   25200 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40177
I0415 23:54:39.544758   25200 main.go:141] libmachine: () Calling .GetVersion
I0415 23:54:39.545313   25200 main.go:141] libmachine: Using API Version  1
I0415 23:54:39.545340   25200 main.go:141] libmachine: () Calling .SetConfigRaw
I0415 23:54:39.545698   25200 main.go:141] libmachine: () Calling .GetMachineName
I0415 23:54:39.545910   25200 main.go:141] libmachine: (functional-596616) Calling .DriverName
I0415 23:54:39.546143   25200 ssh_runner.go:195] Run: systemctl --version
I0415 23:54:39.546171   25200 main.go:141] libmachine: (functional-596616) Calling .GetSSHHostname
I0415 23:54:39.548458   25200 main.go:141] libmachine: (functional-596616) DBG | domain functional-596616 has defined MAC address 52:54:00:2d:fc:0d in network mk-functional-596616
I0415 23:54:39.548881   25200 main.go:141] libmachine: (functional-596616) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:fc:0d", ip: ""} in network mk-functional-596616: {Iface:virbr1 ExpiryTime:2024-04-16 00:49:23 +0000 UTC Type:0 Mac:52:54:00:2d:fc:0d Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:functional-596616 Clientid:01:52:54:00:2d:fc:0d}
I0415 23:54:39.548909   25200 main.go:141] libmachine: (functional-596616) DBG | domain functional-596616 has defined IP address 192.168.39.86 and MAC address 52:54:00:2d:fc:0d in network mk-functional-596616
I0415 23:54:39.549094   25200 main.go:141] libmachine: (functional-596616) Calling .GetSSHPort
I0415 23:54:39.549251   25200 main.go:141] libmachine: (functional-596616) Calling .GetSSHKeyPath
I0415 23:54:39.549469   25200 main.go:141] libmachine: (functional-596616) Calling .GetSSHUsername
I0415 23:54:39.549607   25200 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/functional-596616/id_rsa Username:docker}
I0415 23:54:39.654130   25200 ssh_runner.go:195] Run: sudo crictl images --output json
I0415 23:54:39.727595   25200 main.go:141] libmachine: Making call to close driver server
I0415 23:54:39.727622   25200 main.go:141] libmachine: (functional-596616) Calling .Close
I0415 23:54:39.727853   25200 main.go:141] libmachine: Successfully made call to close driver server
I0415 23:54:39.727871   25200 main.go:141] libmachine: Making call to close connection to plugin binary
I0415 23:54:39.727889   25200 main.go:141] libmachine: Making call to close driver server
I0415 23:54:39.727890   25200 main.go:141] libmachine: (functional-596616) DBG | Closing plugin on server side
I0415 23:54:39.727901   25200 main.go:141] libmachine: (functional-596616) Calling .Close
I0415 23:54:39.728210   25200 main.go:141] libmachine: Successfully made call to close driver server
I0415 23:54:39.728229   25200 main.go:141] libmachine: Making call to close connection to plugin binary
I0415 23:54:39.728275   25200 main.go:141] libmachine: (functional-596616) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.33s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.37s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-596616 image ls --format json --alsologtostderr:
[{"id":"8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a","registry.k8s.io/kube-scheduler@sha256:c6dae5df00e42512d2baa3e1e74efbf08bddd595e930123f6021f715198b8e88"],"repoTags":["registry.k8s.io/kube-scheduler:v1.29.3"],"size":"60724018"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"3aa305736b661c2274e91dae9254954e12475fbb2981292941cc9f2f5a157db6","repoDigests":["localhost/minikube-local-cache-test@sha256:4244940904f077aa1f87fbaff1a3e6f82ae7a95665093c97a4a3bb344515e6be"],"repoTags":["localhost/minikube-local-cache-test:functional-596616"],"size":"3328"},{"id":"a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392","repoDigests":["registry.k8s.io/kube-proxy@s
ha256:d137dd922e588abc7b0e2f20afd338065e9abccdecfe705abfb19f588fbac11d","registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863"],"repoTags":["registry.k8s.io/kube-proxy:v1.29.3"],"size":"83634073"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/
k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"si
ze":"61245718"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533","repoDigests":["registry.k8s.io/kube-apiserver@sha256:21be2c03b528e582a63a41d8270f469ad1b24e2f6ba8238386768fc981ca1322","registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.29.3"],"size":"128508878"},{"id":"4950bb10b3f87e8d4a8f772a0d893
4625cac4ccfa3675fea34cad0dab83fd5a5","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988","docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"65291810"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"c613f16b664244b150d1c3644cbc387ec1fe8376377f9419992280eb4a82ff3b","repoDigests":["docker.io/library/nginx@sha256:9ff236ed47fe39cf1f0acf349d0e5137f8b8a6fd0b46e5117a401010e56222e1","docker.io/library/nginx@sha256:cd64407576751d9b9ba4924f758d3d39fe76a6e142c32169625b60934c95f057"],"repoTags":["docker.io/library/nginx:latest"],"size":"19
0874053"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:495e03d609009733264502138f33ab4ebff55e4ccc34b51fce1dc48eba5aa606","registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.29.3"],"size":"123142962"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b
283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-596616 image ls --format json --alsologtostderr:
I0415 23:54:39.470367   25188 out.go:291] Setting OutFile to fd 1 ...
I0415 23:54:39.470492   25188 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 23:54:39.470502   25188 out.go:304] Setting ErrFile to fd 2...
I0415 23:54:39.470509   25188 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 23:54:39.470721   25188 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-7542/.minikube/bin
I0415 23:54:39.471502   25188 config.go:182] Loaded profile config "functional-596616": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0415 23:54:39.471674   25188 config.go:182] Loaded profile config "functional-596616": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0415 23:54:39.472246   25188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0415 23:54:39.472303   25188 main.go:141] libmachine: Launching plugin server for driver kvm2
I0415 23:54:39.491571   25188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38133
I0415 23:54:39.492057   25188 main.go:141] libmachine: () Calling .GetVersion
I0415 23:54:39.492613   25188 main.go:141] libmachine: Using API Version  1
I0415 23:54:39.492666   25188 main.go:141] libmachine: () Calling .SetConfigRaw
I0415 23:54:39.493017   25188 main.go:141] libmachine: () Calling .GetMachineName
I0415 23:54:39.493217   25188 main.go:141] libmachine: (functional-596616) Calling .GetState
I0415 23:54:39.495082   25188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0415 23:54:39.495125   25188 main.go:141] libmachine: Launching plugin server for driver kvm2
I0415 23:54:39.509976   25188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41335
I0415 23:54:39.510347   25188 main.go:141] libmachine: () Calling .GetVersion
I0415 23:54:39.510831   25188 main.go:141] libmachine: Using API Version  1
I0415 23:54:39.510861   25188 main.go:141] libmachine: () Calling .SetConfigRaw
I0415 23:54:39.511196   25188 main.go:141] libmachine: () Calling .GetMachineName
I0415 23:54:39.511374   25188 main.go:141] libmachine: (functional-596616) Calling .DriverName
I0415 23:54:39.511556   25188 ssh_runner.go:195] Run: systemctl --version
I0415 23:54:39.511587   25188 main.go:141] libmachine: (functional-596616) Calling .GetSSHHostname
I0415 23:54:39.514140   25188 main.go:141] libmachine: (functional-596616) DBG | domain functional-596616 has defined MAC address 52:54:00:2d:fc:0d in network mk-functional-596616
I0415 23:54:39.514609   25188 main.go:141] libmachine: (functional-596616) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:fc:0d", ip: ""} in network mk-functional-596616: {Iface:virbr1 ExpiryTime:2024-04-16 00:49:23 +0000 UTC Type:0 Mac:52:54:00:2d:fc:0d Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:functional-596616 Clientid:01:52:54:00:2d:fc:0d}
I0415 23:54:39.514630   25188 main.go:141] libmachine: (functional-596616) DBG | domain functional-596616 has defined IP address 192.168.39.86 and MAC address 52:54:00:2d:fc:0d in network mk-functional-596616
I0415 23:54:39.514766   25188 main.go:141] libmachine: (functional-596616) Calling .GetSSHPort
I0415 23:54:39.514913   25188 main.go:141] libmachine: (functional-596616) Calling .GetSSHKeyPath
I0415 23:54:39.515033   25188 main.go:141] libmachine: (functional-596616) Calling .GetSSHUsername
I0415 23:54:39.515118   25188 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/functional-596616/id_rsa Username:docker}
I0415 23:54:39.621243   25188 ssh_runner.go:195] Run: sudo crictl images --output json
I0415 23:54:39.683379   25188 main.go:141] libmachine: Making call to close driver server
I0415 23:54:39.683396   25188 main.go:141] libmachine: (functional-596616) Calling .Close
I0415 23:54:39.683654   25188 main.go:141] libmachine: Successfully made call to close driver server
I0415 23:54:39.683670   25188 main.go:141] libmachine: Making call to close connection to plugin binary
I0415 23:54:39.683682   25188 main.go:141] libmachine: Making call to close driver server
I0415 23:54:39.683688   25188 main.go:141] libmachine: (functional-596616) Calling .Close
I0415 23:54:39.683687   25188 main.go:141] libmachine: (functional-596616) DBG | Closing plugin on server side
I0415 23:54:39.683876   25188 main.go:141] libmachine: Successfully made call to close driver server
I0415 23:54:39.683892   25188 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.37s)
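Of the image ls formats exercised in this group, the JSON output above is the one intended for post-processing; for example, the tagged image names can be extracted with jq (a sketch assuming jq is installed on the host; the profile name comes from the log):

  minikube -p functional-596616 image ls --format json | jq -r '.[].repoTags[]'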

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-596616 image ls --format yaml --alsologtostderr:
- id: c613f16b664244b150d1c3644cbc387ec1fe8376377f9419992280eb4a82ff3b
repoDigests:
- docker.io/library/nginx@sha256:9ff236ed47fe39cf1f0acf349d0e5137f8b8a6fd0b46e5117a401010e56222e1
- docker.io/library/nginx@sha256:cd64407576751d9b9ba4924f758d3d39fe76a6e142c32169625b60934c95f057
repoTags:
- docker.io/library/nginx:latest
size: "190874053"
- id: 3aa305736b661c2274e91dae9254954e12475fbb2981292941cc9f2f5a157db6
repoDigests:
- localhost/minikube-local-cache-test@sha256:4244940904f077aa1f87fbaff1a3e6f82ae7a95665093c97a4a3bb344515e6be
repoTags:
- localhost/minikube-local-cache-test:functional-596616
size: "3328"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392
repoDigests:
- registry.k8s.io/kube-proxy@sha256:d137dd922e588abc7b0e2f20afd338065e9abccdecfe705abfb19f588fbac11d
- registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863
repoTags:
- registry.k8s.io/kube-proxy:v1.29.3
size: "83634073"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
- docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "65291810"
- id: 39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:21be2c03b528e582a63a41d8270f469ad1b24e2f6ba8238386768fc981ca1322
- registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c
repoTags:
- registry.k8s.io/kube-apiserver:v1.29.3
size: "128508878"
- id: 8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a
- registry.k8s.io/kube-scheduler@sha256:c6dae5df00e42512d2baa3e1e74efbf08bddd595e930123f6021f715198b8e88
repoTags:
- registry.k8s.io/kube-scheduler:v1.29.3
size: "60724018"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:495e03d609009733264502138f33ab4ebff55e4ccc34b51fce1dc48eba5aa606
- registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104
repoTags:
- registry.k8s.io/kube-controller-manager:v1.29.3
size: "123142962"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-596616 image ls --format yaml --alsologtostderr:
I0415 23:54:39.221741   25140 out.go:291] Setting OutFile to fd 1 ...
I0415 23:54:39.222089   25140 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 23:54:39.222102   25140 out.go:304] Setting ErrFile to fd 2...
I0415 23:54:39.222108   25140 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 23:54:39.222484   25140 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-7542/.minikube/bin
I0415 23:54:39.223281   25140 config.go:182] Loaded profile config "functional-596616": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0415 23:54:39.223431   25140 config.go:182] Loaded profile config "functional-596616": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0415 23:54:39.223959   25140 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0415 23:54:39.224020   25140 main.go:141] libmachine: Launching plugin server for driver kvm2
I0415 23:54:39.239575   25140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45819
I0415 23:54:39.240073   25140 main.go:141] libmachine: () Calling .GetVersion
I0415 23:54:39.240627   25140 main.go:141] libmachine: Using API Version  1
I0415 23:54:39.240653   25140 main.go:141] libmachine: () Calling .SetConfigRaw
I0415 23:54:39.240992   25140 main.go:141] libmachine: () Calling .GetMachineName
I0415 23:54:39.241190   25140 main.go:141] libmachine: (functional-596616) Calling .GetState
I0415 23:54:39.243120   25140 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0415 23:54:39.243151   25140 main.go:141] libmachine: Launching plugin server for driver kvm2
I0415 23:54:39.256735   25140 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36765
I0415 23:54:39.257130   25140 main.go:141] libmachine: () Calling .GetVersion
I0415 23:54:39.259079   25140 main.go:141] libmachine: Using API Version  1
I0415 23:54:39.259100   25140 main.go:141] libmachine: () Calling .SetConfigRaw
I0415 23:54:39.259437   25140 main.go:141] libmachine: () Calling .GetMachineName
I0415 23:54:39.259593   25140 main.go:141] libmachine: (functional-596616) Calling .DriverName
I0415 23:54:39.259827   25140 ssh_runner.go:195] Run: systemctl --version
I0415 23:54:39.259855   25140 main.go:141] libmachine: (functional-596616) Calling .GetSSHHostname
I0415 23:54:39.263511   25140 main.go:141] libmachine: (functional-596616) DBG | domain functional-596616 has defined MAC address 52:54:00:2d:fc:0d in network mk-functional-596616
I0415 23:54:39.263990   25140 main.go:141] libmachine: (functional-596616) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:fc:0d", ip: ""} in network mk-functional-596616: {Iface:virbr1 ExpiryTime:2024-04-16 00:49:23 +0000 UTC Type:0 Mac:52:54:00:2d:fc:0d Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:functional-596616 Clientid:01:52:54:00:2d:fc:0d}
I0415 23:54:39.264019   25140 main.go:141] libmachine: (functional-596616) DBG | domain functional-596616 has defined IP address 192.168.39.86 and MAC address 52:54:00:2d:fc:0d in network mk-functional-596616
I0415 23:54:39.264217   25140 main.go:141] libmachine: (functional-596616) Calling .GetSSHPort
I0415 23:54:39.264406   25140 main.go:141] libmachine: (functional-596616) Calling .GetSSHKeyPath
I0415 23:54:39.264718   25140 main.go:141] libmachine: (functional-596616) Calling .GetSSHUsername
I0415 23:54:39.264861   25140 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/functional-596616/id_rsa Username:docker}
I0415 23:54:39.347951   25140 ssh_runner.go:195] Run: sudo crictl images --output json
I0415 23:54:39.408829   25140 main.go:141] libmachine: Making call to close driver server
I0415 23:54:39.408851   25140 main.go:141] libmachine: (functional-596616) Calling .Close
I0415 23:54:39.409168   25140 main.go:141] libmachine: Successfully made call to close driver server
I0415 23:54:39.409189   25140 main.go:141] libmachine: Making call to close connection to plugin binary
I0415 23:54:39.409223   25140 main.go:141] libmachine: Making call to close driver server
I0415 23:54:39.409260   25140 main.go:141] libmachine: (functional-596616) Calling .Close
I0415 23:54:39.409523   25140 main.go:141] libmachine: (functional-596616) DBG | Closing plugin on server side
I0415 23:54:39.409595   25140 main.go:141] libmachine: Successfully made call to close driver server
I0415 23:54:39.409627   25140 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.37s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-596616 ssh pgrep buildkitd: exit status 1 (201.038027ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 image build -t localhost/my-image:functional-596616 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-596616 image build -t localhost/my-image:functional-596616 testdata/build --alsologtostderr: (2.940538295s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-596616 image build -t localhost/my-image:functional-596616 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 0e11cf3275c
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-596616
--> 506a3ff0155
Successfully tagged localhost/my-image:functional-596616
506a3ff01556118e473beb9e20da42950dbfeb80f0acd0a82fed6cd4431063da
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-596616 image build -t localhost/my-image:functional-596616 testdata/build --alsologtostderr:
I0415 23:54:40.034875   25265 out.go:291] Setting OutFile to fd 1 ...
I0415 23:54:40.035028   25265 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 23:54:40.035042   25265 out.go:304] Setting ErrFile to fd 2...
I0415 23:54:40.035047   25265 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0415 23:54:40.035296   25265 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-7542/.minikube/bin
I0415 23:54:40.035837   25265 config.go:182] Loaded profile config "functional-596616": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0415 23:54:40.036352   25265 config.go:182] Loaded profile config "functional-596616": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
I0415 23:54:40.036780   25265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0415 23:54:40.036850   25265 main.go:141] libmachine: Launching plugin server for driver kvm2
I0415 23:54:40.050736   25265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33811
I0415 23:54:40.051150   25265 main.go:141] libmachine: () Calling .GetVersion
I0415 23:54:40.051726   25265 main.go:141] libmachine: Using API Version  1
I0415 23:54:40.051752   25265 main.go:141] libmachine: () Calling .SetConfigRaw
I0415 23:54:40.052144   25265 main.go:141] libmachine: () Calling .GetMachineName
I0415 23:54:40.052333   25265 main.go:141] libmachine: (functional-596616) Calling .GetState
I0415 23:54:40.054050   25265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0415 23:54:40.054098   25265 main.go:141] libmachine: Launching plugin server for driver kvm2
I0415 23:54:40.067939   25265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41841
I0415 23:54:40.068393   25265 main.go:141] libmachine: () Calling .GetVersion
I0415 23:54:40.068800   25265 main.go:141] libmachine: Using API Version  1
I0415 23:54:40.068820   25265 main.go:141] libmachine: () Calling .SetConfigRaw
I0415 23:54:40.069170   25265 main.go:141] libmachine: () Calling .GetMachineName
I0415 23:54:40.069367   25265 main.go:141] libmachine: (functional-596616) Calling .DriverName
I0415 23:54:40.069547   25265 ssh_runner.go:195] Run: systemctl --version
I0415 23:54:40.069565   25265 main.go:141] libmachine: (functional-596616) Calling .GetSSHHostname
I0415 23:54:40.072110   25265 main.go:141] libmachine: (functional-596616) DBG | domain functional-596616 has defined MAC address 52:54:00:2d:fc:0d in network mk-functional-596616
I0415 23:54:40.072482   25265 main.go:141] libmachine: (functional-596616) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2d:fc:0d", ip: ""} in network mk-functional-596616: {Iface:virbr1 ExpiryTime:2024-04-16 00:49:23 +0000 UTC Type:0 Mac:52:54:00:2d:fc:0d Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:functional-596616 Clientid:01:52:54:00:2d:fc:0d}
I0415 23:54:40.072507   25265 main.go:141] libmachine: (functional-596616) DBG | domain functional-596616 has defined IP address 192.168.39.86 and MAC address 52:54:00:2d:fc:0d in network mk-functional-596616
I0415 23:54:40.072635   25265 main.go:141] libmachine: (functional-596616) Calling .GetSSHPort
I0415 23:54:40.072775   25265 main.go:141] libmachine: (functional-596616) Calling .GetSSHKeyPath
I0415 23:54:40.072938   25265 main.go:141] libmachine: (functional-596616) Calling .GetSSHUsername
I0415 23:54:40.073087   25265 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/functional-596616/id_rsa Username:docker}
I0415 23:54:40.163778   25265 build_images.go:161] Building image from path: /tmp/build.135596576.tar
I0415 23:54:40.163882   25265 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0415 23:54:40.175043   25265 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.135596576.tar
I0415 23:54:40.180306   25265 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.135596576.tar: stat -c "%s %y" /var/lib/minikube/build/build.135596576.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.135596576.tar': No such file or directory
I0415 23:54:40.180335   25265 ssh_runner.go:362] scp /tmp/build.135596576.tar --> /var/lib/minikube/build/build.135596576.tar (3072 bytes)
I0415 23:54:40.208895   25265 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.135596576
I0415 23:54:40.222415   25265 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.135596576 -xf /var/lib/minikube/build/build.135596576.tar
I0415 23:54:40.235315   25265 crio.go:315] Building image: /var/lib/minikube/build/build.135596576
I0415 23:54:40.235391   25265 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-596616 /var/lib/minikube/build/build.135596576 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0415 23:54:42.894526   25265 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-596616 /var/lib/minikube/build/build.135596576 --cgroup-manager=cgroupfs: (2.659111545s)
I0415 23:54:42.894580   25265 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.135596576
I0415 23:54:42.907918   25265 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.135596576.tar
I0415 23:54:42.917722   25265 build_images.go:217] Built localhost/my-image:functional-596616 from /tmp/build.135596576.tar
I0415 23:54:42.917754   25265 build_images.go:133] succeeded building to: functional-596616
I0415 23:54:42.917759   25265 build_images.go:134] failed building to: 
I0415 23:54:42.917808   25265 main.go:141] libmachine: Making call to close driver server
I0415 23:54:42.917825   25265 main.go:141] libmachine: (functional-596616) Calling .Close
I0415 23:54:42.918058   25265 main.go:141] libmachine: Successfully made call to close driver server
I0415 23:54:42.918073   25265 main.go:141] libmachine: Making call to close connection to plugin binary
I0415 23:54:42.918080   25265 main.go:141] libmachine: Making call to close driver server
I0415 23:54:42.918095   25265 main.go:141] libmachine: (functional-596616) Calling .Close
I0415 23:54:42.918152   25265 main.go:141] libmachine: (functional-596616) DBG | Closing plugin on server side
I0415 23:54:42.918310   25265 main.go:141] libmachine: Successfully made call to close driver server
I0415 23:54:42.918329   25265 main.go:141] libmachine: Making call to close connection to plugin binary
I0415 23:54:42.918349   25265 main.go:141] libmachine: (functional-596616) DBG | Closing plugin on server side
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.37s)
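The three STEP lines in the build log correspond to a minimal Dockerfile under testdata/build; reconstructed from that output (inferred from the log, not quoted from the repository) it is roughly:

  FROM gcr.io/k8s-minikube/busybox
  RUN true
  ADD content.txt /

As the ssh_runner lines show, minikube packs the build context into a tarball, copies it into the guest, and runs podman build against it with --cgroup-manager=cgroupfs before tagging the result as localhost/my-image:functional-596616.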

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (4.68s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (4.66365348s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-596616
--- PASS: TestFunctional/parallel/ImageCommands/Setup (4.68s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.44s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 image load --daemon gcr.io/google-containers/addon-resizer:functional-596616 --alsologtostderr
2024/04/15 23:54:22 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-596616 image load --daemon gcr.io/google-containers/addon-resizer:functional-596616 --alsologtostderr: (5.19358635s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.44s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.17s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 image load --daemon gcr.io/google-containers/addon-resizer:functional-596616 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-596616 image load --daemon gcr.io/google-containers/addon-resizer:functional-596616 --alsologtostderr: (2.536869201s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.17s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.64s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.835790129s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-596616
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 image load --daemon gcr.io/google-containers/addon-resizer:functional-596616 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-596616 image load --daemon gcr.io/google-containers/addon-resizer:functional-596616 --alsologtostderr: (6.237408157s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.64s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 image rm gcr.io/google-containers/addon-resizer:functional-596616 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.3s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-596616
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-596616 image save --daemon gcr.io/google-containers/addon-resizer:functional-596616 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-596616
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.30s)

TestFunctional/delete_addon-resizer_images (0.06s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-596616
--- PASS: TestFunctional/delete_addon-resizer_images (0.06s)

TestFunctional/delete_my-image_image (0.01s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-596616
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-596616
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (208.19s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-694782 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0415 23:55:04.015406   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/client.crt: no such file or directory
E0415 23:57:20.169281   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/client.crt: no such file or directory
E0415 23:57:47.855596   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-694782 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m27.519584334s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (208.19s)

TestMultiControlPlane/serial/DeployApp (7.26s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-694782 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-694782 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-694782 -- rollout status deployment/busybox: (4.864847129s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-694782 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-694782 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-694782 -- exec busybox-7fdf7869d9-bwtdm -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-694782 -- exec busybox-7fdf7869d9-mxz6n -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-694782 -- exec busybox-7fdf7869d9-vsvrq -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-694782 -- exec busybox-7fdf7869d9-bwtdm -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-694782 -- exec busybox-7fdf7869d9-mxz6n -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-694782 -- exec busybox-7fdf7869d9-vsvrq -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-694782 -- exec busybox-7fdf7869d9-bwtdm -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-694782 -- exec busybox-7fdf7869d9-mxz6n -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-694782 -- exec busybox-7fdf7869d9-vsvrq -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.26s)

TestMultiControlPlane/serial/PingHostFromPods (1.31s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-694782 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-694782 -- exec busybox-7fdf7869d9-bwtdm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-694782 -- exec busybox-7fdf7869d9-bwtdm -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-694782 -- exec busybox-7fdf7869d9-mxz6n -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-694782 -- exec busybox-7fdf7869d9-mxz6n -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-694782 -- exec busybox-7fdf7869d9-vsvrq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-694782 -- exec busybox-7fdf7869d9-vsvrq -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.31s)

TestMultiControlPlane/serial/AddWorkerNode (45.61s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-694782 -v=7 --alsologtostderr
E0415 23:58:58.679797   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/functional-596616/client.crt: no such file or directory
E0415 23:58:58.685124   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/functional-596616/client.crt: no such file or directory
E0415 23:58:58.695415   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/functional-596616/client.crt: no such file or directory
E0415 23:58:58.715772   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/functional-596616/client.crt: no such file or directory
E0415 23:58:58.756846   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/functional-596616/client.crt: no such file or directory
E0415 23:58:58.837196   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/functional-596616/client.crt: no such file or directory
E0415 23:58:58.997625   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/functional-596616/client.crt: no such file or directory
E0415 23:58:59.318207   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/functional-596616/client.crt: no such file or directory
E0415 23:58:59.959274   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/functional-596616/client.crt: no such file or directory
E0415 23:59:01.239418   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/functional-596616/client.crt: no such file or directory
E0415 23:59:03.799759   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/functional-596616/client.crt: no such file or directory
E0415 23:59:08.920180   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/functional-596616/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-694782 -v=7 --alsologtostderr: (44.727330092s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (45.61s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-694782 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.55s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.55s)

TestMultiControlPlane/serial/CopyFile (13.35s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 cp testdata/cp-test.txt ha-694782:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 ssh -n ha-694782 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 cp ha-694782:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4178900617/001/cp-test_ha-694782.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 ssh -n ha-694782 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 cp ha-694782:/home/docker/cp-test.txt ha-694782-m02:/home/docker/cp-test_ha-694782_ha-694782-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 ssh -n ha-694782 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 ssh -n ha-694782-m02 "sudo cat /home/docker/cp-test_ha-694782_ha-694782-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 cp ha-694782:/home/docker/cp-test.txt ha-694782-m03:/home/docker/cp-test_ha-694782_ha-694782-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 ssh -n ha-694782 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 ssh -n ha-694782-m03 "sudo cat /home/docker/cp-test_ha-694782_ha-694782-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 cp ha-694782:/home/docker/cp-test.txt ha-694782-m04:/home/docker/cp-test_ha-694782_ha-694782-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 ssh -n ha-694782 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 ssh -n ha-694782-m04 "sudo cat /home/docker/cp-test_ha-694782_ha-694782-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 cp testdata/cp-test.txt ha-694782-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 ssh -n ha-694782-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 cp ha-694782-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4178900617/001/cp-test_ha-694782-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 ssh -n ha-694782-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 cp ha-694782-m02:/home/docker/cp-test.txt ha-694782:/home/docker/cp-test_ha-694782-m02_ha-694782.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 ssh -n ha-694782-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 ssh -n ha-694782 "sudo cat /home/docker/cp-test_ha-694782-m02_ha-694782.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 cp ha-694782-m02:/home/docker/cp-test.txt ha-694782-m03:/home/docker/cp-test_ha-694782-m02_ha-694782-m03.txt
E0415 23:59:19.161142   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/functional-596616/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 ssh -n ha-694782-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 ssh -n ha-694782-m03 "sudo cat /home/docker/cp-test_ha-694782-m02_ha-694782-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 cp ha-694782-m02:/home/docker/cp-test.txt ha-694782-m04:/home/docker/cp-test_ha-694782-m02_ha-694782-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 ssh -n ha-694782-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 ssh -n ha-694782-m04 "sudo cat /home/docker/cp-test_ha-694782-m02_ha-694782-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 cp testdata/cp-test.txt ha-694782-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 ssh -n ha-694782-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 cp ha-694782-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4178900617/001/cp-test_ha-694782-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 ssh -n ha-694782-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 cp ha-694782-m03:/home/docker/cp-test.txt ha-694782:/home/docker/cp-test_ha-694782-m03_ha-694782.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 ssh -n ha-694782-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 ssh -n ha-694782 "sudo cat /home/docker/cp-test_ha-694782-m03_ha-694782.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 cp ha-694782-m03:/home/docker/cp-test.txt ha-694782-m02:/home/docker/cp-test_ha-694782-m03_ha-694782-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 ssh -n ha-694782-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 ssh -n ha-694782-m02 "sudo cat /home/docker/cp-test_ha-694782-m03_ha-694782-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 cp ha-694782-m03:/home/docker/cp-test.txt ha-694782-m04:/home/docker/cp-test_ha-694782-m03_ha-694782-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 ssh -n ha-694782-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 ssh -n ha-694782-m04 "sudo cat /home/docker/cp-test_ha-694782-m03_ha-694782-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 cp testdata/cp-test.txt ha-694782-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 ssh -n ha-694782-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 cp ha-694782-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4178900617/001/cp-test_ha-694782-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 ssh -n ha-694782-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 cp ha-694782-m04:/home/docker/cp-test.txt ha-694782:/home/docker/cp-test_ha-694782-m04_ha-694782.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 ssh -n ha-694782-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 ssh -n ha-694782 "sudo cat /home/docker/cp-test_ha-694782-m04_ha-694782.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 cp ha-694782-m04:/home/docker/cp-test.txt ha-694782-m02:/home/docker/cp-test_ha-694782-m04_ha-694782-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 ssh -n ha-694782-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 ssh -n ha-694782-m02 "sudo cat /home/docker/cp-test_ha-694782-m04_ha-694782-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 cp ha-694782-m04:/home/docker/cp-test.txt ha-694782-m03:/home/docker/cp-test_ha-694782-m04_ha-694782-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 ssh -n ha-694782-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 ssh -n ha-694782-m03 "sudo cat /home/docker/cp-test_ha-694782-m04_ha-694782-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.35s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.5s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.498519133s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.50s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.4s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.40s)

TestMultiControlPlane/serial/DeleteSecondaryNode (17.3s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-694782 node delete m03 -v=7 --alsologtostderr: (16.538665622s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.30s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.38s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.38s)

TestMultiControlPlane/serial/RestartCluster (383.14s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-694782 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0416 00:12:20.169299   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/client.crt: no such file or directory
E0416 00:13:58.679777   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/functional-596616/client.crt: no such file or directory
E0416 00:15:21.724661   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/functional-596616/client.crt: no such file or directory
E0416 00:17:20.168996   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-694782 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (6m22.340974375s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (383.14s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.4s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.40s)

TestMultiControlPlane/serial/AddSecondaryNode (72.45s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-694782 --control-plane -v=7 --alsologtostderr
E0416 00:18:58.679920   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/functional-596616/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-694782 --control-plane -v=7 --alsologtostderr: (1m11.5596978s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-694782 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (72.45s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.56s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.56s)

TestJSONOutput/start/Command (55.94s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-300310 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-300310 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (55.938982239s)
--- PASS: TestJSONOutput/start/Command (55.94s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.77s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-300310 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.77s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.68s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-300310 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.68s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (9.4s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-300310 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-300310 --output=json --user=testUser: (9.399792829s)
--- PASS: TestJSONOutput/stop/Command (9.40s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-081584 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-081584 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (74.390154ms)

-- stdout --
	{"specversion":"1.0","id":"4867b07a-5a9c-4a73-beab-2d81f0e7dbae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-081584] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0e74dc33-9343-4bfe-8c4e-34ebd5f2c039","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18647"}}
	{"specversion":"1.0","id":"047f853b-5b01-45a4-9c0a-25f74f706c85","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8fc009a7-1f98-4844-abe0-f92ef6937969","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18647-7542/kubeconfig"}}
	{"specversion":"1.0","id":"856ed708-51b4-4f75-8604-a64adb6afc0b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18647-7542/.minikube"}}
	{"specversion":"1.0","id":"943f8bec-7163-4f02-b572-ecdcdf2251a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"9b9d9eaf-7a76-4b6e-8af9-20af7da9a45d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"547b9687-5db3-44d9-befb-963702bebff1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-081584" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-081584
--- PASS: TestErrorJSONOutput (0.21s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (86.25s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-478906 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-478906 --driver=kvm2  --container-runtime=crio: (41.424516552s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-481848 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-481848 --driver=kvm2  --container-runtime=crio: (42.105112703s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-478906
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-481848
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-481848" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-481848
helpers_test.go:175: Cleaning up "first-478906" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-478906
--- PASS: TestMinikubeProfile (86.25s)

TestMountStart/serial/StartWithMountFirst (28.09s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-683111 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0416 00:22:20.169449   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-683111 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.089409394s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.09s)

TestMountStart/serial/VerifyMountFirst (0.39s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-683111 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-683111 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)

TestMountStart/serial/StartWithMountSecond (27.5s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-699861 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-699861 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.49693619s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.50s)

TestMountStart/serial/VerifyMountSecond (0.39s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-699861 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-699861 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.39s)

TestMountStart/serial/DeleteFirst (0.68s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-683111 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.68s)

TestMountStart/serial/VerifyMountPostDelete (0.39s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-699861 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-699861 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

TestMountStart/serial/Stop (1.35s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-699861
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-699861: (1.351870071s)
--- PASS: TestMountStart/serial/Stop (1.35s)

TestMountStart/serial/RestartStopped (23.3s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-699861
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-699861: (22.303592613s)
--- PASS: TestMountStart/serial/RestartStopped (23.30s)

TestMountStart/serial/VerifyMountPostStop (0.38s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-699861 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-699861 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

TestMultiNode/serial/FreshStart2Nodes (131.67s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-414194 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0416 00:23:58.680418   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/functional-596616/client.crt: no such file or directory
E0416 00:25:23.217844   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-414194 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m11.250706568s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-414194 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (131.67s)

TestMultiNode/serial/DeployApp2Nodes (5.34s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-414194 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-414194 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-414194 -- rollout status deployment/busybox: (3.762647906s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-414194 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-414194 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-414194 -- exec busybox-7fdf7869d9-ms6xm -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-414194 -- exec busybox-7fdf7869d9-sgkx5 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-414194 -- exec busybox-7fdf7869d9-ms6xm -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-414194 -- exec busybox-7fdf7869d9-sgkx5 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-414194 -- exec busybox-7fdf7869d9-ms6xm -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-414194 -- exec busybox-7fdf7869d9-sgkx5 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.34s)

TestMultiNode/serial/PingHostFrom2Pods (0.84s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-414194 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-414194 -- exec busybox-7fdf7869d9-ms6xm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-414194 -- exec busybox-7fdf7869d9-ms6xm -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-414194 -- exec busybox-7fdf7869d9-sgkx5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-414194 -- exec busybox-7fdf7869d9-sgkx5 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.84s)

TestMultiNode/serial/AddNode (40.27s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-414194 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-414194 -v 3 --alsologtostderr: (39.704533485s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-414194 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (40.27s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-414194 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.22s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

TestMultiNode/serial/CopyFile (7.35s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-414194 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-414194 cp testdata/cp-test.txt multinode-414194:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-414194 ssh -n multinode-414194 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-414194 cp multinode-414194:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1982584427/001/cp-test_multinode-414194.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-414194 ssh -n multinode-414194 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-414194 cp multinode-414194:/home/docker/cp-test.txt multinode-414194-m02:/home/docker/cp-test_multinode-414194_multinode-414194-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-414194 ssh -n multinode-414194 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-414194 ssh -n multinode-414194-m02 "sudo cat /home/docker/cp-test_multinode-414194_multinode-414194-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-414194 cp multinode-414194:/home/docker/cp-test.txt multinode-414194-m03:/home/docker/cp-test_multinode-414194_multinode-414194-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-414194 ssh -n multinode-414194 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-414194 ssh -n multinode-414194-m03 "sudo cat /home/docker/cp-test_multinode-414194_multinode-414194-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-414194 cp testdata/cp-test.txt multinode-414194-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-414194 ssh -n multinode-414194-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-414194 cp multinode-414194-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1982584427/001/cp-test_multinode-414194-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-414194 ssh -n multinode-414194-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-414194 cp multinode-414194-m02:/home/docker/cp-test.txt multinode-414194:/home/docker/cp-test_multinode-414194-m02_multinode-414194.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-414194 ssh -n multinode-414194-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-414194 ssh -n multinode-414194 "sudo cat /home/docker/cp-test_multinode-414194-m02_multinode-414194.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-414194 cp multinode-414194-m02:/home/docker/cp-test.txt multinode-414194-m03:/home/docker/cp-test_multinode-414194-m02_multinode-414194-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-414194 ssh -n multinode-414194-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-414194 ssh -n multinode-414194-m03 "sudo cat /home/docker/cp-test_multinode-414194-m02_multinode-414194-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-414194 cp testdata/cp-test.txt multinode-414194-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-414194 ssh -n multinode-414194-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-414194 cp multinode-414194-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1982584427/001/cp-test_multinode-414194-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-414194 ssh -n multinode-414194-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-414194 cp multinode-414194-m03:/home/docker/cp-test.txt multinode-414194:/home/docker/cp-test_multinode-414194-m03_multinode-414194.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-414194 ssh -n multinode-414194-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-414194 ssh -n multinode-414194 "sudo cat /home/docker/cp-test_multinode-414194-m03_multinode-414194.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-414194 cp multinode-414194-m03:/home/docker/cp-test.txt multinode-414194-m02:/home/docker/cp-test_multinode-414194-m03_multinode-414194-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-414194 ssh -n multinode-414194-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-414194 ssh -n multinode-414194-m02 "sudo cat /home/docker/cp-test_multinode-414194-m03_multinode-414194-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.35s)

TestMultiNode/serial/StopNode (2.33s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-414194 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-414194 node stop m03: (1.466685726s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-414194 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-414194 status: exit status 7 (427.064349ms)

-- stdout --
	multinode-414194
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-414194-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-414194-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-414194 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-414194 status --alsologtostderr: exit status 7 (439.758674ms)

-- stdout --
	multinode-414194
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-414194-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-414194-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0416 00:26:31.680242   43213 out.go:291] Setting OutFile to fd 1 ...
	I0416 00:26:31.680371   43213 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:26:31.680391   43213 out.go:304] Setting ErrFile to fd 2...
	I0416 00:26:31.680398   43213 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:26:31.680619   43213 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-7542/.minikube/bin
	I0416 00:26:31.680801   43213 out.go:298] Setting JSON to false
	I0416 00:26:31.680835   43213 mustload.go:65] Loading cluster: multinode-414194
	I0416 00:26:31.680926   43213 notify.go:220] Checking for updates...
	I0416 00:26:31.681247   43213 config.go:182] Loaded profile config "multinode-414194": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 00:26:31.681265   43213 status.go:255] checking status of multinode-414194 ...
	I0416 00:26:31.681651   43213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:26:31.681718   43213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:26:31.698073   43213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45985
	I0416 00:26:31.698443   43213 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:26:31.698979   43213 main.go:141] libmachine: Using API Version  1
	I0416 00:26:31.698999   43213 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:26:31.699356   43213 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:26:31.699599   43213 main.go:141] libmachine: (multinode-414194) Calling .GetState
	I0416 00:26:31.701301   43213 status.go:330] multinode-414194 host status = "Running" (err=<nil>)
	I0416 00:26:31.701319   43213 host.go:66] Checking if "multinode-414194" exists ...
	I0416 00:26:31.701615   43213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:26:31.701647   43213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:26:31.716302   43213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41077
	I0416 00:26:31.716684   43213 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:26:31.717195   43213 main.go:141] libmachine: Using API Version  1
	I0416 00:26:31.717227   43213 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:26:31.717571   43213 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:26:31.717799   43213 main.go:141] libmachine: (multinode-414194) Calling .GetIP
	I0416 00:26:31.720252   43213 main.go:141] libmachine: (multinode-414194) DBG | domain multinode-414194 has defined MAC address 52:54:00:13:26:d7 in network mk-multinode-414194
	I0416 00:26:31.720660   43213 main.go:141] libmachine: (multinode-414194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:26:d7", ip: ""} in network mk-multinode-414194: {Iface:virbr1 ExpiryTime:2024-04-16 01:23:38 +0000 UTC Type:0 Mac:52:54:00:13:26:d7 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:multinode-414194 Clientid:01:52:54:00:13:26:d7}
	I0416 00:26:31.720694   43213 main.go:141] libmachine: (multinode-414194) DBG | domain multinode-414194 has defined IP address 192.168.39.140 and MAC address 52:54:00:13:26:d7 in network mk-multinode-414194
	I0416 00:26:31.720822   43213 host.go:66] Checking if "multinode-414194" exists ...
	I0416 00:26:31.721099   43213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:26:31.721137   43213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:26:31.736035   43213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37755
	I0416 00:26:31.736445   43213 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:26:31.736892   43213 main.go:141] libmachine: Using API Version  1
	I0416 00:26:31.736912   43213 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:26:31.737314   43213 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:26:31.737590   43213 main.go:141] libmachine: (multinode-414194) Calling .DriverName
	I0416 00:26:31.737794   43213 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 00:26:31.737812   43213 main.go:141] libmachine: (multinode-414194) Calling .GetSSHHostname
	I0416 00:26:31.740413   43213 main.go:141] libmachine: (multinode-414194) DBG | domain multinode-414194 has defined MAC address 52:54:00:13:26:d7 in network mk-multinode-414194
	I0416 00:26:31.740783   43213 main.go:141] libmachine: (multinode-414194) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:26:d7", ip: ""} in network mk-multinode-414194: {Iface:virbr1 ExpiryTime:2024-04-16 01:23:38 +0000 UTC Type:0 Mac:52:54:00:13:26:d7 Iaid: IPaddr:192.168.39.140 Prefix:24 Hostname:multinode-414194 Clientid:01:52:54:00:13:26:d7}
	I0416 00:26:31.740816   43213 main.go:141] libmachine: (multinode-414194) DBG | domain multinode-414194 has defined IP address 192.168.39.140 and MAC address 52:54:00:13:26:d7 in network mk-multinode-414194
	I0416 00:26:31.740939   43213 main.go:141] libmachine: (multinode-414194) Calling .GetSSHPort
	I0416 00:26:31.741109   43213 main.go:141] libmachine: (multinode-414194) Calling .GetSSHKeyPath
	I0416 00:26:31.741291   43213 main.go:141] libmachine: (multinode-414194) Calling .GetSSHUsername
	I0416 00:26:31.741453   43213 sshutil.go:53] new ssh client: &{IP:192.168.39.140 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/multinode-414194/id_rsa Username:docker}
	I0416 00:26:31.832726   43213 ssh_runner.go:195] Run: systemctl --version
	I0416 00:26:31.839020   43213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 00:26:31.857050   43213 kubeconfig.go:125] found "multinode-414194" server: "https://192.168.39.140:8443"
	I0416 00:26:31.857090   43213 api_server.go:166] Checking apiserver status ...
	I0416 00:26:31.857132   43213 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0416 00:26:31.873811   43213 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1189/cgroup
	W0416 00:26:31.884989   43213 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1189/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0416 00:26:31.885053   43213 ssh_runner.go:195] Run: ls
	I0416 00:26:31.889834   43213 api_server.go:253] Checking apiserver healthz at https://192.168.39.140:8443/healthz ...
	I0416 00:26:31.894064   43213 api_server.go:279] https://192.168.39.140:8443/healthz returned 200:
	ok
	I0416 00:26:31.894088   43213 status.go:422] multinode-414194 apiserver status = Running (err=<nil>)
	I0416 00:26:31.894099   43213 status.go:257] multinode-414194 status: &{Name:multinode-414194 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0416 00:26:31.894123   43213 status.go:255] checking status of multinode-414194-m02 ...
	I0416 00:26:31.894505   43213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:26:31.894556   43213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:26:31.909290   43213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42879
	I0416 00:26:31.909718   43213 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:26:31.910230   43213 main.go:141] libmachine: Using API Version  1
	I0416 00:26:31.910254   43213 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:26:31.910589   43213 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:26:31.910763   43213 main.go:141] libmachine: (multinode-414194-m02) Calling .GetState
	I0416 00:26:31.912233   43213 status.go:330] multinode-414194-m02 host status = "Running" (err=<nil>)
	I0416 00:26:31.912249   43213 host.go:66] Checking if "multinode-414194-m02" exists ...
	I0416 00:26:31.912532   43213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:26:31.912576   43213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:26:31.927265   43213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43703
	I0416 00:26:31.927665   43213 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:26:31.928129   43213 main.go:141] libmachine: Using API Version  1
	I0416 00:26:31.928150   43213 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:26:31.928435   43213 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:26:31.928627   43213 main.go:141] libmachine: (multinode-414194-m02) Calling .GetIP
	I0416 00:26:31.931328   43213 main.go:141] libmachine: (multinode-414194-m02) DBG | domain multinode-414194-m02 has defined MAC address 52:54:00:dc:fe:27 in network mk-multinode-414194
	I0416 00:26:31.931708   43213 main.go:141] libmachine: (multinode-414194-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:fe:27", ip: ""} in network mk-multinode-414194: {Iface:virbr1 ExpiryTime:2024-04-16 01:25:09 +0000 UTC Type:0 Mac:52:54:00:dc:fe:27 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-414194-m02 Clientid:01:52:54:00:dc:fe:27}
	I0416 00:26:31.931736   43213 main.go:141] libmachine: (multinode-414194-m02) DBG | domain multinode-414194-m02 has defined IP address 192.168.39.81 and MAC address 52:54:00:dc:fe:27 in network mk-multinode-414194
	I0416 00:26:31.931861   43213 host.go:66] Checking if "multinode-414194-m02" exists ...
	I0416 00:26:31.932131   43213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:26:31.932162   43213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:26:31.946626   43213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38055
	I0416 00:26:31.947077   43213 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:26:31.947481   43213 main.go:141] libmachine: Using API Version  1
	I0416 00:26:31.947506   43213 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:26:31.947810   43213 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:26:31.947978   43213 main.go:141] libmachine: (multinode-414194-m02) Calling .DriverName
	I0416 00:26:31.948145   43213 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0416 00:26:31.948165   43213 main.go:141] libmachine: (multinode-414194-m02) Calling .GetSSHHostname
	I0416 00:26:31.950714   43213 main.go:141] libmachine: (multinode-414194-m02) DBG | domain multinode-414194-m02 has defined MAC address 52:54:00:dc:fe:27 in network mk-multinode-414194
	I0416 00:26:31.951175   43213 main.go:141] libmachine: (multinode-414194-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:fe:27", ip: ""} in network mk-multinode-414194: {Iface:virbr1 ExpiryTime:2024-04-16 01:25:09 +0000 UTC Type:0 Mac:52:54:00:dc:fe:27 Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-414194-m02 Clientid:01:52:54:00:dc:fe:27}
	I0416 00:26:31.951203   43213 main.go:141] libmachine: (multinode-414194-m02) DBG | domain multinode-414194-m02 has defined IP address 192.168.39.81 and MAC address 52:54:00:dc:fe:27 in network mk-multinode-414194
	I0416 00:26:31.951334   43213 main.go:141] libmachine: (multinode-414194-m02) Calling .GetSSHPort
	I0416 00:26:31.951516   43213 main.go:141] libmachine: (multinode-414194-m02) Calling .GetSSHKeyPath
	I0416 00:26:31.951686   43213 main.go:141] libmachine: (multinode-414194-m02) Calling .GetSSHUsername
	I0416 00:26:31.951852   43213 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18647-7542/.minikube/machines/multinode-414194-m02/id_rsa Username:docker}
	I0416 00:26:32.032816   43213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0416 00:26:32.046673   43213 status.go:257] multinode-414194-m02 status: &{Name:multinode-414194-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0416 00:26:32.046725   43213 status.go:255] checking status of multinode-414194-m03 ...
	I0416 00:26:32.047123   43213 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0416 00:26:32.047169   43213 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0416 00:26:32.062405   43213 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39261
	I0416 00:26:32.062809   43213 main.go:141] libmachine: () Calling .GetVersion
	I0416 00:26:32.063264   43213 main.go:141] libmachine: Using API Version  1
	I0416 00:26:32.063282   43213 main.go:141] libmachine: () Calling .SetConfigRaw
	I0416 00:26:32.063619   43213 main.go:141] libmachine: () Calling .GetMachineName
	I0416 00:26:32.063821   43213 main.go:141] libmachine: (multinode-414194-m03) Calling .GetState
	I0416 00:26:32.065360   43213 status.go:330] multinode-414194-m03 host status = "Stopped" (err=<nil>)
	I0416 00:26:32.065376   43213 status.go:343] host is not running, skipping remaining checks
	I0416 00:26:32.065384   43213 status.go:257] multinode-414194-m03 status: &{Name:multinode-414194-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.33s)
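For reference, the node-stop check above condenses to a short shell sketch; the profile and node names are the ones used in this run, and the comments describe only what is visible in the output above.

    # Stop one worker node in the multi-node profile.
    out/minikube-linux-amd64 -p multinode-414194 node stop m03

    # While a node host is stopped, "status" exits non-zero (exit code 7 in this run).
    out/minikube-linux-amd64 -p multinode-414194 status || echo "a node is reported as Stopped"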

                                                
                                    
TestMultiNode/serial/StartAfterStop (29.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-414194 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-414194 node start m03 -v=7 --alsologtostderr: (29.124829139s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-414194 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (29.75s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-414194 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-414194 node delete m03: (1.940329234s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-414194 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.47s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (175.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-414194 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0416 00:37:20.168967   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-414194 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m54.761120535s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-414194 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (175.29s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (50.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-414194
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-414194-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-414194-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (72.684783ms)

                                                
                                                
-- stdout --
	* [multinode-414194-m02] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18647
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18647-7542/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18647-7542/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-414194-m02' is duplicated with machine name 'multinode-414194-m02' in profile 'multinode-414194'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-414194-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-414194-m03 --driver=kvm2  --container-runtime=crio: (49.142216832s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-414194
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-414194: exit status 80 (227.30395ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-414194 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-414194-m03 already exists in multinode-414194-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-414194-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (50.29s)
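A compact sketch of the profile-name uniqueness check exercised above, using the same profile names as this run: a new profile may not reuse a machine name that already belongs to an existing profile.

    # Rejected (MK_USAGE): the profile name collides with node multinode-414194-m02.
    out/minikube-linux-amd64 start -p multinode-414194-m02 --driver=kvm2 --container-runtime=crio || true

    # A non-colliding name is accepted and can be deleted again afterwards.
    out/minikube-linux-amd64 start -p multinode-414194-m03 --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 delete -p multinode-414194-m03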

                                                
                                    
TestScheduledStopUnix (118.8s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-074680 --memory=2048 --driver=kvm2  --container-runtime=crio
E0416 00:42:03.218198   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/client.crt: no such file or directory
E0416 00:42:20.169369   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-074680 --memory=2048 --driver=kvm2  --container-runtime=crio: (47.096371528s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-074680 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-074680 -n scheduled-stop-074680
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-074680 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-074680 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-074680 -n scheduled-stop-074680
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-074680
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-074680 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0416 00:43:58.680339   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/functional-596616/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-074680
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-074680: exit status 7 (75.145246ms)

                                                
                                                
-- stdout --
	scheduled-stop-074680
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-074680 -n scheduled-stop-074680
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-074680 -n scheduled-stop-074680: exit status 7 (74.560831ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-074680" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-074680
--- PASS: TestScheduledStopUnix (118.80s)
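The scheduled-stop flow above condenses to the following sketch; the profile name comes from this run and the wait time is illustrative.

    # Schedule a stop five minutes out, then cancel it before it fires.
    out/minikube-linux-amd64 stop -p scheduled-stop-074680 --schedule 5m
    out/minikube-linux-amd64 stop -p scheduled-stop-074680 --cancel-scheduled

    # Schedule a short stop and give it time to take effect.
    out/minikube-linux-amd64 stop -p scheduled-stop-074680 --schedule 15s
    sleep 20

    # The host should now report Stopped; "status" exits 7 in that state (as above).
    out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-074680 || true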

                                                
                                    
TestRunningBinaryUpgrade (215.87s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2704414785 start -p running-upgrade-986638 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2704414785 start -p running-upgrade-986638 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m11.156343657s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-986638 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-986638 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m21.142630304s)
helpers_test.go:175: Cleaning up "running-upgrade-986638" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-986638
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-986638: (1.238466878s)
--- PASS: TestRunningBinaryUpgrade (215.87s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-869916 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-869916 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (91.787378ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-869916] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18647
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18647-7542/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18647-7542/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
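The usage error above is the expected outcome of combining the two flags; a minimal reproduction, followed by the remediation the error text itself suggests:

    # Rejected (MK_USAGE): --kubernetes-version cannot be combined with --no-kubernetes.
    out/minikube-linux-amd64 start -p NoKubernetes-869916 --no-kubernetes \
      --kubernetes-version=1.20 --driver=kvm2 --container-runtime=crio || true

    # If kubernetes-version was set as a global config value, clear it as advised.
    out/minikube-linux-amd64 config unset kubernetes-version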

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (101.62s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-869916 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-869916 --driver=kvm2  --container-runtime=crio: (1m41.349267584s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-869916 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (101.62s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (23.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-869916 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-869916 --no-kubernetes --driver=kvm2  --container-runtime=crio: (21.992547019s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-869916 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-869916 status -o json: exit status 2 (236.841761ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-869916","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-869916
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-869916: (1.032783269s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (23.26s)
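As the JSON status above shows, restarting an existing profile with --no-kubernetes keeps the host running while kubelet and the apiserver stay stopped; a short sketch of that check, using the profile from this run:

    # Restart the same profile without Kubernetes components.
    out/minikube-linux-amd64 start -p NoKubernetes-869916 --no-kubernetes --driver=kvm2 --container-runtime=crio

    # Expect Host "Running" with Kubelet/APIServer "Stopped"; status exits 2 in that state (as above).
    out/minikube-linux-amd64 -p NoKubernetes-869916 status -o json || true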

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.01s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.01s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (105.63s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.4120474325 start -p stopped-upgrade-485335 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.4120474325 start -p stopped-upgrade-485335 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (57.049651682s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.4120474325 -p stopped-upgrade-485335 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.4120474325 -p stopped-upgrade-485335 stop: (2.142711417s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-485335 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-485335 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (46.433327657s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (105.63s)
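The stopped-binary upgrade above reduces to three steps: create the cluster with the previously released binary, stop it, then start the same profile with the binary under test. The sketch below uses the exact binaries and profile from this run (the /tmp path is the v1.26.0 release the test downloaded).

    # 1. Create a cluster with the old release.
    /tmp/minikube-v1.26.0.4120474325 start -p stopped-upgrade-485335 --memory=2200 --vm-driver=kvm2 --container-runtime=crio

    # 2. Stop it with the same old binary.
    /tmp/minikube-v1.26.0.4120474325 -p stopped-upgrade-485335 stop

    # 3. Start the stopped profile with the binary under test, which upgrades it in place.
    out/minikube-linux-amd64 start -p stopped-upgrade-485335 --memory=2200 --driver=kvm2 --container-runtime=crio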

                                                
                                    
TestNoKubernetes/serial/Start (35.76s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-869916 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-869916 --no-kubernetes --driver=kvm2  --container-runtime=crio: (35.762006468s)
--- PASS: TestNoKubernetes/serial/Start (35.76s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-869916 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-869916 "sudo systemctl is-active --quiet service kubelet": exit status 1 (215.863247ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (11.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (4.941756846s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (6.177608378s)
--- PASS: TestNoKubernetes/serial/ProfileList (11.12s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.46s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-869916
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-869916: (1.459732869s)
--- PASS: TestNoKubernetes/serial/Stop (1.46s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (32.84s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-869916 --driver=kvm2  --container-runtime=crio
E0416 00:47:20.169281   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-869916 --driver=kvm2  --container-runtime=crio: (32.83529221s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (32.84s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-869916 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-869916 "sudo systemctl is-active --quiet service kubelet": exit status 1 (208.769518ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.93s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-485335
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.93s)

                                                
                                    
TestNetworkPlugins/group/false (3.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-381983 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-381983 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (121.070033ms)

                                                
                                                
-- stdout --
	* [false-381983] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18647
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18647-7542/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18647-7542/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0416 00:47:37.173900   53998 out.go:291] Setting OutFile to fd 1 ...
	I0416 00:47:37.174032   53998 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:47:37.174042   53998 out.go:304] Setting ErrFile to fd 2...
	I0416 00:47:37.174049   53998 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0416 00:47:37.174356   53998 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18647-7542/.minikube/bin
	I0416 00:47:37.175403   53998 out.go:298] Setting JSON to false
	I0416 00:47:37.176876   53998 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5401,"bootTime":1713223056,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1055-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0416 00:47:37.176969   53998 start.go:139] virtualization: kvm guest
	I0416 00:47:37.179768   53998 out.go:177] * [false-381983] minikube v1.33.0-beta.0 on Ubuntu 20.04 (kvm/amd64)
	I0416 00:47:37.181179   53998 out.go:177]   - MINIKUBE_LOCATION=18647
	I0416 00:47:37.182478   53998 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0416 00:47:37.181127   53998 notify.go:220] Checking for updates...
	I0416 00:47:37.183704   53998 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18647-7542/kubeconfig
	I0416 00:47:37.185019   53998 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18647-7542/.minikube
	I0416 00:47:37.186464   53998 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0416 00:47:37.187985   53998 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0416 00:47:37.189798   53998 config.go:182] Loaded profile config "force-systemd-env-787358": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.3
	I0416 00:47:37.189887   53998 config.go:182] Loaded profile config "kubernetes-upgrade-497059": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0416 00:47:37.189966   53998 config.go:182] Loaded profile config "running-upgrade-986638": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0416 00:47:37.190043   53998 driver.go:392] Setting default libvirt URI to qemu:///system
	I0416 00:47:37.227315   53998 out.go:177] * Using the kvm2 driver based on user configuration
	I0416 00:47:37.228875   53998 start.go:297] selected driver: kvm2
	I0416 00:47:37.228888   53998 start.go:901] validating driver "kvm2" against <nil>
	I0416 00:47:37.228900   53998 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0416 00:47:37.231234   53998 out.go:177] 
	W0416 00:47:37.232598   53998 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0416 00:47:37.233960   53998 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-381983 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-381983

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-381983

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-381983

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-381983

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-381983

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-381983

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-381983

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-381983

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-381983

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-381983

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381983"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381983"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381983"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-381983

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381983"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381983"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-381983" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-381983" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-381983" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-381983" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-381983" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-381983" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-381983" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-381983" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381983"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381983"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381983"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381983"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381983"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-381983" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-381983" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-381983" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381983"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381983"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381983"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381983"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381983"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-381983

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381983"

>>> host: docker daemon config:
* Profile "false-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381983"

>>> host: /etc/docker/daemon.json:
* Profile "false-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381983"

>>> host: docker system info:
* Profile "false-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381983"

>>> host: cri-docker daemon status:
* Profile "false-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381983"

>>> host: cri-docker daemon config:
* Profile "false-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381983"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381983"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381983"

>>> host: cri-dockerd version:
* Profile "false-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381983"

>>> host: containerd daemon status:
* Profile "false-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381983"

>>> host: containerd daemon config:
* Profile "false-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381983"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381983"

>>> host: /etc/containerd/config.toml:
* Profile "false-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381983"

>>> host: containerd config dump:
* Profile "false-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381983"

>>> host: crio daemon status:
* Profile "false-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381983"

>>> host: crio daemon config:
* Profile "false-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381983"

>>> host: /etc/crio:
* Profile "false-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381983"

>>> host: crio config:
* Profile "false-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381983"
----------------------- debugLogs end: false-381983 [took: 3.224000636s] --------------------------------
helpers_test.go:175: Cleaning up "false-381983" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-381983
--- PASS: TestNetworkPlugins/group/false (3.49s)

                                                
                                    
x
+
TestPause/serial/Start (87.02s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-214771 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-214771 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m27.021221329s)
--- PASS: TestPause/serial/Start (87.02s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (50.68s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-214771 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-214771 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (50.665842716s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (50.68s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (92.92s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-572602 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-572602 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-rc.2: (1m32.922497073s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (92.92s)
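Note: with --preload=false the cluster pulls its Kubernetes images at start time instead of loading them from the preload tarball. A manual spot-check that the images landed in CRI-O (not part of the test itself) would be:
    out/minikube-linux-amd64 ssh -p no-preload-572602 "sudo crictl images"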

                                                
                                    
x
+
TestPause/serial/Pause (0.71s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-214771 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.71s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.25s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-214771 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-214771 --output=json --layout=cluster: exit status 2 (244.844602ms)

                                                
                                                
-- stdout --
	{"Name":"pause-214771","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.0-beta.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-214771","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.25s)
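Note: the non-zero exit above simply reflects what the cluster-layout JSON shows, a paused apiserver and a stopped kubelet. A manual spot-check of the per-component states (assuming jq is available on the host) would be:
    out/minikube-linux-amd64 status -p pause-214771 --output=json --layout=cluster | jq '.Nodes[0].Components'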

                                                
                                    
x
+
TestPause/serial/Unpause (0.68s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-214771 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.68s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.85s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-214771 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.85s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (1.02s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-214771 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-214771 --alsologtostderr -v=5: (1.01743192s)
--- PASS: TestPause/serial/DeletePaused (1.02s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.26s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.26s)
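Note: this check relies on the deleted profile no longer appearing in the JSON listing. Assuming the usual valid/invalid grouping in minikube's profile list output, an equivalent manual check would be:
    out/minikube-linux-amd64 profile list --output json | jq -r '.valid[].Name'
which should no longer print pause-214771 after the delete.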

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (82.13s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-653942 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-653942 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3: (1m22.133497119s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (82.13s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-653942 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6af3a222-1adb-435d-97eb-965460f887c1] Pending
helpers_test.go:344: "busybox" [6af3a222-1adb-435d-97eb-965460f887c1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6af3a222-1adb-435d-97eb-965460f887c1] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004119178s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-653942 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.30s)
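Note: the readiness wait above (8m0s for pods matching "integration-test=busybox") can be reproduced by hand with kubectl wait, e.g.:
    kubectl --context default-k8s-diff-port-653942 wait --for=condition=ready pod -l integration-test=busybox --timeout=480s
before running the ulimit check inside the pod.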

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (10.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-572602 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [351a08e2-eafd-45cf-ab82-dd764d11a22e] Pending
helpers_test.go:344: "busybox" [351a08e2-eafd-45cf-ab82-dd764d11a22e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [351a08e2-eafd-45cf-ab82-dd764d11a22e] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.003469789s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-572602 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.30s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-653942 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-653942 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.04s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-572602 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-572602 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.04s)
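Note: the describe call above is how the harness confirms the --images/--registries overrides were applied. A more direct spot-check of the rendered image reference (the exact fake.domain/... composition is an assumption here, not asserted by this log) would be:
    kubectl --context no-preload-572602 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'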

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (59.56s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-012509 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-012509 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-rc.2: (59.556465896s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (59.56s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.07s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-012509 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-012509 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.066275443s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.07s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (11.32s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-012509 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-012509 --alsologtostderr -v=3: (11.316242456s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.32s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-012509 -n newest-cni-012509
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-012509 -n newest-cni-012509: exit status 7 (70.856728ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-012509 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (40.6s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-012509 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-rc.2
E0416 00:53:58.680576   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/functional-596616/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-012509 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-rc.2: (40.331443177s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-012509 -n newest-cni-012509
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (40.60s)
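Note: the second start reuses --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16. A manual spot-check that the CIDR actually reached kubeadm's ClusterConfiguration (assuming the standard kubeadm-config ConfigMap is present) would be:
    kubectl --context newest-cni-012509 -n kube-system get cm kubeadm-config -o yaml | grep podSubnet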

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-012509 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.35s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-012509 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-012509 -n newest-cni-012509
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-012509 -n newest-cni-012509: exit status 2 (234.737531ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-012509 -n newest-cni-012509
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-012509 -n newest-cni-012509: exit status 2 (238.099771ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-012509 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-012509 -n newest-cni-012509
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-012509 -n newest-cni-012509
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.35s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (102.94s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-617092 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-617092 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3: (1m42.943104257s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (102.94s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (684.9s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-653942 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-653942 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3: (11m24.630294312s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-653942 -n default-k8s-diff-port-653942
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (684.90s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (627s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-572602 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-572602 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.0-rc.2: (10m26.725894572s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-572602 -n no-preload-572602
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (627.00s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (10.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-617092 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9eb56ca7-e52a-440a-9e53-f73a26e7608e] Pending
helpers_test.go:344: "busybox" [9eb56ca7-e52a-440a-9e53-f73a26e7608e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9eb56ca7-e52a-440a-9e53-f73a26e7608e] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004243489s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-617092 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.26s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.96s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-617092 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-617092 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.96s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (2.55s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-800769 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-800769 --alsologtostderr -v=3: (2.548647882s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.55s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-800769 -n old-k8s-version-800769
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-800769 -n old-k8s-version-800769: exit status 7 (74.392222ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-800769 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (422.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-617092 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3
E0416 00:58:43.219212   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/client.crt: no such file or directory
E0416 00:58:58.679909   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/functional-596616/client.crt: no such file or directory
E0416 01:02:20.169387   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/addons-045739/client.crt: no such file or directory
E0416 01:03:58.679904   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/functional-596616/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-617092 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.3: (7m1.976989817s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-617092 -n embed-certs-617092
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (422.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (99.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-381983 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-381983 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m39.320182149s)
--- PASS: TestNetworkPlugins/group/auto/Start (99.32s)
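Note: the "auto" profile lets minikube pick the CNI for the crio runtime. To see which CNI configuration it actually dropped on the node (a manual spot-check, not part of the test), something like this should work:
    out/minikube-linux-amd64 ssh -p auto-381983 "sudo ls /etc/cni/net.d"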

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (70.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-381983 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-381983 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m10.979803433s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (70.98s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (115.67s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-381983 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-381983 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m55.665787717s)
--- PASS: TestNetworkPlugins/group/calico/Start (115.67s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-2z769" [3f03169a-8e5f-4dfc-a90c-1fab436ae9b5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005708249s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
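Note: the controller-pod wait above matches pods labelled app=kindnet in kube-system; the equivalent manual query is:
    kubectl --context kindnet-381983 -n kube-system get pods -l app=kindnet -o wide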

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-381983 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-381983 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-47szx" [8855ff0f-68dc-4bf9-b862-577ddb16a3c6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-47szx" [8855ff0f-68dc-4bf9-b862-577ddb16a3c6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.005538604s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-381983 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (12.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-381983 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-2z7q8" [867d8594-93c5-4d43-bdf0-dd87882c0859] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0416 01:22:00.419725   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/client.crt: no such file or directory
E0416 01:22:00.425022   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/client.crt: no such file or directory
E0416 01:22:00.435311   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/client.crt: no such file or directory
E0416 01:22:00.455597   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/client.crt: no such file or directory
E0416 01:22:00.495909   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/client.crt: no such file or directory
E0416 01:22:00.576251   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/client.crt: no such file or directory
E0416 01:22:00.736771   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/client.crt: no such file or directory
E0416 01:22:01.057834   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/client.crt: no such file or directory
E0416 01:22:01.698485   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/client.crt: no such file or directory
E0416 01:22:01.727483   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/functional-596616/client.crt: no such file or directory
E0416 01:22:02.979074   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-2z7q8" [867d8594-93c5-4d43-bdf0-dd87882c0859] Running
E0416 01:22:06.158267   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602/client.crt: no such file or directory
E0416 01:22:07.438664   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602/client.crt: no such file or directory
E0416 01:22:09.999789   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602/client.crt: no such file or directory
E0416 01:22:10.660547   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.006252747s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-381983 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-381983 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-381983 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-381983 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-381983 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-381983 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)
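Note: the Localhost check dials 127.0.0.1:8080 inside the netcat pod, while the HairPin check dials the pod's own Service name ("netcat"), so it only passes when the CNI handles hairpin traffic. Assuming the Service is named netcat, as the nc target suggests, its ClusterIP can be inspected with:
    kubectl --context auto-381983 get svc netcat -o jsonpath='{.spec.clusterIP}'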

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-381983 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E0416 01:22:25.360571   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-381983 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m25.998856174s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (86.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (80.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-381983 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-381983 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m20.900454587s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (80.90s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-rcbl5" [0dddf094-0c51-46bb-9ae4-e9f7b3f02969] Running
E0416 01:22:41.381876   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005480783s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-381983 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (10.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-381983 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-r98g8" [11efaf26-e6a6-4994-bd1b-cb5419832f50] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0416 01:22:45.841212   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-r98g8" [11efaf26-e6a6-4994-bd1b-cb5419832f50] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.128130327s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-381983 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-381983 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-381983 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (86.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-381983 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E0416 01:23:16.381616   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/client.crt: no such file or directory
E0416 01:23:16.386995   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/client.crt: no such file or directory
E0416 01:23:16.397246   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/client.crt: no such file or directory
E0416 01:23:16.417624   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/client.crt: no such file or directory
E0416 01:23:16.457947   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/client.crt: no such file or directory
E0416 01:23:16.538359   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/client.crt: no such file or directory
E0416 01:23:16.698751   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/client.crt: no such file or directory
E0416 01:23:17.018924   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/client.crt: no such file or directory
E0416 01:23:17.659890   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/client.crt: no such file or directory
E0416 01:23:18.940506   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/client.crt: no such file or directory
E0416 01:23:21.501199   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/client.crt: no such file or directory
E0416 01:23:22.342366   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/client.crt: no such file or directory
E0416 01:23:26.621853   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/client.crt: no such file or directory
E0416 01:23:26.801427   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602/client.crt: no such file or directory
E0416 01:23:36.862263   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-381983 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m26.656353143s)
--- PASS: TestNetworkPlugins/group/flannel/Start (86.66s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-381983 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-381983 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-wzpnl" [30e27869-e347-4966-a4e1-e503680d8aeb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-wzpnl" [30e27869-e347-4966-a4e1-e503680d8aeb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.00622795s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-381983 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-381983 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-jd9jc" [be80e7f9-e4de-480d-9c59-bf5c77b2a1b8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-jd9jc" [be80e7f9-e4de-480d-9c59-bf5c77b2a1b8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.004672749s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.36s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-617092 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (3.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-617092 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-617092 --alsologtostderr -v=1: (1.037080069s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-617092 -n embed-certs-617092
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-617092 -n embed-certs-617092: exit status 2 (289.219997ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-617092 -n embed-certs-617092
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-617092 -n embed-certs-617092: exit status 2 (289.710606ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-617092 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-617092 -n embed-certs-617092
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-617092 -n embed-certs-617092
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.11s)
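
The Pause check expects minikube pause to leave the API server reported as Paused and the kubelet as Stopped, with minikube status exiting with code 2 while paused (which the test treats as acceptable), and unpause to restore both. A minimal sketch of the same sequence against the embed-certs-617092 profile from the log:

    minikube pause -p embed-certs-617092
    minikube status -p embed-certs-617092 --format='{{.APIServer}}'   # "Paused", exit status 2
    minikube status -p embed-certs-617092 --format='{{.Kubelet}}'     # "Stopped", exit status 2
    minikube unpause -p embed-certs-617092
    minikube status -p embed-certs-617092 --format='{{.APIServer}}'   # expected to read "Running" again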

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (60.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-381983 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E0416 01:23:58.679961   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/functional-596616/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-381983 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m0.132409238s)
--- PASS: TestNetworkPlugins/group/bridge/Start (60.13s)
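
Each Start test boots a fresh profile with the CNI under test on the kvm2 driver and the crio runtime; the single-line invocation is in the log above. The same command in a readable multi-line form (binary path, profile name and flags taken verbatim from the log):

    out/minikube-linux-amd64 start -p bridge-381983 \
      --memory=3072 --alsologtostderr \
      --wait=true --wait-timeout=15m \
      --cni=bridge --driver=kvm2 --container-runtime=crio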

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-381983 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-381983 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-381983 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)
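
The DNS, Localhost and HairPin checks all run from inside the netcat pod: DNS resolves kubernetes.default via the cluster resolver, Localhost connects to port 8080 on the pod's own loopback, and HairPin connects to the hostname netcat, which resolves back to the pod's own Service (the hairpin path). The three probes reproduced by hand against the custom-flannel-381983 context from the log:

    kubectl --context custom-flannel-381983 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context custom-flannel-381983 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context custom-flannel-381983 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"   # hairpin: pod reaches itself via its Service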

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-381983 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-381983 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-381983 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-zb296" [d90cd060-faed-4229-bdb5-ffb1539e215a] Running
E0416 01:24:38.303585   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/old-k8s-version-800769/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00783931s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
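
The ControllerPod check simply confirms the flannel DaemonSet pod (label app=flannel, namespace kube-flannel) is Running before the connectivity tests start. A hand-run equivalent, assuming the flannel-381983 context from the log:

    kubectl --context flannel-381983 -n kube-flannel get pods -l app=flannel
    kubectl --context flannel-381983 -n kube-flannel wait pod -l app=flannel --for=condition=Ready --timeout=10m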

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-381983 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-381983 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-r8nk8" [6801805b-f7d2-40bf-8e2d-7a22a2339871] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0416 01:24:44.263362   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/default-k8s-diff-port-653942/client.crt: no such file or directory
E0416 01:24:48.721878   14897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/no-preload-572602/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-r8nk8" [6801805b-f7d2-40bf-8e2d-7a22a2339871] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004885803s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-381983 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-381983 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-381983 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-381983 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-381983 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-8tng5" [f54be150-e6f0-4a3e-9463-7ba87e2fb003] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-8tng5" [f54be150-e6f0-4a3e-9463-7ba87e2fb003] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004835749s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-381983 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-381983 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-381983 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                    

Test skip (39/327)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.29.3/cached-images 0
15 TestDownloadOnly/v1.29.3/binaries 0
16 TestDownloadOnly/v1.29.3/kubectl 0
23 TestDownloadOnly/v1.30.0-rc.2/cached-images 0
24 TestDownloadOnly/v1.30.0-rc.2/binaries 0
25 TestDownloadOnly/v1.30.0-rc.2/kubectl 0
29 TestDownloadOnlyKic 0
43 TestAddons/parallel/Olm 0
56 TestDockerFlags 0
59 TestDockerEnvContainerd 0
61 TestHyperKitDriverInstallOrUpdate 0
62 TestHyperkitDriverSkipUpgrade 0
113 TestFunctional/parallel/DockerEnv 0
114 TestFunctional/parallel/PodmanEnv 0
132 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
133 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
134 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
135 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
136 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
137 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
138 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
139 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0
162 TestGvisorAddon 0
184 TestImageBuild 0
211 TestKicCustomNetwork 0
212 TestKicExistingNetwork 0
213 TestKicCustomSubnet 0
214 TestKicStaticIP 0
246 TestChangeNoneUser 0
249 TestScheduledStopWindows 0
251 TestSkaffold 0
253 TestInsufficientStorage 0
257 TestMissingContainerUpgrade 0
275 TestStartStop/group/disable-driver-mounts 0.14
280 TestNetworkPlugins/group/kubenet 3.51
288 TestNetworkPlugins/group/cilium 3.75
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.3/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.3/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.3/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0-rc.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0-rc.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0-rc.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0-rc.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.0-rc.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only runs with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-988802" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-988802
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-381983 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-381983

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-381983

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-381983

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-381983

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-381983

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-381983

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-381983

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-381983

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-381983

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-381983

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381983"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381983"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381983"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-381983

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381983"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381983"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-381983" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-381983" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-381983" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-381983" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-381983" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-381983" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-381983" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-381983" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381983"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381983"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381983"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381983"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381983"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-381983" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-381983" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-381983" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381983"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381983"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381983"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381983"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381983"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18647-7542/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 16 Apr 2024 00:46:58 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0-beta.0
      name: cluster_info
    server: https://192.168.72.21:8443
  name: running-upgrade-986638
contexts:
- context:
    cluster: running-upgrade-986638
    extensions:
    - extension:
        last-update: Tue, 16 Apr 2024 00:46:58 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.0-beta.0
      name: context_info
    namespace: default
    user: running-upgrade-986638
  name: running-upgrade-986638
current-context: ""
kind: Config
preferences: {}
users:
- name: running-upgrade-986638
  user:
    client-certificate: /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/running-upgrade-986638/client.crt
    client-key: /home/jenkins/minikube-integration/18647-7542/.minikube/profiles/running-upgrade-986638/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-381983

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381983"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381983"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381983"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381983"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381983"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381983"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381983"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381983"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381983"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381983"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381983"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381983"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381983"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381983"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381983"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381983"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381983"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381983"

                                                
                                                
----------------------- debugLogs end: kubenet-381983 [took: 3.354232413s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-381983" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-381983
--- SKIP: TestNetworkPlugins/group/kubenet (3.51s)
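
The repeated "context was not found" and "Profile \"kubenet-381983\" not found" lines in the debugLogs dump above are expected: the test is skipped before any cluster is created, so every diagnostic command targets a context that was never written to the kubeconfig (the same pattern appears in the cilium dump below). An illustrative way to confirm which contexts and profiles actually exist on the host, using standard kubectl/minikube commands rather than anything from the suite:

    kubectl config get-contexts    # kubenet-381983 will not be listed
    minikube profile list          # shows the profiles minikube knows about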

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-381983 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-381983

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-381983

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-381983

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-381983

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-381983

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-381983

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-381983

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-381983

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-381983

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-381983

>>> host: /etc/nsswitch.conf:
* Profile "cilium-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381983"

>>> host: /etc/hosts:
* Profile "cilium-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381983"

>>> host: /etc/resolv.conf:
* Profile "cilium-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381983"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-381983

>>> host: crictl pods:
* Profile "cilium-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381983"

>>> host: crictl containers:
* Profile "cilium-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381983"

>>> k8s: describe netcat deployment:
error: context "cilium-381983" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-381983" does not exist

>>> k8s: netcat logs:
error: context "cilium-381983" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-381983" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-381983" does not exist

>>> k8s: coredns logs:
error: context "cilium-381983" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-381983" does not exist

>>> k8s: api server logs:
error: context "cilium-381983" does not exist

>>> host: /etc/cni:
* Profile "cilium-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381983"

>>> host: ip a s:
* Profile "cilium-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381983"

>>> host: ip r s:
* Profile "cilium-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381983"

>>> host: iptables-save:
* Profile "cilium-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381983"

>>> host: iptables table nat:
* Profile "cilium-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381983"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-381983

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-381983

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-381983" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-381983" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-381983

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-381983

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-381983" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-381983" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-381983" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-381983" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-381983" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381983"

>>> host: kubelet daemon config:
* Profile "cilium-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381983"

>>> k8s: kubelet logs:
* Profile "cilium-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381983"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381983"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381983"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-381983

>>> host: docker daemon status:
* Profile "cilium-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381983"

>>> host: docker daemon config:
* Profile "cilium-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381983"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381983"

>>> host: docker system info:
* Profile "cilium-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381983"

>>> host: cri-docker daemon status:
* Profile "cilium-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381983"

>>> host: cri-docker daemon config:
* Profile "cilium-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381983"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381983"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381983"

>>> host: cri-dockerd version:
* Profile "cilium-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381983"

>>> host: containerd daemon status:
* Profile "cilium-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381983"

>>> host: containerd daemon config:
* Profile "cilium-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381983"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381983"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381983"

>>> host: containerd config dump:
* Profile "cilium-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381983"

>>> host: crio daemon status:
* Profile "cilium-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381983"

>>> host: crio daemon config:
* Profile "cilium-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381983"

>>> host: /etc/crio:
* Profile "cilium-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381983"

>>> host: crio config:
* Profile "cilium-381983" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381983"

----------------------- debugLogs end: cilium-381983 [took: 3.611427912s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-381983" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-381983
--- SKIP: TestNetworkPlugins/group/cilium (3.75s)
